I’m not surprised by this:
The police have released video showing the final moments before an Uber self-driving car struck and killed 49-year-old Elaine Herzberg, who was crossing the street, on Sunday night in Tempe, Arizona.
The video includes views of the human safety driver and her view of the road, and it shows Herzberg emerging from the dark just seconds before the car hits her. And based on this evidence, it’s difficult to understand why Uber’s self-driving system—with its lidar laser sensor that sees in the dark—failed to avoid hitting Herzberg, who was slowly, steadily crossing the street, pushing a bicycle.
The video itself is here:
Tempe Police Vehicular Crimes Unit is actively investigating
the details of this incident that occurred on March 18th. We will provide updated information regarding the investigation once it is available. pic.twitter.com/2dVP72TziQ
— Tempe Police (@TempePolice) March 21, 2018
As I said after that self-driving Tesla crashed in 2016 having failed to differentiate between the sky and the side of a lorry:
The mistake people make is to assume every action in driving is one of simple measurement, and to conclude that computers are far better at measuring things than humans are in terms of speed and accuracy. However, driving is often about judgement as opposed to pure measurement (which is why it takes a while to become a good driver: judgement improves with experience), and much of this judgement relates to the interpretation of visual information. The recognition of objects by computers is still in its infancy, and nowhere near robust enough to deploy in any safety-critical system. Given the pace of development in other areas of computing, such as sound recognition in apps like Shazam, object recognition is seriously lagging behind, and I suspect for very good reasons: software, being made up of pre-programmed algorithms, simply isn’t very good at it. And even then, object recognition isn’t enough: a self-driving car would need not only to accurately acquire visual data but also to interpret it before initiating an action (or not). Computers are unable to do this for anything other than the most basic of pre-determined objects and scenarios, while the environment in which humans operate their cars is fiendishly complex.
This latest incident doesn’t do much to convince me otherwise. And here’s Longrider:
A human driver can look at a situation developing, reading body language and little subliminal cues, and react long before the situation has become dangerous, long before the AI has even suspected that a situation is developing.
This woman was unlawfully killed and someone, somewhere should be prosecuted for it. If it had been a human driver, that would be straightforward. As it is, we get into questions of who programmed the damned thing, how it was set up, and what failed. But the bottom line here is that someone was criminally negligent and should be doing jail time.
After the Tesla crash, I said:
What does amaze me though is that computers are being put into cars with the belief that they can do things they demonstrably can’t. A hefty lawsuit and tighter regulations can’t be too far away.
Both politicians and the public seem keen on self-driving cars being rolled out onto public roads while still dangerously unsafe, wooed by the idea that “technology” is safer than humans and that any errors – even those resulting in someone’s death – can be rectified with a few lines of code. Someone’s going to make a lot of money out of these things, and it won’t be the manufacturers.