Death by Self-Driving Car

I’m not surprised by this:

The police have released video showing the final moments before an Uber self-driving car struck and killed 49-year-old Elaine Herzberg, who was crossing the street, on Sunday night in Tempe, Arizona.

The video includes views of the human safety driver and her view of the road, and it shows Herzberg emerging from the dark just seconds before the car hits her. And based on this evidence, it’s difficult to understand why Uber’s self-driving system—with its lidar laser sensor that sees in the dark—failed to avoid hitting Herzberg, who was slowly, steadily crossing the street, pushing a bicycle.

The video itself is here:

As I said after that self-driving Tesla crashed in 2016, having failed to differentiate between the sky and the side of a lorry:

The mistake people make is to assume every action in driving is one of simple measurement, and conclude that computers are far better at measuring things than humans are in terms of speed and accuracy. However, driving is often about judgement as opposed to pure measurement (and this is why it takes a while to become a good driver: judgement improves with experience), and much of this judgement relates to the interpretation of visual information. The recognition of objects by computers is still only in its infancy, and nowhere near robust enough to deploy in any safety-critical system. Given the pace of development of other areas of computing, such as sound recognition in apps like Shazam, object recognition is seriously lagging behind, and I suspect for very good reasons: software, being made up of pre-programmed algorithms, simply isn’t very good at it. And even then object recognition isn’t enough: a self-driving car would need not only to accurately acquire visual data but also to interpret it before initiating an action (or not). Computers are unable to do this for anything other than the most basic of pre-determined objects and scenarios, while the environment in which humans operate their cars is fiendishly complex.

This latest incident doesn’t do much to convince me otherwise. And here’s Longrider:

A human driver can look at a situation developing and, reading body language and little subliminal cues, react long before the situation has become dangerous, long before the AI has even suspected that a situation is developing.

Quite. Also:

This woman was unlawfully killed and someone, somewhere should be prosecuted for it. If it was a human driver, then that would be straightforward. However, now we get into who programmed the damned thing, how it was set up, and what failed. But the bottom line here is that someone was criminally negligent and should be doing jail time.

After the Tesla crash, I said:

What does amaze me, though, is that computers are being put into cars in the belief that they can do things they demonstrably can’t. A hefty lawsuit and tighter regulations can’t be too far away.

Both politicians and the public seem keen on self-driving cars being rolled out onto public roads while still dangerously unsafe, wooed by the idea that “technology” is safer than humans and any errors – even those resulting in someone’s death – can be rectified with a few lines of code. Someone’s going to make a lot of money out of these things, and it won’t be the manufacturers.


53 thoughts on “Death by Self-Driving Car”

  1. @djc
    That’s precisely what I’m talking about. That humans construct their model of the world through limited data input. It works very well when doing what it was evolved for. Humans weren’t evolved to drive vehicles at speed.
    In flying one learns to be wary of aircraft closing on a constant bearing. Essentially, from the pilot’s point of view, the other aircraft is stationary in the sky & just gets apparently larger. The apparent size grows roughly as the inverse of the range: for most of the approach it barely changes, then right at the end it gets very big, very fast (see the numerical sketch after the comments). Until then, the eye doesn’t pick it out because the visual model discerns objects transiting across the field of vision, not coming straight at it.
    I was trying to visualise what a driver would see in that video. It’s likely the woman with the bike’s bearing relative to the driver would have been similar to the lights seen towards the horizon. There’s very little depth of field in human vision. The modelling deduces distance by a combination of changes of relative bearing & recognising apparent size compared with the learned real size of comparable objects. (You only have to look at some optical illusion illustrations to realise how imperfect that can be.)
    So, from a driver’s point of view, there’s little to distinguish the lights & the woman passing across the field of view. They’re treated as being at the same distance, until the woman’s relative bearing diverges rapidly at the last instant.

  2. You only have to look at some optical illusion illustrations to realise how imperfect that can be

    Are we back to your chickas again?

  3. Daniel Ream,

    “It will cope with all of those things as well or better than humans do[1], yes. It doesn’t have to reach some ridiculous standard of perfection, just be as good as human drivers are. Somewhere back in Tim’s archives is the bet I made. Put your money on the table or pipe down.”

    Level 5 within 7 years? I’m not going to find the bet. Can you repeat it with terms, odds, etc.?
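The constant-bearing effect described in comment #1 is easy to check numerically. Below is a minimal sketch in Python; the object width, closing speed, and starting distance are illustrative assumptions, not data from the incident. It shows how the apparent angular size of a head-on object stays tiny for most of the approach and then roughly doubles each second at the end.

```python
# Sketch of the "constant bearing" effect: an object approaching head-on
# stays on a constant bearing, so the only visual cue is its apparent
# (angular) size, which grows as roughly the inverse of distance.
# All numbers below are illustrative assumptions.

import math

OBJECT_WIDTH_M = 0.6        # assumed width of the approaching object (metres)
CLOSING_SPEED_MS = 17.0     # assumed closing speed, ~60 km/h (metres/second)
INITIAL_DISTANCE_M = 170.0  # assumed starting distance: ten seconds to impact

def angular_size_deg(distance_m: float, width_m: float = OBJECT_WIDTH_M) -> float:
    """Apparent angular width of the object, in degrees."""
    return math.degrees(2.0 * math.atan(width_m / (2.0 * distance_m)))

print(f"{'t (s)':>6} {'distance (m)':>13} {'apparent size (deg)':>20}")
for t in range(10):
    d = INITIAL_DISTANCE_M - CLOSING_SPEED_MS * t
    print(f"{t:>6} {d:>13.1f} {angular_size_deg(d):>20.3f}")
```

With these assumed numbers the object subtends about 0.2° ten seconds out, is still only about 1° with two seconds to go, then doubles to about 2° in the final second: the sudden looming that gives both a human eye and a vision system very little time to act.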
