Death by Self-Driving Car

I’m not surprised by this:

The police have released video showing the final moments before an Uber self-driving car struck and killed 49-year-old Elaine Herzberg, who was crossing the street, on Sunday night in Tempe, Arizona.

The video includes views of the human safety driver and her view of the road, and it shows Herzberg emerging from the dark just seconds before the car hits her. And based on this evidence, it’s difficult to understand why Uber’s self-driving system—with its lidar laser sensor that sees in the dark—failed to avoid hitting Herzberg, who was slowly, steadily crossing the street, pushing a bicycle.

The video itself is here:

As I said after that self-driving Tesla crashed in 2016, having failed to differentiate between the sky and the side of a lorry:

The mistake people make is to assume every action in driving is one of simple measurement, and to conclude that computers are far better at measuring things than humans are in terms of speed and accuracy. However, driving is often about judgement as opposed to pure measurement (which is why it takes a while to become a good driver: judgement improves with experience), and much of this judgement relates to the interpretation of visual information. The recognition of objects by computers is still only in its infancy, and nowhere near robust enough to deploy in any safety-critical system. Given the pace of development in other areas of computing, such as sound recognition in apps like Shazam, object recognition is seriously lagging behind, and I suspect for very good reasons: software, being made up of pre-programmed algorithms, simply isn’t very good at it. And even then object recognition isn’t enough; a self-driving car would need not only to accurately acquire visual data but also to interpret it before initiating an action (or not). Computers are unable to do this for anything other than the most basic of pre-determined objects and scenarios, while the environment in which humans operate their cars is fiendishly complex.

This latest incident doesn’t do much to convince me otherwise. And here’s Longrider:

A human driver can look at a situation developing, read body language and little subliminal cues, and react long before the situation has become dangerous, long before the AI has even suspected that a situation is developing.

Quite. Also:

This woman was unlawfully killed and someone, somewhere should be prosecuted for it. If it was a human driver, then that would be straightforward. However, now we get into who programmed the damned thing, how was it set up, what failed? But bottom line here is that someone was criminally negligent and should be doing jail time.

After the Tesla crash, I said:

What does amaze me though is that computers are being put into cars with the belief that they can do things they demonstrably can’t.  A hefty lawsuit and tighter regulations can’t be too far away.

Both politicians and the public seem keen on self-driving cars being rolled out onto public roads while still dangerously unsafe, wooed by the idea that “technology” is safer than humans and any errors – even those resulting in someone’s death – can be rectified with a few lines of code. Someone’s going to make a lot of money out of these things, and it won’t be the manufacturers.


53 thoughts on “Death by Self-Driving Car”

  1. I remember when people could beat computers at chess; now they can’t.
    It is not unreasonable to think that one day computers will be safer at driving cars than people are, and to work towards it.

  2. “Both politicians and the public seem keen on self-driving cars being rolled out onto public roads while still dangerously unsafe”

    They’re both dangerously unsafe in different ways. I think the real question is whether or not they are more dangerously unsafe than the general public, and my gut feeling is “probably not”.

  3. I remember when people could beat computers at chess, now they can’t.

    I think the better analogy is whether a computer can write a book.

  4. I think the real question is whether or not they are more dangerously unsafe than the general public, and my gut feeling is “probably not”.

    My gut feeling is 100% self-driving cars, exposed to the same conditions as ordinary cars, will be 1 or 2 orders of magnitude less safe.

  5. So we have peak energy, record cooling, record global carbon emissions and record Chinese coal combustion and then

    a fucking solar powered autonomous car crashes at night.

    Can life get any better than this?

  6. Visions of the future always seem to be statist visions. SciFi seems devoid of profit, of individual success or failure, of choice. Hollywood sees this as a positive.

    What does that have to do with electric cars? The lefties LOVE electric cars because they tick all their boxes. And the biggest box of all is that self-driving cars can only ever really work on a road system that is MUCH more tightly controlled than is currently the case. If unpredictable humans and programmable machines share roadspace there’ll be carnage. So… the answer is to regulate the fuck out of road usage. They don’t want you free to drive yourself. They want you sitting in a box you don’t control but they do. You must lose your freedom. For the sake of the children!

  7. But computers can do a great job of writing postmodernist academic papers…

    Heh. I was going to add that a computer set to the task of writing would likely come up with something indistinguishable from a Polly Toynbee article or Hollywood script.

  8. I drive the same model of car that crashed – the Volvo systems that are fitted as standard are excellent (they take a bit of getting used to) and slam on the anchors as soon as a hazard presents itself.

    Assuming you’ve picked it up yourself it’s a very odd sensation to feel the brake pedal falling away under your foot as the car beats you to it and everyone’s seatbelts tighten.

    Presumably the Uber systems have overridden or displaced these features, but having had a few people pull out on us at junctions, or pedestrians step out, I’d be surprised if a standard vehicle wouldn’t have picked up the woman as she crossed and slowed the car significantly, if not stopped it completely.
    I’d be very interested to know what the private opinion of all this is at Volvo.

    I eagerly await Tim’s follow up post urging me to attempt driving it into a bridge stanchion on the M6 before reporting back.

  9. Of equal interest is that the ‘human safety driver’, Rafaela, had a string of traffic violation convictions and one for a little thing called armed robbery!

    Oh, and ‘she’ used to be a ‘he’.

    Go figure.

  10. Thanks Recusant, life does and did actually get better based on your update, cheers.

  11. People don’t want us to do animal testing in labs, but sending a car out onto the road that has proven to kill people, no problem.

    Of course, it would be cool for this to work, but it’s killing people. It’s not like astronauts who volunteer to take on the risk.

  12. Chess is a game of decision-tree calculation. The surprising thing about the AI theorists is that they assumed AI beating Kasparov would be the pinnacle of AI game-playing, when looking back it should have been obvious that beating the best human chess player of the time (Carlsen is better now than Kasparov was then) was an impressive but ultimately achievable feat.

    AI winning Jeopardy! and Go is much more significant, but still a game.

    I am not a techno-pessimist, but I do tire of these techno-evangelists thinking ‘tech’ can do everything or will solve what are ultimately political, psychological and human-relations problems. It won’t. Things will be faster and easier in the future, but there will be limits and corresponding costs.

  13. Regardless of the quality of the safety driver, it seems inherently difficult for a human to keep engaged in a task where they have ordinarily no input. Either their mind will wander, or they become spectators and omit to act when needed.

  14. I drive the same model of car that crashed – the Volvo

    Wait, you drive a Volvo?!

    A Volvo?

    I knew you’d had kids, but when did the grandchildren arrive? Have you taken it over 20mph yet, or is that a bit dangerous? And when do you start your job as a geography teacher?

  15. Of equal interest is that the ‘human safety driver’, Rafaela, had a string of traffic violation convictions and one for a little thing called armed robbery!

    And the woman she hit was a homeless drug dealer!

  16. it seems inherently difficult for a human to keep engaged in a task where they have ordinarily no input. Either their mind will wander, or they become spectators and omit to act when needed.

    You appear to be describing my behaviour in meetings. Are you a colleague, by any chance?

  17. It’s all fun and games until the lawyers come out.

    That was my thought from the first time I heard the word autonomous car.

    Btw, ever seen a self driving subway or light rail train? Me neither, unless it’s an airport tram. And those are on rails, mostly away from humans.

  18. This woman was unlawfully killed and someone, somewhere should be prosecuted for it. If it was a human driver, then that would be straightforward. However, now we get into who programmed the damned thing, how was it set up, what failed? But bottom line here is that someone was criminally negligent and should be doing jail time.

    No, that’s not true at all. Just because someone gets hit by a car does not mean that there was criminal negligence – or any negligence at all – on the part of the driver. It all depends on the circumstances under which the accident occurred.

    Looking at the dashcam video of the accident, I don’t believe that a human driver would have been prosecuted for this accident. There’s also some good information about the somewhat confusing design of the road at the link.

  19. Btw, ever seen a self driving subway or light rail train?

    They have self-driving trains on the Paris metro, and they are very good. But you don’t see mainline or freight trains that are driverless. True, much of this is down to unions (see the London Underground) but I’ve always thought that before self-driving cars we’ll see self-driving trains, and we’ve not.

  20. I’m with JerryC on this (and I had a debate with Longrider on his post on exactly the same point).

    Watching that video I’d say there was a very low probability that a human driver would have reacted any sooner or less fatally. And also that a human driver would not be held responsible for hitting a pedestrian who walked out in front of moving traffic, at night, on a dual carriageway.

    Regardless of one’s views on the sense of driverless cars, I cannot say that this accident would have gone down any differently if the human had been fully in charge, and thus it’s not the computer’s ‘fault’.

  21. Watching that video I’d say there was a very low probability that a human driver would have reacted any sooner or less fatally.

    You’re probably right, yes.

  22. A closed and linear system like the DLR or Underground is relatively easy to automate. MUCH harder to automate where trains share lines with manually controlled traffic. The Circle line will be the last to automate as it shares so much track with other lines.
    And trains overall are going to be WAY easier to automate than cars: they sit on tracks! Duh. Imagine the fun and games driverless cars would have created during all the snow earlier in the month.

  23. Spengler – Asia Times:

    The Information, a consulting organization that showcases industry specialists, recently held a conference call on self-driving where one expert warned: “You have to remember that self-driving does not work, at least in… a highly functional, driverless robotaxi sense. It does not work. And there are many folks clamoring for architectures to get there. Again, think back to flight. Do you ever watch those YouTube videos where the guy pumping the umbrella and the dude with a big corkscrew and the person with the bird wings? I would think of it more that way. It is left to be seen which one of those architectures gets you to a useful outcome.”

    America simply doesn’t have the infrastructure to support autonomous vehicles, the expert added. China is another matter ….

    http://www.atimes.com/article/facebook-uber-end-great-american-tech-delusion/

  24. My gut feeling is 100% self-driving cars, exposed to the same conditions as ordinary cars, will be 1 or 2 orders of magnitude less safe.

    Less safe than what, or rather where?

    UK road deaths per 100k population per year – 2.9
    Italy road deaths per 100k population per year – 6.1
    US road deaths per 100k population per year – 10.6
    Malawi road deaths per 100k population per year – 35

    The technology will be viable sooner in the US, due to the natural propensity of Yanks to kill each other if left to their own devices.

    If only it were cost-effective to introduce driverless cars to Malawi now.

  25. the ‘human safety driver’, Rafaela, had a string of traffic violation convictions and one for a little thing called armed robbery! Oh, and ‘she’ used to be a ‘he’.

    And the woman she hit was a homeless drug dealer!

    WTAF?

  26. Ordinary cars killed a lot of people when they were new and just beginning to replace the horse & cart. Pedestrians and cyclists will adjust to the new reality: instead of walking out into the road and assuming that the human driver will stop, they simply won’t take the risk.

    There will be excess deaths while road users adjust to the new risk levels. But isn’t this one of those you-can’t-make-an-omelette-without-breaking-eggs situations? Human-driven cars are at a local maximum; we can’t improve matters without first going through the pain of making things a bit worse.

    It’s easy for me to say that as an engineer though. I’d be less keen if it was my family member who was killed by a self-driving car.

  27. There will be excess deaths while road users adjust to the new risk levels.

    We need a robot out the front with a red flag.

  28. Andrew M

    “But isn’t this one of those you-can’t-make-an-omelette-without-breaking-eggs situations?”

    When Lenin said that, GK Chesterton answered, “yes, but where’s the omelette?”.

    I think we might be saying the same in the future.

  29. Oh, boy, this again.

    1) Self-driving cars are not yet reliable enough to be beta tested on public roads.
    1a) They will eventually be, though, and will be safely usable on public roads within 5-7 years. I’ve staked money on this.
    2) They’ll be demonstrably safer than human drivers in the aggregate. When they fail, they’ll fail in different ways than human drivers do, and confirmation bias will cause people to continue shrieking about driverless cars while ignoring the actual evidence.
    3) Most subway systems are autonomously controlled; the “conductor” is there to mollify the unions and the bureaucratic boffins. Two university friends of mine work for a company that makes the relevant software, and their release conditions are mind-bogglingly strict, thanks to the potential consequences of software failure.
    4) Clarke’s First Law applies to anyone claiming that AI can’t do something humans can.

  30. “Watching that video I’d say there was a very low probability that a human driver would have reacted any sooner or less fatally.”

    That’s more about the camera, possibly the angle it was shooting from. Maybe a rather cheap dashcam.

    Some people have filmed the same place at the same time and it’s nothing like as dark as that video.

    https://www.youtube.com/watch?v=1XOVxSCG8u0

  31. Daniel Ream,

    “They will eventually be, though, and will be safely usable on public roads within 5-7 years. I’ve staked money on this.”

    What level are we talking? I can drink a bottle of Burgundy and not have to touch the wheel? It’ll work when I drive from a country restaurant in Malmesbury where I might face deer, horses, floods on the road?

    They’ll work in rain, fog and ice?

    It’ll safely overtake tractors on the road?

    It’ll detect if a child is running by the side of the road and slow down in case it suddenly runs into the road?

    It’ll detect cyclists and give them space?

    If an ambulance needs to get past, it will detect the sirens and pull in, or maybe accelerate to the next sensible point in the road?

    If a policeman flags the car for a police survey, it’ll pull in?

    It will cope with horses on the road and whether they are behaving erratically or not?

    Are you saying they’re going to do all of that (as a minimum, I haven’t thought of every edge case) in 5-7 years?

  32. “Bardon, you should check out the victim too, only in America….well maybe London too.”

    It’s as if autonomous cars are god’s way of cleaning up all the filth from the streets, I like them a lot.

  33. When a human being controls a car, there is someone to pay your insurance excess. If they leave the scene, there is someone responsible for the police to call (if you have a witness and a number plate). When a human driver kills someone, there is someone to blame.

    How does that work with a driverless car when the vehicles are more common? Because at that point you’re not having manufacturers rolling over and accepting every claim.

  34. @Notvelty

    They are struggling with the law on this for merchant shipping as well. The master’s autonomy was the key tenet of maritime law, and it depended on him being aboard.

    If the tanker is being piloted from an Ops centre in Geneva, what then? What if the pilot is a computer programme?

    Joshua Rosenberg gave a very good talk on this at LISW last year, and it’s a really fraught area. The technical challenges of autonomous vessels are a cakewalk compared to the legalities…

  35. Patrick above calls it right.

    The polipigs and bureaucrats are pushing the driverless shite because they want private transport made public.

    Want to go to a demo they don’t want you at? Or go somewhere that they don’t want you to see/go?

    “Sorry–the system is busy at this time. Please try again later”.

    Unfortunately for them they are a long way off a working system. And long may it remain so.

  36. Are you saying they’re going to do all of that (as a minimum, I haven’t thought of every edge case) in 5-7 years?

    It will cope with all of those things as well or better than humans do[1], yes. It doesn’t have to reach some ridiculous standard of perfection, just be as good as human drivers are. Somewhere back in Tim’s archives is the bet I made. Put your money on the table or pipe down.

    [1] I grew up in a town near a large Old Order Mennonite community, and if you think human beings are particularly good at handling these kinds of conditions, I have some obituaries to show you

  37. “A Volvo?
    I knew you’d had kids, but when did the grandchildren arrive? ”

    For a while I ran a Volvo 480 Turbo. Has to have been one of the quickest production cars I’ve ever owned. Great acceleration with hardly any turbo lag & the roadholding was superb. Even provoking it, it relentlessly stuck to the road like shit to a blanket. Its failing was very un-Volvo: Volvos generally have the durability of a housebrick, but this one was remarkably fragile. Odds & sods kept falling off it & for some unaccountable reason it lost its reverse gear.

    Human driver v AI?
    Both have their failings, but they’re different failings. Human sensory equipment really doesn’t work out much further than 20 feet. The eyes are too close together to judge distances, & anyone who’s been taught to fly will know how unreliable our judgement of acceleration & vector is. That’s why you’re told to ignore what your senses are telling you & use the bloody instruments. The brain gets round the failings in data input by a great deal of extrapolation, mostly gained from learning.
    A computer controlled car has the potential to be working with far better data input. Optics see much further into the infra-red than the human eye, so that scene needn’t look so dark. And the information input doesn’t need to be restricted to the passive. One would expect at least some sort of radar which adds relative velocity measurement through doppler.
    It’s worth watching that video from the perspective of not knowing what’s going to happen & what you’d actually see as a driver. Human vision produces fine detail in a cone only a few degrees wide, equivalent to looking through a knothole in a fence. Peripherally it degrades both in focus & colour information. As a driver you’d likely be prioritising trying to work out what’s coming up in that zone out beyond the reach of the headlights. All those lights towards the horizon, which will be expanding as they get closer, confuse the peripheral vision, which sees them all at the same distance as the woman with the bike. If her speed of transit across the visual field coincides with theirs, she effectively disappears unless that knothole of central vision passes over her.
    Still reckon the human scores better?
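
    A minimal sketch of that radar point, for the curious: an automotive radar infers closing speed from the two-way Doppler shift of the returned signal. The 77 GHz carrier is the usual automotive radar band; everything else here (function names, the example speed) is purely illustrative and has nothing to do with the Uber system.

    ```python
    # Two-way radar Doppler: a target closing at speed v shifts the return
    # by f_d = 2 * v * f0 / c. Solving for v gives the radial speed.

    C = 299_792_458.0  # speed of light, m/s

    def radial_speed(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
        """Recover radial (closing) speed in m/s from a measured Doppler shift."""
        return doppler_shift_hz * C / (2.0 * carrier_hz)

    # A pedestrian closing at 1 m/s produces a shift of roughly 514 Hz at 77 GHz;
    # feeding that shift back in recovers the speed.
    shift = 2 * 1.0 * 77e9 / C
    print(round(radial_speed(shift), 6))  # → 1.0
    ```

    The shift is hundreds of Hz even at walking pace, which is why radar picks up slow-moving targets in the dark that passive optics can miss.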

  38. Still reckon the human scores better?

    Yeah. Ask a computer to pick out a suitable woman for you from a lineup. 😉

  39. On past personal performance I’m becoming increasingly convinced of the superiority of electronic data processing…

    Unless I can get myself some sort of Brasilena security patch installed.

  40. Are you sure a human driver would not have run over that cyclist?

    It is called a ‘trade-off’.

    We trade off the many economic benefits of self-driving cars against occasional drawbacks, such as the fact that the instruments will not always get it right.

    We do the same with human-driven cars: we trade off the many economic benefits against the frequent drawbacks such as poor driving standards, carelessness, recklessness, and the resulting deaths and injuries.

    And then we have a bit of irony: ‘The mistake people make is to assume every action in driving is one of simple measurement, and conclude that computers are far better at measuring things than humans are…’

    But when trains crash because of driver error, going through a red signal, up goes the cry: if only there had been automatic computerised warning/braking, so it was not reliant on human input.

    Then we hear how many times plane crashes have been caused by pilots overriding or ignoring the instruments/computer.

    It’s not a perfect world. And it is very remiss to judge machine failure against human performance while assuming human performance is flawless.

  41. BiS “Human vision produces fine detail in a cone only a few degrees wide. Equivalent to looking through a knothole in a fence. Peripherally it degrades both in focus & colour information.”

    The eye may appear to be just like a camera, but the visual system is not: the eye scans, the optic nerve processes, and the brain constructs the scene. So what you ‘see’ has much greater dynamic range, depth of field, angle of view, and definition than that camera view; more a 3D mental model than a picture on a screen. Now all that eye-brain processing and interpretation can get things wrong, but not nearly as wrong as an ‘AI’ program.

  42. But when trains crash because of driver error, going through a red signal, up goes the cry if only there had been automatic computerised warning(braking so it was not reliant on Human input.

    Because a train stopping at a red signal is something easily automated, and people wonder why it hasn’t been. This is not the case for a self-driving car trying to interpret an entire, moving scene in front of it.

    Then we hear how many times plane crashes have been caused by pilots over riding or ignoring the instruments/computer.

    Why bring the French into this? We can’t make allowances for them, FFS.
