Uber vehicle reportedly saw but ignored woman it struck

The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically a function that determines which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that the failure of any one of them could have prevented the car from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death.

The only possibilities that made sense were:

  • A: Fault in the object recognition system, which may have failed to classify Herzberg and her bike as a pedestrian. This seems unlikely since bikes and people are among the things the system should be most competent at identifying.
  • B: Fault in the car’s higher logic, which makes decisions like which objects to pay attention to and what to do about them. No need to slow down for a parked bike at the side of the road, for instance, but one swerving into the lane in front of the car is cause for immediate action. This mimics human attention and decision making and prevents the car from panicking at every new object detected.

The sources cited by The Information say that Uber has determined B was the problem. Specifically, the system was tuned to ignore objects that it should have attended to; Herzberg appears to have been detected but dismissed as a false positive.
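To make that distinction concrete, here is a minimal, purely hypothetical sketch in Python of the kind of filter a decision layer might apply. None of the names, structures or thresholds come from Uber’s system; the point is simply that tuning such a filter too aggressively can cause a genuine detection to be thrown out along with sensor noise.

```python
# Illustrative sketch only -- not Uber's code. Every name and threshold here
# is hypothetical; it shows how a decision layer can discard a real detection
# as a false positive.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str          # e.g. "pedestrian", "bicycle", "unknown"
    confidence: float   # classifier confidence, 0.0 - 1.0
    frames_seen: int    # consecutive frames the track has persisted
    in_path: bool       # whether its predicted trajectory crosses the car's path

# Hypothetical tuning knobs. Set too aggressively, they filter out real
# obstacles along with noise.
MIN_CONFIDENCE = 0.80
MIN_FRAMES = 10

def should_react(obj: TrackedObject) -> bool:
    """Decide whether the planner treats this track as real and worth acting on."""
    if not obj.in_path:
        return False      # parked bike at the curb: safe to ignore
    if obj.confidence < MIN_CONFIDENCE:
        return False      # treated as a false positive
    if obj.frames_seen < MIN_FRAMES:
        return False      # too new to trust; also treated as noise
    return True           # brake and/or alert the safety driver

# A detection can be real and in the car's path, yet still be dropped
# because the classifier wavered between "bicycle" and "unknown":
crossing = TrackedObject(label="unknown", confidence=0.55, frames_seen=6, in_path=True)
print(should_react(crossing))  # False -- the planner never reacts
```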

This is not good.

Autonomous vehicles have superhuman senses: lidar that reaches hundreds of feet into pitch darkness, object recognition that tracks dozens of cars and pedestrians at once, and radar and other systems that watch the road unblinkingly.

But all these senses are subordinate, like our own, to a “brain” — a central processing unit that takes the information from the cameras and other sensors and combines it into a meaningful picture of the world around it, then makes decisions based on that picture in real time. This is by far the hardest part of the car to create, as Uber has shown.
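A rough sketch of that pipeline ordering, again with entirely made-up function names and values rather than anything from a real stack, shows why: every sensor’s output funnels through one planning step, and a flaw there undoes whatever the sensors got right.

```python
# Hypothetical sketch, not any real AV stack: a bare-bones control loop in
# which all sensors feed a single planning step -- the one place where the
# "what do I do about it" decision is made.

from typing import List

def read_sensors() -> dict:
    """Stand-in for lidar, cameras and radar returning raw frames."""
    return {"lidar": ..., "camera": ..., "radar": ...}

def perceive(frames: dict) -> List[dict]:
    """Stand-in for sensor fusion + object recognition producing tracked objects."""
    return [{"label": "pedestrian", "in_path": True, "confidence": 0.55}]

def plan(objects: List[dict]) -> str:
    """The 'brain': decides what, if anything, to do about perceived objects.
    If the logic here dismisses an object, it does not matter how well the
    sensors and classifiers upstream performed."""
    for obj in objects:
        if obj["in_path"] and obj["confidence"] >= 0.8:   # hypothetical cutoff
            return "brake"
    return "continue"

def control_loop() -> str:
    frames = read_sensors()
    objects = perceive(frames)
    return plan(objects)

print(control_loop())  # "continue" -- perception saw the pedestrian; planning ignored it
```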

It doesn’t matter how good your eyes are if your brain doesn’t know what it’s looking at or how to respond properly.

Update: Uber issued the following statement, but did not comment on the claims above:

We’re actively cooperating with the NTSB in their investigation. Out of respect for that process and the trust we’ve built with NTSB, we can’t comment on the specifics of the incident. In the meantime, we have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture. Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.

As this is a situation without precedent, the NTSB and other investigative reports may be particularly difficult to compile and slow to issue, and it’s not unusual for a company or individual to hold off on revealing too much information ahead of their publication.