Uber Accident Follow-up


-----
Uber's self-driving software detected the pedestrian in the fatal Arizona crash but did not react in time
The company's internal investigation as well as the federal investigation are ongoing.
By Johana Bhuiyan (@JMBooyah), May 7, 2018, 4:00pm EDT
Source; excerpts follow (drill down for hyperlinked references):

Quote

As part of its ongoing preliminary internal investigation, Uber has determined that its self-driving software did detect the pedestrian who was killed in a recent crash in Arizona but did not react immediately, according to The Information.

The software detected Elaine Herzberg, a 47-year-old woman who was hit by a semi-autonomous Volvo operated by Uber, as she was crossing the street but decided not to stop right away. That's in part because the technology was adjusted to be less reactive or slower to react to objects in its path that may be "false positives" — such as a plastic bag…

The first sentence in the above paragraph is rather awkward; it was the car that "decided not to stop right away", not the victim. Also, the car was in fully-autonomous mode; I don't think the presence of a safety driver makes the car "semi-autonomous".

Ms. Herzberg had plastic bags attached to the bicycle she was pushing; I wonder if those confused the software. Within the last year or so, I was driving a rental that was equipped with AEB (Automatic Emergency Braking) and a plastic grocery bag was blown into the path of the car. (Directly in front of it; probably ten feet away.) There was an audible warning and red HUD lights flashed onto the windshield but the car didn't stop (or even brake).

Quote

Both Uber and the National Transportation Safety Board launched investigations into the crash to determine whether the software was at fault. Both investigations are ongoing. But people who were briefed on some of the findings of the investigation told The Information that the software may have been the likely cause of the crash.

Self-driving companies are able to tune their technologies to be more or less cautious when it is maneuvering around obstacles on public roads. Typically when the tech — like the computer vision software that is detecting and understanding what objects are — is less sophisticated, companies will make it so the vehicle is overly cautious.

Those rides can be clumsy and filled with hard brakes as the car stops for everything that may be in its path. According to The Information, Uber decided to adjust the system so it didn't stop for potential false positives but because of that was unable to react immediately to Herzberg in spite of detecting her in its path…

(Another awkward sentence; who's editing this article?)

I'm reminded of the first fatal Tesla accident under Autopilot mode: Joshua Brown, May 2016 in Florida. (Autopilot is definitely "semi-autonomous".) Mr. Brown's car ran under the trailer of a semi that was crossing the road in front of him. Tesla said that the car's camera could not distinguish the white trailer from the bright sky behind it, and it was also reported that Tesla had aimed its system's radar downward to avoid false positives from things like overhead highway signage.

I can understand, from a driver's or passenger's perspective, how sudden braking for "no good reason" would be frustrating and discomforting. Yet, as these technologies are developed and tweaked, wouldn't it be better to err on the side of caution? If the rule of thumb is to adjust sensitivity downwards until somebody dies, that's not acceptable.
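
To put that tradeoff in concrete terms, here's a toy sketch I put together (it has nothing to do with Uber's actual software; the threshold, the confidence numbers, and the should_brake function are all invented for illustration): raise the confidence bar high enough to ignore plastic bags and you also delay the reaction to a real pedestrian.

# Toy illustration only: a single "brake if confidence exceeds a threshold" rule.
# Real perception/planning stacks are far more complicated; all numbers are made up.

def should_brake(detection_confidence: float, threshold: float) -> bool:
    """Brake when the system is at least `threshold` confident the object is real."""
    return detection_confidence >= threshold

# Hypothetical confidence scores over successive sensor frames as an object
# (say, a pedestrian pushing a bicycle with bags on it) comes into view.
frames = [0.35, 0.55, 0.70, 0.85, 0.95]

for threshold in (0.4, 0.9):  # "cautious" setting vs. "de-tuned" setting
    first = next((i for i, c in enumerate(frames) if should_brake(c, threshold)), None)
    print(f"threshold={threshold}: first braking frame = {first}")

# The low threshold brakes on frame 1 (and would also brake for bags and shadows);
# the high threshold suppresses those false positives but doesn't brake until frame 4,
# and at road speed those extra frames are stopping distance you don't get back.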

This next bit was an eye-opener for me:

Quote

… Herzberg's death has ushered in an important debate about Uber's safety protocols as well as a broader debate about the safety of testing semi-autonomous technology on public roads. For example, companies typically have two safety drivers — people trained to take back control of the car — until they are completely confident in the capability of the tech. However, Uber only had one vehicle operator.

That's in spite of the self-driving technology's slow progress relative to that of other companies, like Waymo. As of February 2017, the company's vehicle operators had to take back control of the car an average of once every mile, Recode first reported. As of March of 2018, the company was still struggling to meet its goal of driving an average of 13 miles without a driver having to take back control, according to the New York Times.

Alphabet's self-driving company, Waymo, had a rate of 5,600 miles per intervention in California. (At the time, Uber pointed out this is not the only metric by which to measure self-driving progress.)…

I hadn't thought about comparing "intervention rates" between competing developers, and I had believed that there was some kind of minimum safety threshold before companies would be allowed to beta-test their systems on public roads. Color me naïve.
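
Those rates, by the way, are just miles driven divided by the number of times the safety driver had to take over. To see how far apart the companies were, here's a quick back-of-the-envelope script using the figures cited above (the 100-mile trip length is arbitrary, and as Uber noted, this isn't the only metric that matters):

# Back-of-the-envelope: expected takeovers on a 100-mile trip at each reported rate.
# Rates are the rough figures cited in the article, not an official dataset.

rates = {                       # miles driven per safety-driver intervention
    "Uber (Feb 2017)": 1,
    "Uber goal (Mar 2018)": 13,
    "Waymo (California)": 5_600,
}

trip_miles = 100
for company, miles_per_takeover in rates.items():
    expected = trip_miles / miles_per_takeover
    print(f"{company}: ~{expected:.2f} expected takeovers over {trip_miles} miles")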

1 Comment On This Entry

There's no perfect answer. "I can understand, from a driver's or passenger's perspective, how sudden braking for 'no good reason' would be frustrating and discomforting." Sudden unexpected braking is dangerous because it greatly increases the chance of being rear-ended. Yes, the person behind you _should_ have a large enough following distance to be safe even if you have to stop suddenly and unexpectedly, but I know and you know that isn't always the case -- I don't want to have a semi with an inattentive driver drive over me because my car slammed on the brakes for a shadow or a blowing McDonald's cup.