Redefining safety for self-driving cars
November 28, 2017 by Srikanth Saripalli, The Conversation
© Phys.org 2003 - 2017, Science X network
Source; excerpts follow (drill down for hyperlinked references):

The editorialist is a self-described autonomous systems researcher.

Quote

In early November, a self-driving shuttle and a delivery truck collided in Las Vegas… [T]he self-driving shuttle noticed a truck up ahead was backing up, and stopped and waited for it to get out of the shuttle's way. But the human truck driver didn't see the shuttle, and kept backing up. As the truck got closer, the shuttle didn't move – forward or back – so the truck grazed the shuttle's front bumper.

As a researcher working on autonomous systems for the past decade, I find that this event raises a number of questions: Why didn't the shuttle honk, or back up to avoid the approaching truck? Was stopping and not moving the safest procedure? If self-driving cars are to make the roads safer, the bigger question is: What should these vehicles do to reduce mishaps? In my lab, we are developing self-driving cars and shuttles. We'd like to solve the underlying safety challenge: Even when autonomous vehicles are doing everything they're supposed to, the drivers of nearby cars and trucks are still flawed, error-prone humans.

There are two main causes for crashes involving autonomous vehicles. The first source of problems is when the sensors don't detect what's happening around the vehicle. Each sensor has its quirks: GPS works only with a clear view of the sky; cameras work with enough light; lidar can't work in fog; and radar is not particularly accurate. There may not be another sensor with different capabilities to take over. It's not clear what the ideal set of sensors is for an autonomous vehicle – and, with both cost and computing power as limiting factors, the solution can't be just adding more and more.

The second major problem happens when the vehicle encounters a situation that the people who wrote its software didn't plan for – like having a truck driver not see the shuttle and back up into it. Just like human drivers, self-driving systems have to make hundreds of decisions every second, adjusting for new information coming in from the environment. When a self-driving car experiences something it's not programmed to handle, it typically stops or pulls over to the roadside and waits for the situation to change. The shuttle in Las Vegas was presumably waiting for the truck to get out of the way before proceeding – but the truck kept getting closer. The shuttle may not have been programmed to honk or back up in situations like that – or may not have had room to back up.

The challenge for designers and programmers is combining the information from all the sensors to create an accurate representation – a computerized model – of the space around the vehicle. Then the software can interpret the representation to help the vehicle navigate and interact with whatever might be happening nearby. If the system's perception isn't good enough, the vehicle can't make a good decision. The main cause of the fatal Tesla crash was that the car's sensors couldn't tell the difference between the bright sky and a large white truck crossing in front of the car.

… If autonomous vehicles are to fulfill humans' expectations of reducing crashes, it won't be enough for them to drive safely. They must also be the ultimate defensive driver, ready to react when others nearby drive unsafely.

… Before autonomous vehicles can really hit the road, they need to be programmed with instructions about how to behave when other vehicles do something out of the ordinary. Testers need to consider other vehicles as adversaries, and develop plans for extreme situations. For instance, what should a car do if a truck is driving in the wrong direction? At the moment, self-driving cars might try to change lanes, but could end up stopping dead and waiting for the situation to improve. Of course, no human driver would do this: A person would take evasive action, even if it meant breaking a rule of the road, like switching lanes without signaling, driving onto the shoulder or even speeding up to avoid a crash.

Self-driving cars must be taught to understand not only what the surroundings are but the context: A car approaching from the front is not a danger if it's in the other lane, but if it's in the car's own lane circumstances are entirely different. Car designers should test vehicles based on how well they perform difficult tasks, like parking in a crowded lot or changing lanes in a work zone. This may sound a lot like giving a human a driving test – and that's exactly what it should be, if self-driving cars and people are to coexist safely on the roads.

Read full editorial
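
Saripalli's points about fallback behavior and context are easier to see with a concrete toy example. What follows is a minimal sketch in Python of the kind of decision logic he's describing; the names, numbers, and thresholds are all hypothetical, and this is nothing like the shuttle's actual software. It just shows how "stop and wait" can be demoted to a last resort when another vehicle keeps closing in on the shuttle's own lane, as in the Las Vegas incident.

Code

# A minimal, purely illustrative sketch (NOT the shuttle's real software) of
# context-aware fallback logic: treat a vehicle closing in on our own lane as
# a threat, and prefer honking or backing up over simply stopping and waiting.
# Every name and threshold below is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class TrackedVehicle:
    lane_offset_m: float      # lateral offset from our lane center (meters)
    gap_m: float              # longitudinal gap to our bumper (meters)
    closing_speed_mps: float  # positive if the gap is shrinking (m/s)

LANE_HALF_WIDTH_M = 1.8   # rough half-width of a typical lane
MIN_SAFE_GAP_M = 2.0      # stop-and-wait is acceptable above this gap

def choose_action(other: TrackedVehicle, rear_clear_m: float) -> str:
    """Pick a response using context (which lane, closing or not), not just distance."""
    in_our_lane = abs(other.lane_offset_m) < LANE_HALF_WIDTH_M

    # A vehicle in the other lane, or one that isn't closing, is not a threat.
    if not in_our_lane or other.closing_speed_mps <= 0:
        return "proceed"

    # Threat in our own lane: waiting is only safe while the gap stays large.
    if other.gap_m > MIN_SAFE_GAP_M:
        return "stop_and_wait"

    # The Las Vegas case: the gap keeps shrinking. Honk, and back up if there
    # is room behind us, instead of sitting still until we get grazed.
    if rear_clear_m > MIN_SAFE_GAP_M:
        return "honk_and_back_up"
    return "honk_and_hold"

# Example: a truck backing toward us in our lane, 1.5 m away, 6 m clear behind us.
truck = TrackedVehicle(lane_offset_m=0.3, gap_m=1.5, closing_speed_mps=0.5)
print(choose_action(truck, rear_clear_m=6.0))  # -> honk_and_back_up

Fed the Las Vegas scenario, this toy logic honks and backs up rather than sitting still, which is exactly the kind of defensive behavior the editorial says designers need to test for.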


I've written a couple of times about the fatal Tesla crash in Florida, and I'm surprised and disappointed that the author chose it as a reference herein. That Tesla was NOT an autonomous vehicle, and "the main cause" of the accident was NOT "the car's sensors couldn't tell the difference between the bright sky and a large white truck…" After multiple investigations, the most reasonable "main cause" was the driver's inattention.

The author surely knows the difference between a truly autonomous vehicle and one with driver-assist technology; yet he makes the error of conflation early:

Quote

It's not the first collision involving a self-driving vehicle. Other crashes have involved Ubers in Arizona, a Tesla in "autopilot" mode in Florida and several others in California. But in nearly every case, it was human error, not the self-driving car, that caused the problem.

Well then, why did you blame "sensors" for the Tesla crash later in your editorial? This seems inconsistent to me. And again, in a paragraph I cited above:

Quote

… If autonomous vehicles are to fulfill humans' expectations of reducing crashes, it won't be enough for them to drive safely. They must also be the ultimate defensive driver, ready to react when others nearby drive unsafely.

Perhaps this is my bias, but I infer that the "others" in the above paragraph are human drivers. (Remember, this missive opened by describing a human truck driver backing into an autonomous shuttle.)

I think Saripalli makes some excellent points in this piece; I largely agree, and I recognize the challenges faced by developers of automotive autonomous systems. My point is (and has been): If there's a DRIVER in the car, then it's NOT autonomous.

2 Comments On This Entry

In order for autonomous vehicles to "improve" safety, there would also have to be a concurrent sea change in the driving habits of all the 'real' drivers out there.

POP QUIZ: The posted speed limit on a certain interstate through most of a major southern city is 55 MPH. The norm is actually 70-80 MPH most of the time. In that circumstance, what's the most dangerous car on the road?

ANSWER: The car staying within the speed limit.

Welcome to Atlanta. Or Miami. Or Knoxville TN.

Problem is, an autonomous vehicle must necessarily be programmed to stay within the law. Put one of these in Atlanta and, well, it would create such a CARtastrophe of pile-ups that it would probably shut down I-75 all the way from the FL border to the TN border.

Do you remember the '70s, when a certain slot car manufacturer (Tyco? Mattel?) upped the ante by introducing a 'Jam Car', a car deliberately going slower so as to create an obstacle to swerve around? THAT's what an autonomous vehicle would be in Atlanta or Miami.

And don't even get me started on stop signs and yellow lights.

YEAH, I know the rules: Yellow means slow down, prepare to stop. Go ahead, try that in Atlanta. Afterwards, as you're recovering in the hospital after being rear-ended, I can recommend a good physical therapist for your recovery and a good new car dealership for your replacement vehicle.

Wanna convince me that autonomous vehicles are "safe"??? I'll throw down the gauntlet here: Put one in (your choice) Talladega / Daytona / Bristol and, even without winning, have it hold its own without crashing. Because THAT's what real-life Atlanta traffic is like on I-285, with every pickup and BMW driver apparently believing they're the second coming of Dale Earnhardt.

Go ahead, put an autonomous vehicle into THAT. I'll bid $0.93/lb for its scrap metal value.
What’s not being said is that the ultimate, long-term goal of autonomous vehicles MUST be universal usage. Anyone still driving would have to become algorithm experts on the behavioral profiles of computer-driven vehicles. For those of us still inclined to take responsibility for our own actions, the randomness and unpredictability of other drivers’ actions is “business as usual”; it’s part of the experience. If you don’t want to actually DRIVE the car, then get someone else to do it for you or take public transportation.
