NASA explains why self-driving cars may not be in your future

[ This is a shortened, reworded summary of the article below.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation” (Springer, 2015) and “Crunch! Whole Grain Artisan Chips and Crackers” ]

Pavlus, John. July 18, 2016. What NASA Could Teach Tesla about Autopilot’s Limits. Scientific American.

Decades of research have warned about the limits of human attention in automated cockpits

After a Tesla Model S in Autopilot mode crashed into a truck and killed its driver, the safety of self-driving cars has been called into question: the Autopilot system didn’t see the truck coming, the driver didn’t notice it either, and so neither applied the brakes.

Who knows these dangers better than NASA, which has studied cockpit automation for decades (e.g., in cars, the space shuttle, and airplanes)?  Researchers describe how connected a person is to a decision-making process as being “in the loop”: when you drive a car yourself, you Observe, Orient, Decide, and Act (the OODA loop).  But if the car is on autopilot and you can still intervene, to brake for example, you are only “ON the loop”.
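To make the distinction concrete, here is a minimal Python sketch of the OODA loop and the “in the loop” vs. “on the loop” roles; the enum and function names are my own illustration, not terminology from NASA or the article.

```python
from enum import Enum, auto

# The four OODA steps a driver performs when fully "in the loop".
OODA_STEPS = ["Observe", "Orient", "Decide", "Act"]


class Involvement(Enum):
    """How connected the person is to the decision-making loop (illustrative names)."""
    IN_THE_LOOP = auto()      # the driver performs every OODA step (manual driving)
    ON_THE_LOOP = auto()      # automation runs the loop; the driver monitors and may intervene
    OUT_OF_THE_LOOP = auto()  # the driver cannot intervene in time (e.g., reading in the back seat)


def who_performs(step: str, involvement: Involvement) -> str:
    """Return who carries out a given OODA step under a given level of involvement."""
    if involvement is Involvement.IN_THE_LOOP:
        return "driver"
    if involvement is Involvement.ON_THE_LOOP:
        # Automation observes, orients, decides, and acts, but the driver is
        # expected to jump back in (brake, steer) if the automation misjudges.
        return "automation, with the driver ready to take over"
    return "automation only"


if __name__ == "__main__":
    for step in OODA_STEPS:
        print(step, "->", who_performs(step, Involvement.ON_THE_LOOP))
```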

Airplanes fly largely automated, with pilots observing.  But this is very different from a car: if something goes wrong, the pilot, eight miles up in the air, has many minutes to react.

But in a car, you have just ONE SECOND.  That requires faster reflexes than a test pilot’s; there’s almost no margin for error.  This means you might as well be driving manually, since you still have to pay full attention while the car is on autopilot rather than sitting in the back seat reading a book.
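A quick back-of-the-envelope calculation (the speed figure is my own illustration, not from the article) shows why one second leaves so little margin:

```python
# Illustrative numbers (mine, not the article's): how far a car travels during
# a one-second reaction window at highway speed, versus the minutes a pilot
# has when something goes wrong eight miles up.
highway_speed_mph = 65
feet_per_second = highway_speed_mph * 5280 / 3600   # ~95 ft/s
reaction_window_s = 1.0

print(f"At {highway_speed_mph} mph the car covers "
      f"{feet_per_second * reaction_window_s:.0f} feet in {reaction_window_s:.0f} second.")
```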

Tesla tries to get around this by having Autopilot check that the driver’s hands are on the wheel and triggering visual and audible alerts if they are not.
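As a rough sketch only, assuming a hands-on-wheel sensor and made-up timing thresholds (the article does not describe Tesla’s actual logic or values), the kind of escalating-alert loop described might look like this:

```python
import time


def monitor_hands_on_wheel(hands_detected, check_interval_s=1.0,
                           warn_after_s=5.0, alert_after_s=10.0):
    """Escalating-alert sketch: `hands_detected` is a callable that returns True
    when the wheel sensor sees the driver's hands. All thresholds are invented
    for illustration; they are not Tesla's actual values."""
    hands_off_since = None
    while True:
        if hands_detected():
            hands_off_since = None                      # reset once hands return
        else:
            now = time.monotonic()
            if hands_off_since is None:
                hands_off_since = now
            elapsed = now - hands_off_since
            if elapsed >= alert_after_s:
                print("AUDIBLE ALERT: take the wheel now")
            elif elapsed >= warn_after_s:
                print("Visual warning: place hands on the wheel")
        time.sleep(check_interval_s)
```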

But NASA has found this doesn’t work, because the better the autopilot is, the less attention the driver pays to what’s going on.  It is tiring, and boring, to monitor a process that performs well for long stretches; the effect was named the “vigilance decrement” as far back as 1948, when experiments showed that vigilance drops off after just 15 minutes.

So the better the system, the more likely we are to stop paying attention.  But no one would want to buy a self-driving car they may as well be driving themselves.  The whole point is that the dangerous things we already do, like changing the radio, eating, and talking on the phone, would become less dangerous in autopilot mode.

These findings expose a contradiction in systems like Tesla’s Autopilot. The better they work, the more they may encourage us to zone out—but in order to ensure their safe operation they require continuous attention. Even if Joshua Brown was not watching Harry Potter behind the wheel, his own psychology may still have conspired against him.

Tesla’s plan assumes that automation advances will eventually get around this problem.

By the way, the National Highway Traffic Safety Administration (NHTSA) already has a four-level definition of automation (a rough code sketch of the taxonomy follows the list).

  • Level 1: “invisible” driver assistance (e.g., antilock brakes with electronic stability control).
  • Level 2: cars that combine two or more Level 1 systems (e.g., cruise control with lane centering).
  • Level 3: “Limited Self-Driving Automation,” in cars like the Model S, where “the driver is expected to be available for occasional control but with sufficiently comfortable transition time.”
  • Level 4: full self-driving automation.
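Here is that taxonomy written as a simple Python enumeration; the member names are my paraphrase of the list above, not official NHTSA identifiers.

```python
from enum import IntEnum


class NHTSALevel(IntEnum):
    """NHTSA automation levels as summarized above (names are my paraphrase)."""
    DRIVER_ASSISTANCE = 1     # "invisible" aids such as ABS with stability control
    COMBINED_FUNCTIONS = 2    # two or more Level 1 systems working together
    LIMITED_SELF_DRIVING = 3  # car drives itself; driver available for occasional control
    FULL_SELF_DRIVING = 4     # full self-driving automation
```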

NASA warns not only that partial automation is inherently unsafe, but also that it is dangerous to assume Level 4, full self-driving automation, is a logical extension of Level 3 (other car makers, such as Google and Ford, appear to be trying to reach Level 4).

Level 3 is probably unsuitable for cars because the one-second reaction window is simply too short, and Level 4, based on NASA’s experience, is also unlikely.
