Why self-driving cars may not be in your future

Preface. Below are excerpts from several articles about why completely automated vehicles are unlikely. And heaven forbid they ever are: researchers have found that people would drive 76% more miles, stop using bicycles and mass transit, waste a considerable amount of energy, and increase congestion.

In December 2021 I heard Ralph Nader interviewed on Science Friday about car safety. Asked about self-driving, he laughed and called it a fantasy: it requires very detailed, up-to-date filming of all streets, the software can be thrown off when roads change or when the white and yellow lane lines aren’t clearly visible, and maintaining highways to the standard the software needs would cost tens of billions of dollars. California appears to be aware of this and plans to improve 395,000 miles of highway striping (Snibbe 2018).

Self-driving cars in the news:

Metz C (2021) The Costly Pursuit of Self-Driving Cars Continues On. And On. And On. New York Times. Seven years ago Waymo discovered that spring blossoms made its self-driving cars get twitchy on the brakes. So did soap bubbles. And road flares. Matching the competence of human drivers was elusive. The cluttered roads of America, it turned out, were a daunting place for a robot. The wizards of Silicon Valley said people would be commuting to work in self-driving cars by now. Instead, there have been court fights, injuries and deaths, and tens of billions of dollars spent on a frustratingly fickle technology that some researchers say is still years from becoming the industry’s next big thing. Only the deepest-pocketed outfits like Waymo, a subsidiary of Google’s parent company, Alphabet; auto giants; and a handful of start-ups are managing to stay in the game.


***

Quain, J. R. September 26, 2019. Autonomous Cars Are Still Learning to See. New York Times.

Consensus is growing among designers that self-driving cars just aren’t perceptive enough to make them sufficiently safe.

The problem is that cars can’t begin to figure out what’s around them — separating toddlers from traffic cones, for example — if they can’t see the objects in the first place.  Autonomous cars still can’t see well enough to safely maneuver in heavy traffic or see far enough ahead to handle highway conditions in any kind of weather.

Until now, the standard model for autonomous cars has used some combination of four kinds of sensors — video cameras, radar, ultrasonic sensors and lidar. It’s the approach used by all the leading developers in driverless tech, from Aptiv to Waymo. However, as dozens of companies run trials in states like California, deployment dates have drifted, and it has become increasingly obvious that the standard model may not be enough.

Video cameras can be foiled by glare. Standard radar can judge the relative speed of objects but has Mr. Magoo-like vision. Ultrasonic sensors can sense only nearby objects — and not very clearly. Lidar (formally, light detection and ranging), while able to create 3-D images of people and street signs, has distance limitations and can be stymied in heavy rain. And even the most sophisticated artificial intelligence software can’t help if it doesn’t have the perceptual data to begin with.
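
To illustrate why missing “perceptual data” is the bottleneck, here is a minimal sketch of requiring agreement between sensor types before treating an object as real. The sensor names, confidence values, and the two-sensor rule are illustrative assumptions, not any company’s actual perception stack:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "radar", "ultrasonic", or "lidar"
    object_id: str     # e.g. "pedestrian-3"
    confidence: float

def confirmed_objects(detections, min_sensors=2, min_confidence=0.6):
    """Treat an object as real only if at least `min_sensors` different sensor
    types report it with reasonable confidence. If glare blinds the camera and
    rain degrades the lidar, the object is never confirmed, which is the
    perception gap the article describes."""
    seen = {}
    for d in detections:
        if d.confidence >= min_confidence:
            seen.setdefault(d.object_id, set()).add(d.sensor)
    return [obj for obj, sensors in seen.items() if len(sensors) >= min_sensors]

# Example frame: only the radar sees the pedestrian clearly, so nothing is confirmed.
frame = [
    Detection("camera", "pedestrian-3", 0.3),  # washed out by glare
    Detection("radar", "pedestrian-3", 0.9),
    Detection("lidar", "pedestrian-3", 0.4),   # degraded by heavy rain
]
print(confirmed_objects(frame))  # -> []
```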

My comment: according to the article, other sensing technologies are being developed, but read the articles below before thinking this is an easy technical problem to solve soon.  Just be glad that some of this technology is making conventional cars safer to drive.

Mervis, J. December 15, 2017. Not so fast: We can’t even agree on what autonomous vehicles are, much less how they will affect our lives. Science.

Human drivers aren’t as unsafe as they’re made out to be. A fatal crash now occurs only once every 3.3 million hours of vehicle travel, and it will be hard for an automated system to beat that. The public will also be much less accepting of crashes caused by software glitches or malfunctioning hardware than of those caused by human error. “Society now tolerates a significant amount of human error on our roads,” Gill Pratt, who heads the Toyota Research Institute, told a congressional panel earlier this year. “We are, after all, only human.”

While developers amass data on the sensors and algorithms that allow cars to drive themselves, research on the social, economic, and environmental effects of Automated Vehicles (AVs) is sparse. Truly autonomous driving is still decades away according to most transportation experts.

In the dystopian view, driverless cars add to many of the world’s woes. Freed from driving, people rely more heavily on cars—increasing congestion, energy consumption, and pollution. A more productive commute induces people to move farther from their jobs, exacerbating urban sprawl. At the same time, unexpected software glitches lead to repeated recalls, triggering massive travel disruptions. Wealthier consumers buy their own AVs, eschewing fleet vehicles that come with annoying fellow commuters, dirty back seats, and logistical hassles. A new metric of inequality emerges as the world is divided into AV haves and have-nots.

Companies have good reason for painting the rosiest scenario for their technology, says Steven Shladover, a transportation engineer at the California Partners for Advanced Transportation Technology program in Richmond. “Nobody wants to appear to be lagging behind the technology of a competitor because it could hurt sales, their ability to recruit top talent, or even affect their stock price,” he says.

As a result, it’s easy for the public to overestimate the capabilities of existing technology. In a fatal crash involving a Tesla Model S and a semitrailer in May 2016, the driver was using what Tesla describes as the car’s “autopilot” features—essentially an advanced cruise control system that can adjust the car’s speed to sync with other vehicles and keep the car within its lane. That fits the definition of a level-two vehicle, which means the driver is still in charge. But he wasn’t able to react in time when the car failed to detect the semi.

Shladover believes AV companies need to be much clearer about the “operational design” of their vehicles—in other words, the specific set of conditions under which the cars can function without a driver’s assistance. “But most of the time they won’t say, or they don’t even know themselves,” he says.

But progress will likely be anything but steady. Level three, for example, signifies that the car can drive itself under some conditions and will notify drivers when a potential problem arises in enough time, say 15 seconds, to allow the human to regain control. But many engineers believe that such a smooth hand-off is all but impossible because of myriad real-life scenarios, and because humans aren’t very good at refocusing quickly once their minds are elsewhere. So many companies say they plan to skip level three and go directly to level four—vehicles that operate without any human intervention.

Even a level-four car, however, will operate autonomously only under certain conditions, say in good weather during the day, or on a road with controlled access.

Rural communities might need government subsidies to give residents of a sparsely populated area the same access to AVs that their urban neighbors enjoy. And advocates for mass transit, bicycling, and carpooling are likely to demand that AV fleets enhance, rather than compete against, these sustainable forms of transportation.

Pavlus, John. July 18, 2016. What NASA Could Teach Tesla about Autopilot’s Limits. Scientific American.

Decades of research have warned about the human attention span in automated cockpits

After a Tesla Model S in Autopilot mode crashed into a truck and killed its driver, the safety of self-driving cars came into question: the Autopilot system didn’t see the truck coming, the driver didn’t notice it either, and so neither applied the brakes.

Who knows the dangers better than NASA, which has studied cockpit automation for decades (in airplanes, the space shuttle, and cars)? NASA describes how connected a person is to a decision-making process in terms of “the loop.” When you drive a car yourself you are “in the loop,” running the full Observe, Orient, Decide, Act (OODA) cycle. If the car is on autopilot but you can still intervene, to brake for instance, you are only “ON the loop.”

Airplanes already fly largely on automation, with pilots observing. But an airplane is very different from a car: if something goes wrong, a pilot cruising eight miles up has many minutes to react.

But in a car, you have just ONE SECOND. That requires a faster reaction time than a test pilot’s, with almost no margin for error. It means you might as well be driving manually, since you still have to pay full attention while the car is on autopilot rather than sitting in the back seat reading a book.
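
To make that one-second margin concrete, here is a back-of-the-envelope calculation (my own illustrative numbers, not figures from the article) of how far a car travels while a driver re-engages:

```python
def distance_travelled(speed_mph: float, seconds: float) -> float:
    """Meters covered at a given speed during a driver's takeover time."""
    return speed_mph * 0.44704 * seconds

# With the ~1 second margin described above, a car at 65 mph covers about 29 m.
print(round(distance_travelled(65, 1.0)))   # 29
# If a distracted driver actually needs ~10 seconds to refocus (see the
# level-3 discussion below), the car travels roughly 290 m before control
# is regained.
print(round(distance_travelled(65, 10.0)))  # 291
```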

Tesla tries to get around this by having Autopilot check that the driver’s hands are on the wheel, triggering visual and audible alerts if they are not.

But NASA has found this doesn’t work, because the better the autopilot is, the less attention the driver pays to what’s going on. Monitoring a process that performs well for long stretches is tiring and boring; the effect was named the “vigilance decrement” as far back as 1948, when experiments showed that vigilance drops off after just 15 minutes.

So the better the system, the more likely we are to stop paying attention. Yet no one would want to buy a self-driving car they might as well be driving themselves. The whole point is that the dangerous things we already do now, like changing the radio, eating, and talking on the phone, would be less dangerous in autopilot mode.

These findings expose a contradiction in systems like Tesla’s Autopilot. The better they work, the more they may encourage us to zone out—but in order to ensure their safe operation they require continuous attention. Even if Joshua Brown was not watching Harry Potter behind the wheel, his own psychology may still have conspired against him.

Tesla’s plan assumes that automation advances will eventually get around this problem.

Transportation experts have defined six levels of automation, summarized in the three lists below (and condensed into a code sketch after them).

What the car does at each of the 6 levels:

  • 0: nothing
  • 1: accelerates, brakes, OR steers
  • 2: accelerates, brakes, AND steers
  • 3: assumes full control within narrow parameters, such as when driving on the freeway, but not during merges or exits
  • 4: Everything, only under certain conditions (e.g. specific locations, speed, weather, time of day)
  • 5: everything: goes everywhere, any time, and under all conditions

What the driver does:

  • 0: Everything
  • 1: Everything but with some assistance
  • 2: remains in control, monitors and reacts to conditions
  • 3: must be capable of regaining control within 10-15 seconds
  • 4: Nothing under certain conditions, but everything at other times
  • 5: Nothing, and unable to assume control

Our take on the prospects:

  • 0: older fleet
  • 1: present fleet
  • 2: Now in testing
  • 3: might never be developed
  • 4: where the industry wants to be
  • 5: Never

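One way to keep the three lists straight is to condense them into a single data structure. The sketch below simply restates the lists above in code; the comments and the “prospects” mapping are this post’s take, not an official SAE definition.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    L0 = 0  # car: nothing;                        driver: everything
    L1 = 1  # car: accelerates, brakes, OR steers; driver: everything, with some assistance
    L2 = 2  # car: accelerates, brakes, AND steers; driver: stays in control, monitors conditions
    L3 = 3  # car: full control in narrow cases;   driver: must retake control within 10-15 s
    L4 = 4  # car: everything, certain conditions; driver: nothing under those conditions
    L5 = 5  # car: everything, everywhere, always; driver: nothing, cannot take control

# The post's take on prospects, keyed by level (not an official SAE field).
PROSPECTS = {
    AutomationLevel.L0: "older fleet",
    AutomationLevel.L1: "present fleet",
    AutomationLevel.L2: "now in testing",
    AutomationLevel.L3: "might never be developed",
    AutomationLevel.L4: "where the industry wants to be",
    AutomationLevel.L5: "never",
}
```
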
Computers do not deal well with anything unexpected, with sudden and unforeseen events. Self-driving cars can obey the rules of the road, but they cannot anticipate how other drivers will behave. Without super-accurate GPS, automation relies on seeing the painted lines on the pavement to stay in its lane, but snow, rain, and fog can obscure them. Self-driving cars also rely on special, highly detailed maps of the locations of intersections, on-ramps, stop signs and so on; very few roads are mapped to this degree, or kept updated with construction, detours, conversions to roundabouts, new stop lights, and the like. The cars don’t detect potholes, puddles, or oil spots well and can be confounded by the shadows of overpasses. And if a collision is unavoidable, does the car run over the child, or swerve into a light pole and potentially kill the driver? (Boudette 2016)
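
As a toy illustration of why missing lane markings or a stale map force the car to hand control back to the human, consider a check like the following (the function name, thresholds, and conditions are my own assumptions, not any vendor’s actual logic):

```python
def can_drive_autonomously(lane_lines_visible: bool,
                           map_age_days: int,
                           weather: str) -> bool:
    """Return True only when the car has what today's systems depend on."""
    if not lane_lines_visible:     # paint worn away, or hidden by snow/rain/fog
        return False
    if map_age_days > 30:          # construction, detours, new roundabouts, new lights
        return False
    if weather in {"snow", "heavy rain", "fog"}:
        return False
    return True

print(can_drive_autonomously(True, 5, "clear"))    # True
print(can_drive_autonomously(False, 5, "clear"))   # False: no visible lane lines
print(can_drive_autonomously(True, 90, "clear"))   # False: the detailed map is likely stale
```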

John Markoff. January 17, 2016. For Now, Self-Driving Cars Still Need Humans. New York Times.

Self-driving cars will require human supervision. On many occasions, the cars will tell their human drivers, “Here, you take the wheel,” when they encounter complex driving situations or emergencies.  In the automotive industry, this is referred to as the hand-off problem, and automotive engineers say there is no easy solution to make a driver who may be distracted by texting, reading email or watching a movie perk up and retake control of the car in the fraction of a second that is required in an emergency. The danger is that by inducing human drivers to pay even less attention to driving, the safety technology may be creating new hazards. The ability to know if the driver is ready, and if you’re giving them enough notice to hand off, is a really tricky question.

The Tesla performed well in freeway driving, but on city streets and country roads, Autopilot’s performance could be described as hair-raising. The car, which uses only a camera to track the roadway by identifying lane markers, did not follow curves smoothly or slow down when approaching turns. On a 220-mile drive from Palo Alto, Calif., to Lake Tahoe, Dr. Sebastian Thrun said he had to intervene more than a dozen times.

Like the Tesla, the new autonomous Nissan models will require human oversight, and even the most advanced models cannot drive themselves in snow, heavy rain, or some nighttime conditions.

You could propose various fixes, but none of them get around the one second the driver has to react. That is not fixable.

Massachusetts Institute of Technology, CSAIL. 2018. Self-driving cars for country roads: Most autonomous vehicles require intricate hand-labeled maps, but new system enables navigation with just GPS and sensors. ScienceDaily.

Uber’s recent self-driving car fatality underscores the fact that the technology is still not ready for widespread adoption. One reason is that there aren’t many places where self-driving cars can actually drive. Companies like Google only test their fleets in major cities where they’ve spent countless hours meticulously labeling the exact 3D positions of lanes, curbs, off-ramps and stop signs.

Indeed, if you live along the millions of miles of U.S. roads that are unpaved, unlit or unreliably marked, you’re out of luck. Such streets are often much more complicated to map, and get a lot less traffic, so companies are unlikely to develop 3D maps for them anytime soon. From California’s Mojave Desert to Vermont’s White Mountains, there are huge swaths of America that self-driving cars simply aren’t ready for.

Additional references

Boudette, N. June 4, 2016. 5 Things That Give Self-Driving Cars Headaches. New York Times.

Snibbe K (2018) New Road Striping in California Meant to Help Self-Driving Vehicles. The Orange County Register. https://www.govtech.com/fs/new-road-striping-in-california-meant-to-help-self-driving-vehicles.html

A really great short story about self-driving cars: T. C. Boyle. 2019. Asleep at the Wheel. The New Yorker.
