
Do Autonomous Vehicles Make Roads Safer?

August 23, 2021

According to the Department of Transportation, 94% of car accidents are due to human error. With this statistic in mind, it’s not hard to believe autonomous vehicle manufacturers’ declarations of safety. Tesla claims it makes “the safest cars in the world.” Waymo’s safety report states the company is “building a safer driver for everyone.” However, automated vehicles are in accidents too. The National Highway Traffic Safety Administration (NHTSA) is currently investigating more than 20 crashes involving Teslas. If automated cars are supposed to make our roads safer, how did these accidents happen?

Today’s Semi-Autonomous Vehicles

Widely available autonomous vehicles aren’t really self-driving, despite what their manufacturers might say. The Society of Automotive Engineers (SAE) defines driving automation using a six-level standard, ranging from no automation (Level 0) to full automation (Level 5). Many widely available features sit near the bottom of that scale, such as lane keeping (Level 1) and adaptive cruise control (Level 2). Even systems touted as self-driving, including Tesla’s Autopilot and Full Self-Driving modes, are currently only at Level 2 (partial automation) on the SAE scale. Right now, highways are experiencing a combination of human and machine control, which in some situations could be described as augmented intelligence.

Autonomous Vehicle Accidents

We know that augmented intelligence works best when humans handle the tasks that require judgment and common sense while AI takes care of more straightforward, well-defined tasks. This helps explain why technology like Tesla’s Autopilot is more useful on highways than on city streets. It also begins to explain why semi-autonomous cars are getting into accidents.

Most new cars have at least some automated features, which range from self-parking abilities and blind-spot monitoring to more robust adaptive cruise control. These features by no means make a car self-driving, but drivers may not understand the difference between today’s technology and what’s still science fiction. Manufacturers’ nomenclature and marketing don’t help. As Jason Levine, executive director of the Center for Auto Safety, told The New York Times regarding Tesla specifically, “Autopilot suggests the car can drive itself and, more importantly, stop itself. And they doubled down with Full Self-Driving, and again that leads consumers to believe the vehicle is capable of doing things it is not capable of doing.” 

Because human drivers often don’t understand the limitations of driver assistance and partial automation, they tend toward “automation complacency.” This tendency to over-rely on AI and automation could explain the number of accidents involving systems like Tesla’s Autopilot. It could also damage the reputation of fully autonomous vehicles and slow their widespread adoption once such technologies are commercially available.

Humans are already more likely to blame an automated system than a human driver for an accident involving an autonomous vehicle, which further complicates the situation. This phenomenon, known as blame attribution asymmetry, explains why the public blames these types of accidents on the automated system even when human error ultimately caused the crash. Considered together, automation complacency and this attribution bias may help explain why automated cars seem so unsafe.