Are self-driving cars safe? Highway regulator orders industry to cough up the data

After years of inaction, the federal government will begin collecting crash data on automated vehicles.

The National Highway Traffic Safety Administration on Tuesday ordered dozens of car, truck and technology companies to inform the agency of a serious crash within a day of learning about it, with a more complete report due 10 days later.

The order will enable NHTSA to “collect information necessary for the agency to play its role in keeping Americans safe on the roadways, even as the technology deployed on the nation’s roads continues to evolve,” the agency said.

The order applies to highly automated vehicles, including robotic cars that don’t require a human driver, as well as partially automated systems such as Tesla’s Autopilot and General Motors’ Super Cruise, which combine advanced cruise control and automatic steering.

It immediately affects the partially automated so-called Level 2 systems increasingly common on new vehicles from most major manufacturers. The number of fully robotic cars and trucks now deployed on public roads is tiny, but the market is expected to grow dramatically in coming years.

Manufacturers tout the safety and convenience of automated vehicles, but little useful data have been collected to demonstrate how safe they are.

“This is very important, it’s fantastic, and it’s about time,” said Alain Kornhauser, who heads the automated vehicle engineering program at Princeton University. “Safety should not be a competition. It’s a cooperation.”

Tesla Chief Executive Elon Musk has repeatedly claimed vehicles driven on Autopilot are safer than human drivers, but he keeps the company’s raw data private and his statistical analysis has been challenged by experts in the field. Tesla vehicles have been involved in deadly and spectacular crashes with Autopilot engaged.

NHTSA’s current data collection systems and methods are decades old and focus on the number of fatal crashes per year. “By mandating [automated vehicle] crash reporting, the agency will have access to critical data that will help quickly identify safety issues that could emerge in these automated systems … gathering data will help instill public confidence that the federal government is closely overseeing the safety of automated vehicles,” said Steven Cliff, the agency’s acting administrator.

A crash must be reported if an automated driving system was operating within 30 seconds of the impact and the crash sent someone to a hospital, required a vehicle to be towed away, deployed an air bag, or involved a pedestrian, bicyclist or other vulnerable road user.
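
Read narrowly, the trigger combines one engagement condition with any of several severity conditions. Here is a minimal sketch of that logic; the field and function names are hypothetical illustrations, not drawn from the order itself or any company’s reporting system:

```python
# Illustrative sketch only: hypothetical names, not NHTSA's rule text.
from dataclasses import dataclass

@dataclass
class Crash:
    automation_active_within_30s: bool  # ADS or Level 2 system engaged within 30 seconds of impact
    hospital_treated_injury: bool       # someone was taken to a hospital
    vehicle_towed: bool                 # a vehicle had to be towed away
    airbag_deployed: bool               # any air bag deployed
    vulnerable_road_user: bool          # pedestrian, bicyclist or similar involved

def must_report(crash: Crash) -> bool:
    """One engagement condition plus any one severity condition triggers a report."""
    if not crash.automation_active_within_30s:
        return False
    return any([
        crash.hospital_treated_injury,
        crash.vehicle_towed,
        crash.airbag_deployed,
        crash.vulnerable_road_user,
    ])
```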

The collected data will help everyone from lawyers to regulators to technology companies to the general public understand safety issues as automated vehicles increasingly appear on public roads.

“You can’t start solving a problem until you understand it,” said Bryant Walker Smith, an expert on automated vehicle law at the University of South Carolina. “It’s very encouraging that NHTSA has not waited for the next big crash but is putting in a structure to understand at a much more continuous level what’s going on.”

Indeed, the order itself says that “identification of safety defects does not and should not wait for injuries or deaths to occur.” It lists sensors and software algorithms among the technologies that require some level of oversight.

Smith complimented NHTSA for what he sees as a “generally well written” order. But lawyers undoubtedly will be studying it for loopholes. California, for example, requires companies testing automated vehicles on public roads to report accidents and other issues to the Department of Motor Vehicles. But Tesla, which uses customers to test automated technologies, does not comply. “What California teaches us is that companies know how to game the rules,” Smith said.

Robot cars, for all their potential benefits, present new risks as they are deployed, the order notes, and systems such as Autopilot “present safety risks” not only to vehicle occupants but other roadway users. In part, the order says, that’s “due to the unconventional division of responsibility between the vehicle and the human driver.”

YouTube is populated with videos of Tesla drivers misusing the Autopilot system, and Musk himself has shown off how the car can be driven with no hands, even though the company warns drivers to keep their hands on the wheel, a warning that leaves a hands-off driver liable in a crash.

The order points out that courts have determined that a manufacturer might be held liable for “foreseeable risk” that NHTSA could deem a safety defect.

But how automakers and tech companies will know about crashes involving automated systems is an open question, according to Smith. Companies such as Waymo and Argo AI are in constant digital communications with their robot cars and will automatically know when a crash happens. Tesla monitors its partially automated vehicles with data-storage chips and over-the-air software.

Manufacturers with partially automated systems but no monitoring capabilities can’t know immediately that a crash has occurred, Smith said. The NHTSA order does note that knowledge of a crash includes “information you have received from any internal or external source and in any form,” including, presumably, news stories.

The order states that the reported crash data will be available to the public, although companies can make a confidentiality claim for certain details, such as which version of their automated system was in use.

“Nobody should push back on this,” Princeton’s Kornhauser said. “We don’t know what we don’t know, we don’t know what works and doesn’t work, and this allows us to begin to know that.”
