
 

Pump the Brakes: Driverless Cars and Assignment of Fault

By Bojan Manojlovic – August 10, 2016


New automobile technology has become a great assistant. It gives us an extra set of eyes as we back out of the driveway, keeps us in our lanes if we happen to drift, and warns us of other cars in our blind spots. In an increasingly automated world, it is no surprise that the prospect of completely driverless cars is eagerly awaited. Instead of mindlessly sitting in traffic, we could be reading the newspaper or preparing for an 8 a.m. meeting; parents dream of not having to rush out of work to pick up the kids from karate practice; and the elderly envision the freedom to go to the grocery store without having to depend on others. But for all the benefits of driverless cars, there are still risks, as in the recent case in which a Tesla operating in semi-autonomous mode allegedly failed to brake, resulting in the driver's death.

 

Driverless cars use supercomputers to function. For instance, GPS is built into driverless cars and allows the car to determine the route without manual assistance from the driver. Because driverless cars will need to operate in a dynamic world, a second system of radars, cameras, lasers, and sensors supplements GPS data with real-time information, such as the presence of other objects or road conditions. A third system translates the GPS and sensor data into actions such as steering, turning, accelerating, and braking, while making thousands of other decisions per minute. John Patrick Pullen, “You Asked: How Do Driverless Cars Work?,” Time, Feb. 24, 2015. The supercomputer is, by design, meant to be perfect. It will obey all rules, adapt to stimuli quickly, and make only rational decisions. Sounds perfect, right?
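
To make the division of labor concrete, here is a minimal sketch in Python of the three systems the Time piece describes: a routing layer, a sensing layer, and a decision layer that turns their outputs into driving commands. Every class and function name here is hypothetical; no manufacturer's actual software is organized this way, and a real system would fuse live sensor feeds rather than the single canned reading used below.

# Illustrative sketch only; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float      # gap to the object ahead, in meters
    closing_speed: float   # how fast that gap is shrinking, in m/s

class RoutePlanner:
    """System 1: GPS-based routing chosen without driver input."""
    def next_waypoint(self) -> str:
        return "continue straight for 500 m, then bear right"

class Perception:
    """System 2: radar, camera, laser, and other sensors supplying real-time data."""
    def detect_obstacles(self) -> list:
        return [Obstacle(distance_m=40.0, closing_speed=15.0)]  # one canned reading

class Controller:
    """System 3: translates route and sensor data into driving actions."""
    SAFE_STOPPING_TIME_S = 3.0

    def decide(self, planner: RoutePlanner, perception: Perception) -> str:
        for obstacle in perception.detect_obstacles():
            time_to_impact = obstacle.distance_m / max(obstacle.closing_speed, 0.01)
            if time_to_impact < self.SAFE_STOPPING_TIME_S:
                return "brake"
        return "follow route: " + planner.next_waypoint()

if __name__ == "__main__":
    print(Controller().decide(RoutePlanner(), Perception()))   # prints "brake"

Even in this toy version, it is the decision layer, not the sensors, where a choice like "brake or keep going" gets made, and that is exactly where the hypothetical below puts the pressure.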

 

Consider the following hypothetical, however. Imagine that a driverless school bus, equipped with a perfectly rational, rule-abiding supercomputer, has just picked up 27 kindergartners in southern California. The bus stops at every light and stop sign, turns perfectly, and never speeds. It is cruising along Pacific Coast Highway, with a cliff face on its right and nothing but a sheer drop into the sea on its left. Up ahead, a group of five pedestrians carelessly tries to run across the highway to reach the beach more quickly. Traveling too fast and too close to brake in time, the bus faces a choice: run over the five pedestrians or veer off the edge into the sea with 27 kids onboard. The bus makes the perfectly logical, utilitarian choice to run over the pedestrians, killing all of them but sparing every child onboard. Did the supercomputer make the correct decision?
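
For what it is worth, the "perfectly logical, utilitarian" rule the bus applies can be written down in a few lines. The toy sketch below is hypothetical, and emphatically not how any real vehicle is programmed; it simply picks whichever option is expected to cost fewer lives, and the rest of this article asks whether the law is ready for a machine that reasons this way.

# Toy illustration of a casualty-minimizing rule; hypothetical only.
def utilitarian_choice(options: dict) -> str:
    """Return the action with the lowest expected casualty count."""
    return min(options, key=options.get)

outcomes = {
    "stay on the road (strike the five pedestrians)": 5,
    "veer off the edge (lose all 27 children onboard)": 27,
}

print(utilitarian_choice(outcomes))
# -> stay on the road (strike the five pedestrians)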

 

A recently released study revealed that most people faced with a situation similar to the school bus quandary would want the car to save the children at the expense of the pedestrians. (That is, of course, unless they were the pedestrians.) Peter Dizikes, “Driverless Cars: Who Gets Protected?,” MIT News, June 23, 2016.

 

The pedestrian dilemma prompts serious questions: How do we assign blame when there is no perfect decision? Is this just “an accident,” or will someone be found at fault? Are the bus and its supercomputer defectively designed because they chose the children over the pedestrians? To whom did the supercomputer owe a duty of care? Does the choice the supercomputer made render it unreasonably dangerous technology?

 

Before we get too excited about the prospect of napping on our way home from work, we need to ask whether our current legal framework is ready for the challenges and pressures of a new technology that does not fit neatly into the existing mold. Some laws may be easy enough to change; licensing car operators instead of car drivers, for example, is an easy fix. Products liability law, including claims for negligent and defective design, may be more difficult to adapt because it is rooted in common-law notions of tort, which require a duty, a breach, causation, and damages. While some of these elements may be easier to establish, namely causation and damages, the existence of a duty and a consequent breach may be less than clear. Does a duty to the pedestrians run from the car manufacturer, the producer of the GPS system, or the makers of the radars, sensors, cameras, and other components that help the car make its decisions? Such potential liability could stifle financing, research, and development of this new, and potentially much safer, technology. And yet, if the car could have protected both the pedestrians and the passengers but did not, is that simply a plain old negligence claim based on breach of a duty?

 

While driverless car technology may be right around the corner, it is still too early to tell how the law will adapt to deal with the unique situations it presents.

 

Keywords: litigation, products liability, driverless car, supercomputer, tort law, liability

 

Bojan Manojlovic is a summer associate with Sidley Austin LLP’s Chicago, Illinois, office.


 
Copyright © 2017, American Bar Association. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or downloaded or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. The views expressed in this article are those of the author(s) and do not necessarily reflect the positions or policies of the American Bar Association, the Section of Litigation, this committee, or the employer(s) of the author(s).