
Who’s to blame?

A case study of an accident involving a car in self-driving mode.

Content Warning: At the beginning of this article, there is a short factual description of a fatal accident involving an autonomous vehicle. The remainder of the article explores contributory factors in the accident. There are no pictures, and the discussion is kept factual throughout. If you are likely to find this distressing, you might want to move on to the next step.

Accidents can result in serious harm to the environment, or death or serious injury to humans. It is a natural part of our response to such incidents to want to find someone – or something – to blame. As the following real-life case study demonstrates, in accidents involving self-driving cars it is sometimes hard to say who is to blame.

At 22:00 on 18 March 2018, Elaine Herzberg was pushing her bicycle across a four-lane road in Tempe, in the US state of Arizona. She was hit by an Uber test vehicle travelling at 43 mph. The vehicle was operating in self-driving mode, and Herzberg died from her injuries shortly afterwards.

Human error?

After the accident, a lot of attention fell on the humans involved. The local police noted that Herzberg had chosen not to use a pedestrian crossing, but had “come out of the shadows right into the roadway”. Investigators later found that she had controlled drugs in her system, though it was unclear whether these would have impaired her judgement. The accident report concluded that, regardless of her actions, the vehicle had a legal and moral duty to avoid her and any other hazards in the roadway.

The vehicle had been in self-driving mode for around 20 minutes before the accident. However, this does not mean that the car was empty: by law, a human “safety driver” had to be in the driving seat. Their role was to monitor the vehicle and the road, and to be prepared to take over control if required. Camera footage revealed that the safety driver was looking down for almost 6 seconds before the crash; she looked up about half a second before impact, seized the wheel and swerved slightly, but did not have time to brake. Over the 20 minutes the car spent in self-driving mode before the accident, the camera captured her repeatedly looking down, for a total of almost 7 minutes. Mobile phone data showed that she had been streaming a TV programme.

Although some companies required two engineers to be in the car’s cabin during testing, Uber mandated only one safety driver and required no specific training: the only qualification needed was a driving licence. The safety driver was also not expected to monitor feedback messages from the computer.

Technological error?

Although the Uber incident raises important issues about the responsibilities of pedestrians and drivers, human error alone does not explain it. The investigation also highlighted problems with the technology used in the vehicle. Herzberg was wearing dark clothing and pushing a bicycle. The car detected her as an object in the road six seconds before the collision, but was unable to classify the object: the classification wavered between a human and a bicycle, and the system indicated a need to brake only 1.3 seconds before the crash. At 43 mph, the safe braking time would have been around 1.8 seconds. The investigation found that the software “did not include consideration for jaywalking pedestrians”.

Even though the need for emergency braking was flagged, the system did not brake at all. Although the SUV had an automated braking system built in, it had proved erratic in previous tests, and Uber had disabled it while the vehicle was in self-driving mode. The self-driving system was not designed to alert the safety driver, and it was not equipped to make an emergency stop on its own.
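
To put these times in context, here is a rough, illustrative calculation in Python. The 7 m/s² emergency deceleration and the assumption of constant speed on a dry, level road are not figures from the investigation; they are typical values chosen purely for illustration. Even so, the arithmetic makes the margin clear: at 43 mph the car still had over a hundred metres in hand when it first detected an object, but only around 25 metres, roughly a full stopping distance, by the time it indicated a need to brake.

```python
# Rough, illustrative figures only (not taken from the accident report).
# Assumes constant speed until braking and a typical emergency
# deceleration of 7 m/s^2 on a dry, level road.

MPH_TO_MPS = 0.44704                     # metres per second per mph

speed = 43.0 * MPH_TO_MPS                # ~19.2 m/s at 43 mph
deceleration = 7.0                       # m/s^2 (assumed)

# Distance needed to brake to a standstill: v^2 / (2a)
stopping_distance = speed ** 2 / (2 * deceleration)   # ~26 m

# Distance remaining when the object was first detected (6 s before
# impact) and when braking was finally indicated (1.3 s before impact).
distance_at_detection = speed * 6.0      # ~115 m
distance_at_decision = speed * 1.3       # ~25 m

print(f"Speed:                       {speed:.1f} m/s")
print(f"Stopping distance (assumed): {stopping_distance:.1f} m")
print(f"Distance left at detection:  {distance_at_detection:.1f} m")
print(f"Distance left at decision:   {distance_at_decision:.1f} m")
```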

Environment

The investigation also raised concerns about the design of the road. The street lighting made it difficult for the vehicle, or a human driver, to detect a pedestrian moving at the side of the road. Although Herzberg did not cross at a designated crossing point, there is a brick-paved path between the carriageways at the point of the accident. The inquiry heard that people often crossed the road at this point, despite signs warning them not to do so.

Governance

Regulations governing the use of autonomous driving technologies on public roads in Arizona were more liberal than in other states. Oversight from the authorities was minimal: no permit was required, and there were no limits on when testing could take place. In December 2016, the State Governor made a public announcement celebrating having persuaded Uber to move its test operations from California: “Arizona welcomes Uber self-driving cars with open arms and wide open roads. While California puts the brakes on innovation and change with more bureaucracy and more regulation, Arizona is paving the way for new technology.” In fact, Uber had been testing its self-driving vehicles in Arizona for several months before this announcement, without the public being aware.

© University of York
This article is from the free online course Intelligent Systems: An Introduction to Deep Learning and Autonomous Systems.
