Last weekend, it was reported that an Uber self-driving vehicle had struck and killed someone. Police in Arizona confirmed that a woman identified as Elaine Herzberg was crossing a street with her bicycle when a self-driving Uber car crashed into her. Police later stated that the car was traveling at 40 miles per hour. Uber responded swiftly by suspending its self-driving vehicle program until further notice. Although the vehicle was in autonomous mode when the crash happened, it reportedly had a human safety driver behind the wheel to monitor the system and retake control in case of an emergency. The crash has prompted a range of responses and will most likely affect the self-driving vehicle sector, which over the years has seen billions spent on research and development for a technology that promises more efficient modes of transport.


2. “Safety” and “Progress”: A Bad Will Smith Dystopian Movie

The incident has raised many pressing questions. How safe is it to test driverless vehicles on public roads? Crashes are unpredictable, and unless the human behind the wheel is drunk, updating their Snapchat, or simply exhausted, you can’t really predict how and when a crash may occur. About 40,000 people died on American roads in 2017, and about 6,000 of them were pedestrians. According to statistics, human drivers kill about 1.16 people for every 100 million miles driven. Self-driving vehicles are nowhere near that sort of mileage, but they’ve already killed one person.
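To put those figures side by side, here is a rough back-of-the-envelope comparison in Python. It uses only the numbers quoted above, plus an assumed, purely illustrative total for self-driving test miles; the self-driving mileage is a placeholder, not a figure from any report.

```python
# Back-of-the-envelope comparison using the figures quoted above.

HUMAN_DEATHS_2017 = 40_000          # approximate US road deaths in 2017 (from the article)
HUMAN_RATE_PER_100M_MILES = 1.16    # deaths per 100 million miles driven (from the article)

# Total human-driven miles implied by those two figures.
implied_human_miles = HUMAN_DEATHS_2017 / HUMAN_RATE_PER_100M_MILES * 100_000_000
print(f"Implied human-driven miles per year: {implied_human_miles:,.0f}")  # roughly 3.4 trillion

# Self-driving side: the total test mileage here is an ASSUMED placeholder,
# not a figure from the article.
AV_TEST_MILES_ASSUMED = 10_000_000
AV_DEATHS = 1
av_rate_per_100m = AV_DEATHS / AV_TEST_MILES_ASSUMED * 100_000_000
print(f"Self-driving deaths per 100M miles (assumed mileage): {av_rate_per_100m:.2f}")
```

The point of the arithmetic isn’t the exact numbers; it’s that self-driving fleets have logged so few miles that a single death makes any per-mile comparison look dramatic.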

So what now? The words “safety” and “progress” will be thrown around quite a lot. Developers insist that their employees are thoroughly trained to take over from the self-driving software running in the vehicle whenever the situation demands it. However, researchers continue to question how imperfect that kind of handover can be when dealing with such complex software. The Arizona incident may have just exposed the flaw in that system: a small glimpse of what can go wrong as we step further into an AI-influenced future.

1. The “Trolley Problem”


According to Ryan Calo, a researcher at the University of Washington who studies the implications of self-driving vehicles, the ethical questions around these cars can be linked to the “trolley problem”, which looks at situations where a vehicle must choose between two potential victims in an unavoidable accident. The “trolley problem” is a thought experiment in the field of ethics. Picture a runaway trolley barreling down the railway tracks. Ahead, on the tracks, 5 people are tied up and unable to move, and the trolley is heading straight for them. You stand some distance away, next to a lever. If you pull this lever, the trolley switches to a different set of tracks. However, you see that 1 other person is tied up on that track, and you are presented with a dilemma:

1. Do nothing, and the trolley kills 5 people on the main track.
2. Pull the lever, diverting the trolley to the other track, where it kills that 1 person.
Which is the more ethical choice? The toy sketch below shows just how blunt that trade-off looks once you try to write it down in code.
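Here is that sketch in Python. It is purely illustrative, not how any real vehicle’s planner works; the names and numbers are hypothetical, and it simply encodes the naive utilitarian rule of “fewest expected casualties wins”.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    action: str
    expected_casualties: int

def utilitarian_choice(outcomes: List[Outcome]) -> Outcome:
    """Pick whichever action is expected to kill the fewest people."""
    return min(outcomes, key=lambda o: o.expected_casualties)

# The two options from the trolley problem above.
options = [
    Outcome(action="do nothing", expected_casualties=5),
    Outcome(action="pull the lever", expected_casualties=1),
]

best = utilitarian_choice(options)
print(f"Naive utilitarian answer: {best.action} ({best.expected_casualties} casualty)")
```

Reducing the dilemma to a `min()` over casualty counts is exactly what makes people uneasy: the hard part isn’t the code, it’s deciding whether counting is the right moral rule at all.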

Calo says the accident that killed the woman “may have just been prompted by the sensors not being able to pick her up”, or that the algorithm “didn’t understand what it’s seeing”. Calo also adds that those creating AI-based vehicles should think seriously about the potential dangers to human lives. Subbarao Kambhampati, a professor at Arizona State University who specializes in AI, says the incident raises serious questions about the ability of safety drivers to genuinely monitor these systems, especially after long hours of testing. Other researchers are looking at the challenge of creating effective communication between self-driving vehicles and pedestrians.
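One way to picture the failure mode Calo describes is as a confidence threshold: the perception stack may register something on the road yet never classify it confidently enough to trigger a reaction. The sketch below is a hypothetical Python illustration of that idea, not a description of Uber’s actual system; every label, number, and function name is assumed.

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed minimum confidence before the planner reacts

def should_brake(detections):
    """Brake only if something is confidently classified as a person or cyclist."""
    for d in detections:
        if d["label"] in ("pedestrian", "cyclist") and d["confidence"] >= CONFIDENCE_THRESHOLD:
            return True
    return False

# The sensors register *something* in every frame, but the classifier never
# becomes confident enough about what it is, so the car never reacts.
frames = [
    [{"label": "unknown", "confidence": 0.55}],
    [{"label": "vehicle", "confidence": 0.40}],
    [{"label": "cyclist", "confidence": 0.60}],  # still below the threshold
]

for i, frame in enumerate(frames):
    print(f"frame {i}: brake={should_brake(frame)}")
```

In this toy version, “seeing” and “understanding” are two different gates, and a pedestrian who never clears the second one may as well be invisible to the planner.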