With most vehicle manufacturers expecting autonomous cars to be on the market by 2019 ― and more than 10 million on roads by 2020 ― one would expect people to start considering the various issues driverless cars create. For example, there are countless legal considerations to make, especially because different states or countries might enact different regulations on autonomous vehicles, potentially preventing their owners from traveling from one place to another.
Yet most people find these practical concerns incredibly dull. A much more exciting topic with regard to driverless cars is, surprisingly, ethics. Thanks to articles in the Atlantic (and elsewhere), large swaths of the population have begun to question whether we can trust robot cars to make correct ethical decisions. What most people don't realize, however, is that these conversations are absolutely meaningless.
A Brief Discussion of Robots
In simple terms, robots are not people, but adding human-like traits to robots, such as voices, humanoid shapes, or facial features, causes most humans to see the machines almost as equals. A handful of organizations have formed in support of robot rights; conversely, as robots continue to displace vast numbers of jobs, other people are developing feelings of antipathy toward unconscious technology.
The Most Popular Ethical Issue with Driverless Cars
A runaway trolley is careening down the tracks, heading toward a group of pedestrians who are milling about in its path. The trolley's conductor is limited in her ability to mitigate the damage: she can allow the trolley to continue on its course, hitting the crowd and causing injury to many people, or she can flip a switch, directing the trolley down an alternate route where the tracks are blocked by a lone child, who will certainly die from the collision.
There are dozens of variations of the trolley problem. In one, the dilemma belongs not to the conductor but to a bystander, who has the opportunity to sacrifice one person for the sake of others. In another version, the trolley conductor has the opportunity to derail the trolley, harming the people inside rather than anyone outside.
We only need to tweak the typical trolley problem to apply it to driverless cars:
"A driverless car is moving at speed on a crowded highway. An obstacle appears in the car's lane, forcing it to make a decision: It can swerve right to collide with an SUV, swerve left to hit a motorcyclist, or continue straight and certainly crash into the obstacle."
In each instance, the driverless car causes harm, but it is the robot car that decides who receives the harm. Again, there are several variations, some much more similar to the original trolley problem, but each raises a significant ethical question: Can we trust a car to make the right choice?
The Real Ethical Problems Driverless Cars Face
In actuality, no machine has the power to make decisions at all. Robots function according to their programming, and the "decisions" they make are only protocols written by their programmers. Therefore, a more pressing ethical question when it comes to driverless cars is that of blame: If an autonomous vehicle malfunctions, who is responsible for the harm it causes? It could be the owner, manufacturer, or programmer.
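The point that a robot's "choice" is really a pre-written protocol can be sketched in a few lines of code. The example below is purely illustrative: the maneuver names and harm costs are invented assumptions, not any manufacturer's actual logic.

```python
# Hypothetical sketch: a driverless car's "ethical choice" reduced to a
# cost table its programmers wrote in advance. All names and cost values
# here are illustrative assumptions, not any vendor's real system.

# Programmer-assigned harm scores for each maneuver (lower = preferred).
HARM_COST = {
    "swerve_right_hit_suv": 2,
    "swerve_left_hit_motorcyclist": 3,
    "continue_hit_obstacle": 1,
}

def choose_maneuver(options):
    """Return the maneuver with the lowest pre-programmed harm cost.

    The car never 'decides' anything: it mechanically selects whichever
    option its programmers ranked cheapest when they wrote the table.
    """
    return min(options, key=options.get)

print(choose_maneuver(HARM_COST))  # prints "continue_hit_obstacle"
```

Whatever the car "chooses," the outcome was fixed the moment a human wrote the cost table, which is why blame attaches to people, not machines.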
In a traditional car collision, the drivers are investigated, unless the collision is the result of machine malfunction, like Ford's potential brake failures, in which case the manufacturer is at fault. Yet in a driverless car, a collision should only occur when it cannot be avoided ― in which case, does liability fall on the programmer for failing to foresee such a situation?
These ethical problems are more pressing because they could stall the release of driverless cars onto the market. Before we can put autonomous vehicles on the road, manufacturers, insurers, and legislators must make a decision ― and not one that has anything to do with a trolley.