The Myth Of Moral Ambiguity In Autonomous Cars

Image source: PhotoDune

With most vehicle manufacturers expecting autonomous cars to reach the market by 2019, and more than 10 million of them on the roads by 2020, one would expect people to start considering the many issues driverless cars will create. For example, there are countless legal questions to settle, especially because different states or countries might enact different regulations on autonomous vehicles, potentially preventing owners from driving from one jurisdiction to another.

Yet most people find these practical concerns incredibly dull. A much more exciting topic when it comes to driverless cars is, surprisingly, ethics. Thanks to articles in The Atlantic (and elsewhere), large swaths of the population have begun to question whether we can trust robot cars to make correct ethical decisions. What most people don't realize, however, is that these conversations are absolutely meaningless.

A Brief Discussion of Robots

Almost since their inception in Karel Čapek's 1920 Czech play R.U.R., robots have incited discussions about ethics. As science and technology inch ever closer to humanoid machines and true artificial intelligence, people are becoming more concerned about how robots in all their forms should be managed. And as we continue to push for a more egalitarian society, discussions about the status of robots are becoming more and more prevalent.

In simple terms, robots are not people, but adding human-like traits to them, such as voices, humanoid shapes, or facial features, causes most humans to see the machines almost as equals. A handful of organizations have formed in support of robot rights; conversely, as robots continue to take over vast numbers of jobs, other people are developing feelings of antipathy toward this unconscious technology.

The Most Popular Ethical Issue with Driverless Cars

However, these ethical concerns are not the ones that dominate discussions about autonomous cars. More often, non-academic writers and readers prefer to examine what is commonly called "the trolley problem." Undoubtedly, you've heard of it before:

A runaway trolley is careening down the tracks toward a group of pedestrians milling about ahead. The trolley's conductor is limited in her ability to mitigate the damage: she can allow the trolley to continue on its course, hitting the crowd and injuring many people, or she can flip a switch, diverting the trolley onto an alternate track that is blocked by a lone child, who will certainly die in the collision.

There are dozens of variations of the trolley problem. In one, the dilemma belongs not to the conductor but to a bystander, who has the opportunity to sacrifice one person for the sake of many others. In another, the conductor has the option of derailing the trolley, harming the passengers inside rather than anyone outside.

We only need to tweak the typical trolley problem to apply it to driverless cars:

"A driverless car is moving at speed on a crowded highway. An obstacle appears in the car's lane, forcing it to make a decision: It can swerve right to collide with an SUV, swerve left to hit a motorcyclist, or continue straight and certainly crash into the obstacle."

In each instance, the driverless car causes harm, but it is the robot car that decides who receives that harm. Again, there are several variations, some much closer to the original trolley problem, but each raises the same significant ethical question: Can we trust a car to make the right choice?

The Real Ethical Problems Driverless Cars Face

Most of us have driven cars for most of our lives, amassing thousands of miles on the road, and few (if any) of us have ever faced such an unbearable ethical choice. Even putting aside the unlikelihood of such a scenario, we must remember that a human driver would never actually have the opportunity to make a deliberate decision; with only a fraction of a second to respond, a human driver would react instinctively, with little regard for costs, benefits, or moral justice. If it isn't an ethical decision for a human driver, it can hardly be an ethical decision for a robot car.

In actuality, no machine has the power to make decisions at all, regardless of the time frame. Robots function according to their programming, and the "decisions" they make are simply the execution of protocols written by their developers. A more pressing ethical question for driverless cars, therefore, is that of blame: If an autonomous vehicle malfunctions, who is responsible for the harm it causes? It could be the owner, the manufacturer, or the programmer.

In a traditional car collision, the drivers come under investigation unless the crash was the result of a mechanical malfunction, like Ford's potential brake failures, in which case the manufacturer is at fault. Yet a driverless car should only collide when the collision cannot be avoided, in which case, does liability fall on the programmer for failing to foresee such a situation?

Bottom Line

These ethical problems are more pressing because they could stall the release of driverless cars onto the market. Before we can put autonomous vehicles on the road, manufacturers, insurers, and legislators must settle the question of liability, and it is not a question that has anything to do with a trolley.
