Do the pros of artificial intelligence outweigh the cons?

Would you implement an AI system that led millions of people to feel lonelier?

That resulted in massive unemployment?

That perpetuated inequality and racism?

Freedom from boredom versus unemployment

There is no doubt that artificial intelligence has many benefits. It frees people from repetitive, predictable tasks that machines can complete more easily and safely, allowing them to focus on work that depends on the human brain’s ability to handle ambiguity. However, freeing people from basic and repetitive tasks could mean massive job losses for factory workers, vehicle drivers, retail salespeople, restaurant workers, and many more. Some of those people will be able to turn to alternative careers, but a large segment could be permanently out of work.

Voice assistants like Alexa and Siri have made finding restaurants, setting alarms, streaming music, reading books, answering predictable questions, and many other routine tasks far easier and quicker – particularly for people whose hands are otherwise occupied or who have disabilities. Those saved minutes could mean more personal or family time. However, they could also lead to job losses as the need for personal assistants decreases and workers accomplish much more in their limited time. The slow death of the secretary that we’ve seen over the past few years will only intensify.

Improved medical diagnoses versus lack of trust and responsibility

AI has proven its worth in the medical arena, sometimes performing better than human doctors. Among many other examples, AI has demonstrated more accurate incision cutting, vastly quicker development of medical treatment plans, and better prediction of heart attacks and strokes. One researcher shares a story in which the error rate of human diagnosis was 3.5% compared to 7.5% for the AI, but together the error rate was brought down to 0.5%.

But ethical issues arise here as well. When should medical specialists trust their gut over technologies that are slowly starting to outperform them? When is a doctor to blame for trusting the AI over their own diagnosis, or vice versa, only to find the decision was incorrect? When do we pursue sanctions for incorrect decisions?

Increased impartiality versus perpetuation of bias

AI has the amazing potential to reduce and even eliminate human biases such as racism, sexism, and other forms of demographic and psychographic hate. Supporting our legal, educational, and social systems with AI protocols could wipe out the unconscious biases of even the most sensitive, ethical people. AI technologies have already been used to set bail, resulting in much more equitable outcomes.

At the same time, however, we now know that many of the data sets we’ve used to train our AI technologies were highly biased, racist, and sexist. Some of these supposedly bias-free tools have been used to predict recidivism and were later found to be biased against black people. In another case, when Microsoft released the Tay bot in 2016 with the intent of watching it learn how to communicate, it learned racist ideas from internet trolls. A sense of ethical behaviour had not been built into the system, leading to horrible outcomes. Garbage in, garbage out has never been more true.

Morality is a cultural construct, not truth

Despite its positive, life-changing applications, AI is not fully trusted because of these negatives. Elon Musk, known for disrupting the automotive market with Tesla, has been accused of fear-mongering, but he seems to genuinely worry that AI could go rogue and lead to the destruction of humanity. Bill Gates says we shouldn’t panic over AI, but we do need to ensure that AI technologies are implemented properly. And that means ensuring the technologies are ethical.

Unfortunately, there is a serious problem with creating ethical AI technologies. There is no single view of ethics and morality that spans all countries, cultures, religions, and people. Cultural norms, which greatly impact morality, vary widely from country to country and group to group.

What one group firmly believes to be ethical behaviour could easily be seen by others as biased. Where some groups place more value on the individual, others place higher value on families and teams. Where some groups place more value on economic success, others place higher value on personal health and well-being. Some groups believe the death penalty is morally acceptable. Some permit some, but not all, people to vote or drive or wear certain clothing. Morality is not fixed.

There is no easy way to ensure that AI technologies are applied in ethical ways. But we can consciously work to recognize potential problems during development and application. We can train ourselves to notice when our personal beliefs adversely affect an application, and when groups of people are unnecessarily harmed. We may never be able to resolve every problem, but it’s something we ought to strive for.
