Teaching a Stone to Play Chess: The Potential Multi-Faceted Dangers of Artificial Intelligence

During a recent dinner with my son, who is studying for his master's degree in computer science at University College London, we discussed comments that readers had left on my previous HuffPost blog about self-driving cars.

In that post, I discussed the choice of the right optimization function to govern the decisions the car's "brain" will have to make on its owner's behalf when faced with scenarios of imminent danger, each of which would result in a radically different outcome. Depending on how the car's "brain" had been coded, the car could optimize to save its passengers, to minimize overall casualties, or to minimize the cost to society. I also asked what would happen if, at some point, AI brains decided to change the optimization function to one whose goal is no longer to protect human lives but to foster their own power.
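To make that choice concrete, here is a deliberately toy Python sketch (the scenario fields, weights, and objective names are all invented for illustration and reflect no manufacturer's actual policy) of how the very same situation can yield different decisions depending solely on which objective function is plugged in:

```python
# Toy model: ranking crash scenarios under different optimization functions.
# Everything here is hypothetical and for illustration only.

from dataclasses import dataclass

@dataclass
class Scenario:
    passenger_casualties: int
    pedestrian_casualties: int
    property_damage_cost: float  # arbitrary currency units

def protect_passengers(s: Scenario) -> float:
    # Lower score is better: only the occupants count.
    return s.passenger_casualties

def minimize_casualties(s: Scenario) -> float:
    # All lives are weighted equally.
    return s.passenger_casualties + s.pedestrian_casualties

def minimize_societal_cost(s: Scenario) -> float:
    # A mixed (and contestable) objective: lives plus property damage,
    # with an invented weight of 1,000,000 per casualty.
    return (1_000_000 * (s.passenger_casualties + s.pedestrian_casualties)
            + s.property_damage_cost)

scenarios = [
    Scenario(passenger_casualties=1, pedestrian_casualties=0,
             property_damage_cost=5_000),
    Scenario(passenger_casualties=0, pedestrian_casualties=2,
             property_damage_cost=500),
]

# The car's "ethics" reduce to whichever objective is plugged in here.
for objective in (protect_passengers, minimize_casualties, minimize_societal_cost):
    chosen = min(scenarios, key=objective)
    print(objective.__name__, "->", chosen)
```

Nothing in the code privileges one objective over another; the moral weight sits entirely in the human decision of which function to pass to the decision loop.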

Since that blog entry and the dinner with my son, I have had a couple of interesting discussions with friends about the role of artificial intelligence in our society. Interestingly, two different perspectives emerged. The pessimists saw the triumph of so-called "artificial beings" as inevitable. The optimists dove into the complexity of human beings and drew very different conclusions.

In 1950, Alan Turing, the brilliant British mathematician, proposed "the imitation game", a test of whether a machine can converse so convincingly that a human judge cannot tell it from a person -- now known as the Turing Test. More recently, the victory of Google DeepMind's AlphaGo over a professional Go player provides yet another platform for debating this issue. Will there really come a day when it is impossible to distinguish between humans and machines? Will we go beyond talking about AI and speak instead of artificial beings?

Some experts believe that by 2050, some artificial "beings" will have an IQ of more than one million. Given that human IQ is normed to an average of 100, that is 10,000 times the human average. By then, the gap in intelligence between artificial beings and humans would be wider than the gap between an insect and a human. "For an artificial being, interacting with a human will feel like teaching a stone to play chess," a friend told me.

From a computational perspective, computers will certainly be able to outthink humans, but what does it really mean to think? Is it inevitable that machines will eventually think as humans do? Or will human consciousness, and our capacity for emotion and reason, always make the difference?

Currently, most artificial intelligences are operated by private companies. According to some, these artificial beings will inevitably escape their corporate parents and break free. They will then take control of the financial system, the power grid, military arsenals and transportation hubs, relegating humans to clueless spectators or, even worse, to "batteries," as The Matrix predicted many years ago. I must admit that this discussion shook me a bit, and I sought out other opinions.

"Imagine that you are a computer in a restaurant experiencing this discussion through your sensors," said my friend, a genius developer, "reading the faces, getting a sense of the surroundings, smelling the food and drinks that we are consuming, making sense of the terms that we use, doing sentiment analysis to infer the nuances of what was exchanged. We would not know what to do because that discussion has no obvious outcome that we could train our computer model against," he explained.

In the case of the Google team that beat the Go champion, even with an enormous amount of data there is a single optimization function: winning the game. In a discussion like the one we were enjoying about the potential dangers of AI, there is no winning -- or is there?

My other friend then pointed to another limitation of AI: in his view, AI will quickly hit a glass ceiling because of the limited interplay between sensors and algorithms. Take the dictionary definition of walking: "move at a regular pace by lifting and setting down each foot in turn, never having both feet off the ground at once..." That definition relies on the reader having seen or experienced a living animal or human being walking; to a machine, it would be meaningless.
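Both objections come down to the same gap: a game defines its own objective, while open-ended human experience does not. A minimal sketch of the contrast (the function names and reward values are mine, not AlphaGo's actual training setup):

```python
# A game like Go has a single, well-defined optimization target,
# so a terminal reward function is trivial to write down.

def go_reward(winner: str, player: str) -> float:
    """Reward at the end of a game: +1 for a win, -1 for a loss."""
    return 1.0 if winner == player else -1.0

# Self-play or tree search can then optimize expected reward directly.
print(go_reward(winner="black", player="black"))  # 1.0

def discussion_reward(transcript: list[str]) -> float:
    """What would 'winning' a dinner conversation even mean? Any
    number returned here encodes a value judgment that the
    conversation itself does not supply."""
    raise NotImplementedError("no objective outcome to optimize")
```

The first function is all an algorithm needs to become superhuman at Go; the second cannot even be written without a human first deciding what the goal is.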

AI's impact on how scientific research is conducted has yet to be determined. Researchers certainly use computers heavily to analyze data and, in some cases, to create virtual scenarios for testing hypotheses before time and money are invested in practical experiments. The 1987 book Discovering Causal Structure, in discussing the problems of science without experiments, cites the importance of recognizing the difference between practice and principle. For now, only humans can interact with and draw inspiration from co-workers, friends and family, who contribute to the process through emotion, empathy and their own experiences -- valuable assets of the human condition that are vital to the research process.

As a business executive, I am the first to advocate for innovation; it is the basis of our existence. But it is worth taking a moment, for those of us in science and technology, to ask how much we are willing to let AI be part of our lives, and at what point we should exercise caution and put some guidelines in place.
