Artificial Intelligence Risk - 12 Researchers Weigh in on the Dangers of Smarter Machines

Artificial intelligence (AI), once the red-headed stepchild of the scientific community, has come a long way in the past two decades. Most of us have reconciled ourselves to the fact that we can't live without our smartphones and Siri, and AI's seemingly omnipresent reach has infiltrated the nearest and farthest corners of our lives, from robo-advisors on Wall Street and crime-spotting security cameras, to big data analysis with Google's BigQuery and Watson's entry into medical diagnostics.

In many unforeseen ways, AI is helping to improve our lives and make them more efficient, though the degradation of human economic and cultural structures is also a potential outcome. The Future of Life Institute's tagline sums it up in succinct fashion: "Technology is giving life the potential to flourish like never before...or to self-destruct." Humans are the creators, but will we always have control of our revolutionary inventions?

To much of the general public, AI is AI is AI, but this is only partly true. Today, there are two primary strands of AI development: ANI (Artificial Narrow Intelligence) and AGI (Artificial General Intelligence). ANI, often termed "weak AI," is "the expert" of the pair, using its intelligence to perform specific functions. Most of the technology that surrounds us (including Siri) falls into the ANI bucket. AGI is the next generation beyond ANI, and it's the type of AI behind dreams of building a machine that achieves human levels of consciousness.

There is heated debate around whether or not machines will ever become conscious or sentient, at least in the way humans experience such a phenomenon. In a connected world where sharing one's own opinion is cheap and fast, one is bound to come across predictions that range from 10 years to 1,000 years to never at all. But how do we know which opinions are grounded in research and experience, and which are simply echoed from the many collected voices of the web, akin to a classic game of telephone?

I decided to hedge my bets and go into the trenches of the AI world, interviewing 12 leading experts and researchers active in the field, and emerge with informed perspectives on the possible futures of AGI and the risks involved. While their answers on those risks varied widely, there was a general consensus that the creation of a machine consciousness is possible.

Dr. Helgi Helgason, vice president of Operational Intelligence at Activity Stream, believes that "since human intelligence (and consciousness) occurs in nature, it must be a process emerging from physics and chemistry, and I see no theoretical reason that would prevent us from eventually reproducing it in man-made systems if we so desired." The thread of his statement was a common one, echoed in Dr. Andras Kornai's claim that "such things are possible to build from protein; it is evident that no magic will be required."

While Kornai, a professor at the Budapest Institute of Technology, eschews "magic", more than one expert notes that we still do not know how consciousness arises in humans, which seems to pose a real and present hurdle to the development of AGI. As Founder of Skeptic Magazine Dr. Michael Shermer sees it, "I am skeptical that this can be done any time in the near future because of the complexity of the human brain and our lack of understanding of how consciousness arises from neurons communicating via electro-chemical processes, but in the long term it will be done."

University of Sheffield's Dr. Noel Sharkey believes that because we don't have the facts around how consciousness arises, we don't know yet if it is possible to produce such a thing in a machine. "This question is not possible to answer because consciousness is still shrouded in mystery with no adequate scientific theory or model. People who talk with certainty about this are delusional. There is nothing in principle to say that it cannot be created on a computer but until we know what it is, we don't know if it can occur outside of living organisms," says Sharkey.

Others, including Utrecht University's Dr. Mehdi Dastani and Worcester Polytechnic Institute's Dr. Eduardo Torres Jara, believe that such consciousness can be created, but question whether such a machine will experience 'consciousness' in the same way humans do, which at present seems to be the impossible question.

No one can predict the future, but making an informed estimate is the next best thing. Based on this sample of researchers, the greatest concentration of predictions fell between 2021 and 2060, or the next 10 to 50 years. If these predictions turn out to be correct, that's not a whole lot of time to consider the ethical ramifications of what such an AI presence might inflict on society as a whole. Elon Musk is not alone in his fears about the dangers of artificial intelligence - Oxford's Nick Bostrom and the University of California's Stuart Russell are two of many who hold this stance - but there is disagreement about near-term AI threats.

Of the 12 researchers interviewed, three threads rise from the data. The majority of the experts are concerned about financial and economic harms that already exist but that may be exacerbated to extremes without a conscious plan for moving forward, resulting in risks ranging from wider gaps in wealth distribution to negative environmental effects, such as pollution and resource exhaustion.

"The risks brought about by near-term AI may turn out to be the same risks that are already inherent in our society. Automation through AI will increase productivity, but won't improve our living conditions if we don't move away from a labor/wage-based economy," says Cognitive Scientist Dr. Joscha Bach. Dr. Helgason argues that this is more of a certainty than a risk, and that suchs risk should already be factored into education policies.

Both Dr. Kornai and the University of Arkansas's Dr. Daniel Berleant also foresee potentially catastrophic issues with autonomous financial algorithms that are employed to make money for their owners, without "human-centric" goals. Dr. Dastani worries that the computational capacity of intelligent machines will outpace that of humans. He states that "the increasing interactions between autonomous computer systems may cause unpredictable, not traceable, and perhaps undesirable outcomes." Financial regulation and a reworking of the structure of our economy are the most obvious solutions to these issues, but the complex details remain to be adequately discussed and debated, especially in the public political arena.

In line with fears often aired in the media, both anti-killer-robot activist Dr. Sharkey and Brandeis University's Dr. Michael Bukatin believe that autonomous machines pose a legitimate threat - whether superintelligences fighting one another and obliterating us in the process, or rampant autonomous armed conflict.

Another thought is that AI isn't evil (and never will be); instead, it's the humans behind the AI who are unpredictable and often untrustworthy, with short-sighted aims such as financial and political gain. Dr. Michael Shermer sees the likeliest near-term risk as involving "evil humans manipulating AI toward their ends, not evil AI itself, as no such thing will develop."

If autonomy and consciousness go hand in hand, then Dr. Eduardo Torres Jara believes the former is the greater threat - though he doesn't see this happening anytime in the near future. "It is hard to believe that AI will be an actual risk. Any advanced technology has its own risks. For example, the flight control of the space shuttle can fail and generate an accident; however, the technology used to control the space shuttle itself is not dangerous. In the case of robots, we might not want to have weaponized autonomous robots because 'autonomy' is not reliable enough even in robots with less fatal consequences in case of failure," says Torres Jara.

While there are far fewer researchers on the opposite end of the spectrum, there are a few - including George Mason University's Dr. Robin D. Hanson - who see little to no risk at all of near-term AI threats. We might call this class the AI optimists.

Do we dare look beyond the next two decades to, say, AI risks over the next 10 decades? Despite open statements from eminent businesspeople such as Bill Gates, and Elon Musk's well-publicized warnings about AI, most researchers understandably grow more anxious about articulating such far-reaching risks, and some abstain from answering at all. In some ways, this is a wise move; a persistent lesson of history is that the future often turns out far different from what our minds are capable of imagining today.

On the other hand, it's our present imaginations and actions that help shape the future. While prevention efforts and solutions to AI risk may stay grounded in the near term, it doesn't seem wholly irrational to cast our gaze toward a more distant future and imagine the potential darker paths that AI may take. For the most part, the researchers gave answers similar to, though shorter than, those they gave for near-term risks.

From AI-influenced oppression via laws put in place by corrupt leaders, to destructive capitalism, to opaque and unverifiable AI that we can't fully comprehend, the potential risks vary but are nonetheless disturbing. What few researchers doubt is that sooner or later, artificial intelligence will transform work and life for humanity - luckily, it seems that we may have some time to steer it.
