The AI Wars: The Battle of the Human Minds to Keep Artificial Intelligence Safe

For all the media fear-mongering about the rise of artificial intelligence and the potential for malevolent machines in the future, a battle of the AI war has already begun. But this one is being waged by some of the most impressive minds within the realm of human intelligence today.

At the start of 2015, few AI researchers were worried about AI safety, but that changed quickly as AI exploded onto the scene and luminaries of science and technology warned of its potential for great harm. The year also saw the creation and growth of new AI research institutes and funding opportunities. Many in science and industry have joined the AI-safety-research-is-needed camp, but there are still some stragglers of staggering intellect. So just what does the debate still entail?

A symposium at this month's Neural Information Processing Systems (NIPS) conference, featuring some of the greatest AI minds in research and industry, provides some insight.

[Author's note: The following are symposium highlights grouped together by topic to inform about arguments in the world of AI research. The discussions did not necessarily occur in the order below.]

What is AGI and should we be worried about it?
Artificial general intelligence (AGI) is the term for artificial intelligence that would be, in some sense, equivalent to human intelligence. It wouldn't solve just one narrow, specific task, as today's AI does, but would instead solve a variety of problems and perform a variety of tasks, with or without being programmed to do so. That said, it's not the most well-defined term. As Yann LeCun, director of Facebook's AI research group, put it, "I don't want to talk about human-level intelligence because I don't know what that means really."

If defining AGI is difficult, predicting if or when it will exist is nearly impossible. Some of the speakers didn't want to waste time considering the possibility of AGI since they considered it to be so distant. LeCun and Andrew Ng, a Stanford professor and Chief Scientist of Baidu, both referenced the likelihood of another AI winter, in which, after all this progress, scientists would hit a research wall that would take some unknown number of years or decades to overcome.

However, many of the participants disagreed with LeCun and Ng, emphasizing the need to be prepared in advance of problems, rather than trying to deal with them as they arise.

Shane Legg, co-founder of Google's DeepMind, argued that the benefit of starting safety research now is that it will help us develop a framework that will allow researchers to move in a positive direction toward the development of smarter AI. "In terms of AI safety, I think it's both overblown and underemphasized," he said, commenting on how profound -- both positively and negatively -- the societal impact of advanced AI could be. "Being prepared ahead of time is better than trying to be prepared after you already need some good answers."

Gary Marcus, Director of the NYU Center for Language and Music, added, "We don't just need to prepare for AGI, we need to prepare for better AI. [...] Already, issues of security and risk have come forth."

It's the economy
Among all of the AI issues debated by researchers, the one that received almost universal agreement was the detrimental impact AI could have on the job market. Erik Brynjolfsson, co-author of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, explained that we're in the midst of incredible technological advances, which could be highly beneficial, but our skills, organizations and institutions aren't keeping up. Because of the huge gap in pace, business as usual won't work.

Ng quickly became one of the strongest advocates for tackling the economics issue. "I think the biggest challenge is the challenge of unemployment," he said.

In fact, unemployment is already starting to appear, even with the very narrow AI that exists today. Around the world, low- and middle-skilled workers are being displaced by robots or software, and that trend is expected to accelerate.

"Technology has always been destroying jobs, and it's always been creating jobs," Brynjolfsson admitted, but he also explained that the current exponential rate of technological progress is unlike anything we've ever experienced in history.

A basic income and paying people to go back to school were both raised as possible solutions. However, solutions like these will only work if political leaders take the initiative soon to address the unemployment that near-future artificial intelligence will trigger.

Closing arguments
Ideally, technology is just a tool, neither inherently good nor bad. But if AI develops the capacity to think, that definition no longer quite fits: at that point, the AI isn't a person, but it isn't just an instrument either.

Ian Kerr, the Research Chair of Ethics, Law, and Technology at the University of Ottawa, spoke early in the symposium about the legal ramifications (or lack thereof) of artificial intelligence. The overarching question for an AI gone wrong is one of accountability: who is to blame, and who will be held responsible when something goes wrong?

The panelists all agreed that however smart AI might ultimately become, the progress will happen incrementally, rather than as the single "event" implied in so many media stories. We already have machines that are smarter and better at some tasks than humans, and that trend will continue.

For now, as Harvard Professor Finale Doshi-Velez pointed out, we can control what we get out of the machine: if we don't like or understand the results, we can reprogram it.

But how much longer will that be a viable solution?

Coming soon...
The article above highlights some of the discussion among AI researchers about whether we need to focus on AI safety research. Because so many AI researchers do support safety research, much of the symposium was also devoted to which areas pose the most risk and the most potential. We'll be starting a new series in the new year that goes into greater detail about the fields of study that AI researchers are most worried about and most excited about.

[Note: This version was shortened for the Huffington Post. Read the complete article here, including many more fascinating quotes by some of the top minds in the field of AI.]
