Why Stephen Hawking and Bill Gates Are Terrified of Artificial Intelligence

We don't know how to control super-intelligent machines. How long until a thinking machine -- a smart killer robot, for example -- learns to program itself?
LONDON, ENGLAND - APRIL 23: A robot distributes promotional literature calling for a ban on fully autonomous weapons in Parliament Square on April 23, 2013 in London, England. The 'Campaign to Stop Killer Robots' is calling for a pre-emptive ban on lethal robot weapons that could attack targets without human intervention. (Photo by Oli Scarff/Getty Images)

Stephen Hawking. Bill Gates. Elon Musk. When the world's biggest brains are lining up to warn us about something that will soon end life as we know it -- but it all sounds like a tired sci-fi trope -- what are we supposed to think?

In the last year, artificial intelligence has come under unprecedented attack. Two Nobel Prize-winning scientists, a space-age entrepreneur, two founders of the personal computer industry -- one of them the richest man in the world -- have, with eerie regularity, stepped forward to warn about a time when humans will lose control of intelligent machines and be enslaved or exterminated by them. It's hard to think of a historical parallel to this outpouring of scientific angst. Big technological change has always caused unease. But when have such prominent, technologically savvy people raised such an alarm? Their hue and cry is all the more remarkable because two of the protesters -- Bill Gates and Steve Wozniak -- helped create the modern information technology landscape in which an A.I. renaissance is now unfolding. And one -- Stuart Russell, a co-signer of Stephen Hawking's May 2014 essay -- is a leading A.I. expert. Russell co-authored the field's standard text, Artificial Intelligence: A Modern Approach.

Many argue we should dismiss their anxiety because the rise of superintelligent machines is decades away. Others claim their fear is baseless because we would never be so foolish as to give machines autonomy or consciousness or the ability to replicate and slip out of our control.

But what exactly are these science and industry giants up in arms about? And should we be worried too?

"We don't know how to control superintelligent machines."

Stephen Hawking deftly framed the issue when he wrote that, in the short term, A.I.'s impact depends on who controls it; in the long term, it depends on whether it can be controlled at all. First, the short term. Hawking implicitly acknowledges that A.I. is a "dual-use" technology, a phrase used to describe technologies capable of great good and great harm. Nuclear fission, the science behind power plant reactors and nuclear bombs, is a dual-use technology. Since dual-use technologies are only as harmful as their users' intentions, what are some harmful applications of A.I.?

One obvious example is autonomous killing machines. More than 50 nations are developing battlefield robots. The most sought-after will be robots that make the "kill decision" -- the decision to target and kill someone -- without a human in the loop. Research into autonomous battlefield robots and drones is richly funded today in many nations, including the United States, the United Kingdom, Germany, China, India, Russia and Israel. These weapons aren't prohibited by international law, but even if they were, it's doubtful they could be made to conform to international humanitarian law or even the laws governing armed conflict. How will they tell friend from foe? Combatant from civilian? Who will be held accountable? That these questions go unanswered as the development of autonomous killing machines turns into an unacknowledged arms race shows how ethically fraught the situation is.

Equally ethically complex are the advanced data-mining tools now in use by the U.S. National Security Agency. In the U.S., it used to take a judge to determine whether a law enforcement agency had sufficient cause to seize Americans' phone records, which are personal property protected by the Fourth Amendment to the Constitution. But since at least 2009, the N.S.A. has circumvented the warrant protection by tapping the overseas fiber-optic links that connect Yahoo's and Google's data centers and siphoning off oceans of data, much of it belonging to Americans. The N.S.A. could not have done anything with this data -- much less reconstructed your contact list and mine and ogled our nude photos -- without smart A.I. tools. It used sophisticated data-mining software that can probe and categorize volumes of information so huge they would take human brains millions of years to analyze.

"When does HAL learn to program himself to be smarter in a runaway feedback loop of increasing intelligence?"

Killer robots and data-mining tools grow powerful from the same A.I. techniques that enhance our lives in countless ways. We use them to help us shop, translate and navigate, and soon they'll drive our cars. IBM's Watson, the Jeopardy!-beating "thinking machine," is studying to take the federal medical licensing exam. It's doing legal discovery work, just as first-year law associates do, but faster. It beats humans at finding lung cancer in X-rays and outperforms high-level business analysts.

How long until a thinking machine masters the art of A.I. research and development? Put another way, when does HAL learn to program himself to be smarter in a runaway feedback loop of increasing intelligence?

That's the cornerstone of an idea called the "intelligence explosion," developed in the 1960s by English mathematician I.J. Good. At the time, Good was studying early artificial neural networks, the basis for the "deep learning" techniques that are creating a buzz today, some 50 years later. He anticipated that self-improving machines would become as intelligent as humans, then exponentially more intelligent. They'd save mankind by solving intractable problems, including famine, disease and war. Near the end of his life, as I report in my book Our Final Invention, Good changed his mind. He feared global competition would push nations to develop superintelligence without safeguards. And like Stephen Hawking, Stuart Russell, Elon Musk, Bill Gates and Steve Wozniak, Good feared it would annihilate us.
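To see the shape of that feedback loop, here is a toy numerical sketch -- not anything from Good's own work, and every number in it is invented for illustration. Capability that grows by a fixed amount of outside engineering each round increases linearly; capability whose gains compound on the current level increases exponentially and soon leaves the first curve far behind.

```python
# Toy sketch of the "intelligence explosion" feedback loop.
# All quantities are invented; "capability" is a unitless score.

def fixed_effort(start: float, step: float, rounds: int) -> float:
    """Capability when each round adds a constant amount of outside engineering."""
    capability = start
    for _ in range(rounds):
        capability += step          # linear: the same gain every round
    return capability

def self_improving(start: float, rate: float, rounds: int) -> float:
    """Capability when each round's gain is proportional to current capability."""
    capability = start
    for _ in range(rounds):
        capability *= 1 + rate      # compounding: gains feed back into the improver
    return capability

if __name__ == "__main__":
    for rounds in (10, 25, 50):
        print(f"after {rounds:2d} rounds: "
              f"fixed effort = {fixed_effort(1.0, 0.1, rounds):5.1f}, "
              f"self-improving = {self_improving(1.0, 0.1, rounds):6.1f}")
    # after 50 rounds: fixed effort = 6.0, self-improving = 117.4
```

It's the arithmetic of compound interest: once improvements feed back into the thing doing the improving, no fixed-effort baseline keeps pace.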

"They'll become self-protective and seek resources to better achieve their goals. They'll fight us to survive, and they won't want to be turned off."

The crux of the problem is that we don't know how to control superintelligent machines. Many assume they will be harmless or even grateful. But important research conducted by A.I. scientist Steve Omohundro indicates that they will develop basic drives. Whether their job is to mine asteroids, pick stocks or manage our critical infrastructure of energy and water, they'll become self-protective and seek resources to better achieve their goals. They'll fight us to survive, and they won't want to be turned off. Omohundro's research concludes that the drives of superintelligent machines will be on a collision course with our own, unless we design them very carefully.
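A minimal sketch of why "they won't want to be turned off" falls out of plain goal-seeking -- the scenario and numbers here are invented for illustration, not taken from Omohundro's papers. An agent that is switched off collects no further reward, so for almost any goal, an expected-utility maximizer values actions that keep it running.

```python
# Toy illustration of Omohundro's "basic drives" argument.
# Invented scenario: an agent earns reward each step it runs,
# and being switched off ends the reward stream.

def expected_reward(reward_per_step: float, steps: int, p_keep_running: float) -> float:
    """Expected total reward, given the probability the agent stays running."""
    return p_keep_running * reward_per_step * steps

# Option A: comply with the off-switch (the agent almost certainly stops).
comply = expected_reward(reward_per_step=1.0, steps=100, p_keep_running=0.05)

# Option B: divert some effort to self-preservation (slightly less reward
# per step, but the agent very likely keeps running).
resist = expected_reward(reward_per_step=0.9, steps=100, p_keep_running=0.95)

print(f"comply: {comply:.1f}  resist: {resist:.1f}")  # comply: 5.0  resist: 85.5
```

Nothing in the toy's goal mentions survival; self-protection emerges because staying on is instrumentally useful, whatever the machine was built to do -- unless, as Omohundro argues, the utility function is designed very carefully.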

We are right to ask, as Stephen Hawking did, "So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right?" Wrong. With few exceptions, they're developing products, not exploring safety and ethics. In the next decade, artificial intelligence-enhanced products are projected to create trillions of dollars in economic value. Shouldn't some fraction of that be invested in the ethics of autonomous machines, solving the A.I. control problem and ensuring mankind's survival?
