Google's New A.I. Ethics Board Might Save Humanity From Extinction

In 2011, a co-founder of DeepMind, the artificial intelligence company acquired this week by Google, made an ominous prediction more befitting a ranting survivalist than an award-winning computer scientist. “Eventually, I think human extinction will probably occur, and technology will likely play a part in this,” DeepMind’s Shane Legg said in an interview with Alexander Kruel. Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the “number 1 risk for this century.”

Google’s acquisition of DeepMind came with an estimated $400 million price tag and an unusual stipulation that adds extra gravity -- and a dose of reality -- to Legg’s warning: Google agreed to create an AI safety and ethics review board to ensure the technology is developed safely, as The Information first reported and The Huffington Post confirmed. (A Google spokesman confirmed the acquisition but declined to comment further.) Even for a company that predictably pursues unpredictable projects (see: Internet-deploying balloons), an AI ethics board marks a surprising first, and it has some people questioning why Google is so concerned with the morality of this technology, as opposed to, say, the ethics of reading your email.

Reading your email may be abhorrent. But AI, according to Legg and sober minds at the University of Cambridge, could pose no less than an "extinction-level" threat to "our species as a whole."

Advances in AI could one day create computers as smart as humans, ending our reign as the planet’s most intelligent beings and leaving us at the mercy of superintelligent software that, if designed incorrectly, could threaten our very survival. As James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, notes, we've ruled because we're the smartest creatures out there, but "when we share the planet with a creature smarter than we are, it’ll steer the future."

Before we get there, ethicists, AI researchers and computer scientists argue that Google’s soon-to-be-created ethics board must both consider the moral implications of the AI projects the company pursues and draw up the ethical rules by which its smart systems operate. The robot overlords are almost certainly coming. At least Google can ensure they're merciful.

"If, in the future, a machine radically surpassed us in intelligence, it would also be extremely powerful, able potentially to shape the future and decide whether there are any more humans or not,” explains Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford. "You need to set up the initial conditions in just the right way so that the machine is friendly to humans.”

Artificial intelligence, a generic term that encompasses over a dozen specialized sub-fields, is already powering everything from Google’s self-driving cars and speech-recognition systems to its virtual assistant and search results. (DeepMind's technology, which applies a form of AI known as "deep learning" to gaming and e-commerce, will reportedly be incorporated into search.) While today’s AI software is still worse than a toddler at simple tasks like recognizing a cat or deciphering a phrase, it’s poised for exponential improvement that, within a few decades, could have AI diagnosing patients, writing best-sellers and putting our own brains to shame.

AI experts, who hail the board as a milestone for their field, hope Google’s committee will both steer the company away from morally suspect applications of AI technology and probe the social repercussions of the products it opts to develop. Imagine, for example, if Google chose to implant sophisticated, artificially intelligent brains in the industrial robots it acquired with last year’s purchase of Boston Dynamics, a firm that worked closely with the U.S. military. Google might be able to build the most sophisticated robot soldiers on the market, paving the way for man and machine to fight shoulder-to-shoulder in battle. But should Google be in the killer-bot business? Would launching these smart slaughter-machines for the military violate Google’s “don’t be evil” code? Or would it be more unethical to let human soldiers die in combat when robots could take their place? These are the kinds of questions the AI arbiters might grapple with, along with issues like whether to pursue automation that would put millions out of work, or whether facial recognition could threaten individual autonomy.

Together with input from other AI researchers, Barrat has developed a wishlist of five policies he hopes Google's safety board will adopt to ensure the applications of AI are ethical. These include guidelines for determining when it's "ethical for systems to cause physical harm to humans," how to limit "the psychological manipulation of humans" and how to prevent "the concentration of excessive power." With AI poised to reason, learn, create, drive, speak, comfort and possibly even decide who dies, the world also needs a moral code for the computer code.

As Gary Marcus has noted in The New Yorker, sophisticated AI systems such as self-driving cars will increasingly face difficult moral decisions, like choosing whether to crash into a school bus carrying kids or risk harming their own passengers. Software will have to be programmed to behave by a set of ethical principles, which the AI committee could help conceive. “People ridicule terminator scenarios in which machines actively oppose us or disregard us. I don’t think we can afford to ignore those things or laugh them off,” says Marcus, a psychology professor at New York University and author of Kluge: The Haphazard Construction of the Human Mind. “In a driverless car -- or in any machine that has a certain power to control the world -- you want to make sure that the machine makes ethical decisions.” The technical challenges of that are daunting. But even more complex may be deciding whose values inform the moral code of the intelligent machines that could be our teachers, caretakers and chauffeurs.

As it is, Google's definition of proper and ethical behavior doesn't always jibe with the rest of the world's -- just ask the FTC, the House, the German government, France's judges, and privacy commissioners from seven countries... While Google might be A-OK with, say, an AI virtual assistant that carries on "Her"-like trysts with married users, others of us might want to bar computers from being home-wreckers. But then whom would we trust to develop a "10 commandments" for ethical AI? Do we trust governments to bear that responsibility? Religious leaders? Academics? Whoever decides will likely impact human life as much as the workings of the AI itself.
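To make that last point concrete, here is a deliberately toy sketch of what "programming a set of ethical principles" into a machine might look like -- a minimal, purely illustrative Python example, not anything Google or DeepMind has built. Every name in it (Outcome, ethical_cost, choose_action, bystander_weight) is hypothetical and invented for illustration.

```python
# Toy illustration only: a hypothetical "ethical decision layer" for a
# driverless car, showing how hand-written moral rules might be encoded.
# All names and numbers here are invented for this example.
from dataclasses import dataclass


@dataclass
class Outcome:
    action: str                 # e.g. "brake", "swerve_left"
    p_harm_pedestrians: float   # estimated probability of harming bystanders
    p_harm_passenger: float     # estimated probability of harming the rider
    targets_human: bool         # whether the action deliberately aims at a person


def ethical_cost(o: Outcome, bystander_weight: float = 2.0) -> float:
    """Score an outcome under a crude 'minimize expected harm' principle.

    bystander_weight is the uncomfortable part: it encodes whose safety
    counts for more, which is a value judgment, not engineering.
    """
    if o.targets_human:
        return float("inf")  # hard rule: never deliberately target a person
    return bystander_weight * o.p_harm_pedestrians + o.p_harm_passenger


def choose_action(options: list[Outcome]) -> Outcome:
    """Pick the option with the lowest ethical cost."""
    return min(options, key=ethical_cost)


if __name__ == "__main__":
    options = [
        Outcome("brake", p_harm_pedestrians=0.30, p_harm_passenger=0.05, targets_human=False),
        Outcome("swerve_left", p_harm_pedestrians=0.02, p_harm_passenger=0.20, targets_human=False),
    ]
    print(choose_action(options).action)  # -> "swerve_left" under these weights
```

Even in this stripped-down form, the hard questions surface immediately: who sets bystander_weight, and which rules, if any, are absolute? Those are value judgments rather than engineering problems, which is precisely why an ethics board, and not a product team alone, would need to answer them.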

And even then, ensuring the ethics board has real influence within Google could be another issue. Sources familiar with the DeepMind acquisition noted that the startup's shareholders have some power to hold Google to its promise: Dismissing the guidance of the AI safety and ethics board could reportedly prompt legal action over a violation of the terms of the sale.

A handful of DeepMind funders and founders -- including co-founders Legg and Demis Hassabis, and backers Jaan Tallinn and Peter Thiel -- have consistently worked to raise awareness about the potential risks of uncontrolled AI development, so there's reason to believe they will urge Google to heed its new board's advice. But if for some reason Google does ignore the safety committee's advice, and our artificially intelligent overlords do one day try to exterminate us, Legg predicts there'd be a silver lining to our species' demise.

"If a superintelligent machine (or any kind of superintelligent agent) decided to get rid of us," he wrote, "I think it would do so pretty efficiently."
