A.I. Expert Sounds Alarm Over 'Killer Robots'

Imagine an army of killer robots capable of tracking you down and taking you out -- all without any oversight from human handlers.

That might sound like something out of science fiction, but a leading computer science expert worries that such "lethal autonomous weapons systems" might soon become science fact.

"Technologies have reached a point at which the deployment of such systems is -- practically if not legally -- feasible within years, not decades," Dr. Stuart Russell, professor of computer science and engineering at the University of California, Berkeley, wrote recently in Nature. "Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenseless. This is not a desirable future."

Just last week, MIT released a video of its remarkable robotic cheetah, which can "see" and jump over obstacles in its path without any input from its human handlers.

But that's just one of several advanced robots that should give you pause once you consider how lethal weaponized versions of them could be.


This velociraptor-like "Raptor" robot, developed in Korea, can run faster than Usain Bolt.


These University of Pennsylvania "nano quadrotors" can engage in complex collective movements like swarming and pattern formation.

Russell isn't condemning all manifestations of artificially intelligent robots. He just doesn't think the decision to kill should be solely in the hands of a machine.

"Ideally we should have a treaty banning development and deployment of lethal autonomous weapons systems," he told The Huffington Post in an email. He's calling for professional societies to adopt official policies to address advances that could lead to dangerous robots.

"We should understand the ways in which knowledge can be used for harm and do our best to prevent it," he said in the email. "Although it is not the case at present, it will soon be the case that 'safe and beneficial' A.I. is just an intrinsic part of the field, just as ... 'not falling down' is an intrinsic part of bridge design."

And Russell isn't the only one calling for such oversight. Last April, Human Rights Watch released a report encouraging policymakers to ban the development and production of autonomous weapons.

“A fully autonomous weapon could commit acts that would rise to the level of war crimes if a person carried them out, but victims would see no one punished for these crimes,” Dr. Bonnie Docherty, senior arms division researcher at the organization and the report’s lead author, said in a written statement. “Calling such acts an ‘accident’ or ‘glitch’ would trivialize the deadly harm they could cause.”

Russell noted that we're still far from developing machines that could become our overlords -- the kind Stephen Hawking has warned about.

But if we do end up building such machines, "arming them seems like a really bad idea," Russell said.
