Artificial Intelligence Loves Natural Naiveté

The New York Times today published an op-ed by Jerry Kaplan: "Robot Weapons, What's the Harm?" In the article, Kaplan rebuts the recent open letter, signed by Stephen Hawking and others, warning against the unchecked use of A.I.; but his objections turn out to be, unfortunately, the same ones that have been making the rounds on the blogosphere over the past several years.

He suggests that A.I. will be selective and therefore spare civilians. Remember smart bombs and their record in the first Gulf War? Selective weapons get heavily hyped, and the very fact that they carry new branding enables new levels of use. Give me a smart weapon, and now I can aim high explosives inside a residential neighborhood that could never have been a target before, because my weapon is "smart" and fewer "collateral casualties" will occur. So will A.I. make each war strike more accurate? And will that simply increase the number of strikes tremendously, leading to ever more civilian involvement in war?

Kaplan's landmine example is a classic instance of value hierarchy: convince me that a new technology is good by reminding me that another technology is much worse. I remember when factory automation was held up by robot automation companies as good, in spite of massive unemployment, because outsourcing to Asia was even worse! So the argument goes: land mines are so bad that A.I. must be good.

In fact, land mines are an excellent example of system-level effects, specifically in the case of A.I. True story: the Department of Defense for many years funded the development of smart land mines. These land mines would use computer vision, vibration sensors and machine learning to detect what sort of object was about to trigger the mine -- person, tank, car, et cetera. In these university research projects, the mines also had hopping legs, so they could decide to explode, or just jump away and settle into new digs. Several of my robotics colleagues were proud to work on this project because, theoretically, the new A.I.-smart mines could be programmed to, say, avoid blowing up a child's leg. So they could be, in a theoretical sense, more humane (until the enemy wised up, mimicked children, and the mines got reprogrammed, of course).

The military was excited about these mines for many more reasons, though: smart mines can hop and sense, and therefore the government could argue they're not land mines, they're smart robots. That means they aren't subject to the international anti-personnel landmine treaties. And since they can move, they're much harder to find and disarm. Great for war. Not so great for cleaning up after war. And will governments make more smart robots than internationally banned landmines? You bet they will. This is one small, real-world example of what actually happens when A.I. joins the war party. We have to think about systems and organizations; A.I. is an enabler of dramatic change in how we wage war, and simplistic suggestions that computers are unemotional and won't kill out of anger miss the mark entirely.

Now, back to that open letter. What of Hawking et al.'s claims that A.I. may pose an existential threat to us? As for their argument vis-à-vis war, I am in violent agreement. A.I. is not the right tool for waging war. But this is not so much because A.I. is incredibly superior to us as because it is remarkably mediocre. Congressmen, military strategists and military funders consistently underestimate just how unintelligent Artificial Intelligence continues to be. The gap between what A.I. can really do and what the Singularity and exponential-growth adherents proclaim continues to grow, and with it expands the disconnect between reality and blogospheric debate. A.I. is not conscious, and we really have no idea how far away that might be. But A.I. is now famous enough, and effective enough, to convince us to yield ever more decisions to computers. And this will mean more power in the hands of the few corporations who own A.I. and its key substrate: the world's digital information. A.I. will not threaten humanity existentially, but it will further amplify the already sickly gap between haves and have-nots; between corporations and underemployed adults; between governments and disempowered citizens.

Kaplan is right to ask the question: what's the harm? But his dismissive answer is off the mark. The truth is nuanced, rich and incredibly important for society to debate.
