Results of the UN CCW Meeting on Killer Robots

This past week, state delegates to the United Nations (UN) convened under the auspices of the Convention on Conventional Weapons (CCW) to listen to experts discuss the moral, legal and operational issues pertaining to lethal autonomous weapons systems. Civil society also made its presence known, with representatives from the International Committee of the Red Cross (ICRC), Human Rights Watch (HRW), the International Committee for Robot Arms Control (ICRAC), Article 36, the Campaign to Stop Killer Robots, Mines Action Canada, PAX, the Women's International League for Peace and Freedom, and many others. Indeed, as some civil society delegates noted, this was the widest turnout of civil society in twenty years.

After four days of expert meetings, concomitant "side events" organized by the Campaign to Stop Killer Robots, and informal discussions in the halls of the UN, the conclusions are clear: lethal autonomous weapons systems deserve further international attention, warrant continued action toward prohibition, and, if left unregulated, may prove a "game changer" for the future waging of war.

While some may think this meeting on future weapons systems is a result of science fiction or scaremongering, the brute fact that the first multilateral meeting on this matter took place under the banner of the UN, and the CCW in particular, shows the importance, relevance and danger of these weapons systems in reality. Even more telling is the consensus among states in opposition to "fully autonomous weapons."

Now for the bad news: this meeting was a (crucial) first step, but many more steps will be required to gain an absolute and comprehensive ban of these systems. Moreover, as Nobel Peace laureate Jody Williams noted in her side event speech, the seeming consensus may be a strategic stalling tactic to assuage the worries of civil society and drag out or undermine the process. When pushed on the matter of lethal autonomous systems, there were sharp divides between proponents and detractors, and these divisions, not surprisingly, fell along lines of state power. Those who supported their creation, development and deployment came from a powerful and select few states, and many of the experts citing their benefits were also affiliated in some way or another with those states. The narrative this tells, of course, is Thucydides all over again: the powerful do what they can and the weak suffer what they must.

There is hope, however, in the collective power and action of smaller and medium states, as well as in the collective voice of civil society. Indeed, invoking the Martens Clause as a potential legal justification to ban lethal autonomous systems implicitly and explicitly notes the power of public conscience. Many states and civil society delegates raised this potential avenue, thereby challenging some of the experts' opinions that the Martens Clause would be insufficient or inapposite as a source of law for a ban.

The meetings also surprised and pleased many by putting ethics on the table at all. Serious questions about accountability, liability and responsibility arise from autonomous weapons systems, and such questions must be addressed before their creation or deployment. Paying homage to these moral complexities, states embraced the language of "meaningful human control" as an initial attempt to address these very issues. Any system must be under human control, but the level of that control, and the likelihood of abuse or perverse outcomes, must be addressed now, not after the systems are deployed. Thus in the coming months and years, states, lawyers, civil society and academics will have their hands full trying to elucidate what "meaningful human control" entails, and how, once agreed upon, it can be verified when states undertake to use such systems.

From my perspective, as a representative of ICRAC, a speaker at one of the side events, and an academic studying these issues, the meetings gave me hope that we might be able to preemptively ban such terrifying and morally abhorrent weapons systems before they start killing, destroying, capturing, and maiming, or are used as tools to violate human rights. The excellent work must continue, and while this small victory tastes sweet today, we cannot let it satisfy our sensibilities and turn bitter. For the moral, legal and operational issues raised by lethal autonomous weapons push our way of thinking and our commitments to law and human rights to the brink. As Hannah Arendt once claimed, radical evil is the result not of evil intent but of a very lack of intent (a lack of thinking). To delegate the decision to wage war and to kill to a machine is the highest example of Arendt's radical evil: for it means we have willingly agreed to be unthinking and ignorant of the horror and atrocities that these systems will surely commit in our names.
