In 1942, Isaac Asimov formulated the Three Laws of Robotics, which would define and unify his robot-centered science fiction. The first and cardinal rule of the Three Laws was that "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Asimov used these guidelines to create complex moral conundrums that drove science fiction forward and sparked countless imaginations. Asimov's dreams, however, are entering reality: over the past decade, explosive growth in the use of unmanned armed robotic vehicles has begun to reshape warfare. A number of powerful nations are on the cusp of developing fully autonomous weapons, capable of selecting and firing on targets on their own, without human intervention.
Unfortunately for us, unlike Asimov's society, there are no defined rules for our "killer robots." Some fear a "robot arms race" as the high-tech militaries of the world abandon policies of restraint and pursue ruthless development. In order to intervene in this field before investments, technological momentum, and new military doctrine make change difficult, the states parties to the United Nations Convention on Certain Conventional Weapons (CCW) convened in Geneva this week for the Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), which will be a vital influence on the final mandate to be developed later in November.
The hope of many attending is that the meeting of experts can be an important pre-emptive step toward banning LAWS before they get out of control. Over the course of the week, a clear consensus developed within the delegate body: it is morally and legally unacceptable for robots to kill people without human supervision.
The CCW was adopted by the UN in 1980 and has been ratified by over one hundred countries. The convention now carries five protocols, banning some of the most egregious aspects of modern warfare. A potential sixth protocol would most likely be the crowning achievement of the CCW, and would have a dramatic impact on the future of war.
The central discussion point at the meetings in Geneva has been defining "meaningful human control." What is the amount of human involvement that is both legally and morally necessary in targeting and attack decisions? Some nations, like Japan, are explicit about where they draw the line on "meaningful human control": no weapon should operate with humans out of the loop. However, the vast majority of the world remains hesitant. While still far removed from LAWS, the drone program of the United States has garnered significant criticism over the past decade even as it has become a pillar of American military policy. Very few nations have taken positions as clear as Japan's, leaving the conversation vague and inconsistent.
Human Rights Watch released a report just before the CCW meeting, arguing that autonomous weapons would muddle the legal waters of war, making it challenging to attribute legal responsibility for deaths caused by such systems.
As the report notes: "A variety of legal obstacles make it likely that humans associated with the use or production of these weapons - notably operators and commanders, programmers and manufacturers - would escape liability for the suffering caused by fully autonomous weapons."
As the debate continues, the hope is that the international security community can reach a formalized treaty within the next two years. Defining the terms has been arduous, but the progress made at the most recent meeting has been substantial. The path will be set this November, when the states parties to the CCW convene for their annual meeting. While the topic may not capture headlines, these discussions at the UN may have enormous ramifications for the future of the global community.