Autonomous Drones and the Ethics of Future Warfare

In an environment where most individuals are not combatants (think: Baghdad or Kabul), autonomous weapons' inability to assess individual intention makes their presence on the battlefield an international legal liability.

International human rights lawyers and military aviation specialists held their collective breath this summer when, 80 miles off the coast of Virginia, "Salty Dog 502" executed a flawless landing on the deck of the nuclear aircraft carrier USS Bush, catching the third wire before coming to a clean halt. The successful maneuver, which to the untrained eye appeared rather unexceptional, ushered in a new era for weapons systems and international humanitarian law, as it marked the first time an unmanned, autonomous drone landed on an aircraft carrier.

Other large, first-generation drones currently deployed by the CIA and Air Force (which robot expert Peter W. Singer likens to the "Model T Ford or the Wright Brothers' Flyer") require a human pilot operating a joystick to fly, but "Salty Dog 502," the culmination of an eight-year, $1.4 billion military project, is designed to launch, land and refuel in midair without human intervention.

"It is not often you get a chance to see the future, but that us what we got to see today," Navy Secretary Ray Mabus said shortly after the autonomous demonstrator touched down. "We didn't have someone...with a stick and throttle and rudder to fly this thing," added program manager Rear Adm. Mat Winter. "We have automated routines and algorithms."

International human rights lawyers were not nearly as ebullient.

Human Rights Watch issued an unequivocal report last November calling for an absolute ban on the development, production and use of autonomous weapons systems. The report concluded, "such revolutionary weapons would not be consistent with international humanitarian law and would increase the risk of death or injury to civilians during armed conflict."

A report by the Special Rapporteur to the United Nations, issued in April, came to a similar conclusion, stating, "[autonomous weapons] may seriously undermine the ability of the international legal system to preserve a minimum world order."

As it currently stands, international humanitarian law prohibits weapon systems that cannot follow the two cardinal rules of distinction and proportionality.

Developing useful systems that pass muster under the principle of distinction is particularly problematic for the U.S., which, for years, has been engaged in asymmetrical, urban counterinsurgencies, where enemies are often indistinguishable from civilians. Soldiers engage enemies only after observing subtle, contextual factors or taking direct fire. In an environment where most individuals are not combatants (think: Baghdad or Kabul), autonomous weapons' inability to assess individual intention -- e.g., a butcher chopping meat in a busy market or a child playing with a toy gun -- makes their presence on the battlefield an international legal liability.

Likewise, the proportionality of a military attack is predominantly dictated by split-second, value-based judgments, limited by the requirement of "humanity." The sudden presence of a school bus, for instance, may change a human soldier's proportionality calculus, deterring him from engaging.

Human soldiers, however, are not perfect.

In the heat of battle, technical indicators have, at times, proven more reliable than human judgment. In 1988, for instance, the USS Vincennes shot down an Iranian airliner after the warship's crew believed the aircraft was descending to attack when, in fact, the ship's onboard computers accurately indicated it was ascending to pass by harmlessly.

And while lethal engagement can be restrained by human compassion, it is just as often fueled by our basest instincts: rage and revenge. One need look no further than the civilian atrocities perpetrated by soldiers in Darfur, Rwanda, or Syria to see the possible effects of unchecked human emotions.

With the U.S. Department of Defense's stated goal of increasing the autonomy of weapons systems over the next decade, the real question then becomes how best to ensure compliance with customary international legal standards.

To start, human operators must be able to program the system's software with appropriate levels of doubt (i.e., the likelihood that an object or person is a lawful target, as well as the extent of potential collateral damage). In other words, the system would not target a person or object unless it could calculate, to a sufficient, pre-determined threshold of, say, 98 percent certainty, that it was engaging a lawful target.
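To make the idea concrete, here is a minimal sketch of what such an operator-set certainty threshold might look like in software. The names, threshold values and collateral-damage figure below are hypothetical illustrations for this post, not a description of any actual weapons system or its code.

```python
# Hypothetical illustration of a pre-determined certainty threshold for engagement.
# All names and numbers are invented for explanation; no real system is described.

from dataclasses import dataclass

LAWFUL_TARGET_THRESHOLD = 0.98   # operator-set certainty requirement (98 percent)
MAX_COLLATERAL_ESTIMATE = 0.0    # maximum acceptable estimated collateral damage

@dataclass
class TargetAssessment:
    lawful_target_probability: float   # system's confidence the object/person is a lawful target
    collateral_damage_estimate: float  # estimated harm to civilians or civilian objects

def may_engage(assessment: TargetAssessment) -> bool:
    """Permit engagement only if both the certainty and collateral-damage limits are met."""
    return (assessment.lawful_target_probability >= LAWFUL_TARGET_THRESHOLD
            and assessment.collateral_damage_estimate <= MAX_COLLATERAL_ESTIMATE)

# Example: 95 percent certainty falls below the 98 percent threshold, so the system holds fire.
print(may_engage(TargetAssessment(0.95, 0.0)))  # False
```

The point of the sketch is simply that the decisive judgment -- where to set the threshold and what counts as acceptable collateral damage -- remains a human one, made before the system is ever deployed.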

Until facial recognition software can be developed to the point where it enables autonomous weapons to accurately identify specific, individual targets, such weapons must only be deployed to areas where everyone is a combatant. Moreover, accountability, in the form of statutory criminal liability, must be established for commanders, supervisors or programmers who direct systems to engage unlawful targets.

In effect, these preliminary requirements would limit the deployment of autonomous weapons to instances where no more precise or discriminating alternative is available to achieve specific military objectives with less collateral damage. Further international operational guidelines and review standards must follow as the technology grows more sophisticated.

Seneca once said, "[a] sword is never a killer, it is a tool in the killer's hand." One day soon, Salty Dog may question that assertion... quite literally.

An earlier version of this story appeared in U.S. News & World Report.
