When U.S. Weapons Are Autonomous, Who Is Responsible?


Recently, the United States Department of Defense issued a report on increased autonomy in DoD weapons systems, examining the roles, problems and benefits that will come with the expanded use of self-directed weapons.

We are all familiar with the U.S.'s reliance on "drones" for surveillance and reconnaissance missions, as well as their use in targeting and killing suspected terrorists in countries like Yemen, Pakistan and Afghanistan. What is not typically noted is that the current generation of these weapons systems does not present any new legal or ethical problems.

Distanced killing or surveillance is functionally no different from sending a Tomahawk missile from an aircraft carrier or snooping from satellites in space. Questions of how these systems are used to kill American citizens abroad, or suspected terrorists within another country's borders, are, of course, a separate matter. This most recent report, however, is not about the current technology but about the proposed trajectory for automation and the DoD's attempts to assuage the fears of those of us following its course.

Unsurprisingly, the DoD wants to enlarge the U.S. military's reliance on autonomous (i.e. self-directed) weapons in conflict, to advance the level of autonomous action capabilities of existing weapons and to create new autonomous systems. What is surprising is that the DoD realizes that the public and the weapons operators are uncomfortable with the goals of increasing autonomy.

So its new tactic is to shift the terms of the debate. It now claims that traditional definitions of autonomy as "self-directed" are "unhelpful," and that "autonomy is better understood as a capability (or set of capabilities) that enables the larger human-machine system to accomplish a given mission." What the DoD is doing is shifting the discussion from the increased autonomy of weapons to the "mission" and "mission autonomy" (whatever that means). Previous attempts by various service branches to roadmap future levels of autonomy in weapons systems are, according to this new report, "counter-productive," as they only heighten Terminator-style fears.

The DoD goes further still, claiming that:

"casting the goal as creating sophisticated functions (i.e. more self-directedness) -- rather than creating a joint human-machine cognitive system -- reinforces the fears of unbounded autonomy and does not prepare commanders to factor their understanding of unmanned vehicle use that there exist no fully autonomous systems, just as there are no fully autonomous soldiers, sailors, airmen or Marines."

This position presents a nice little loophole with which to stop debate about increased autonomy in weapons systems. The critic says, "We worry about attributing responsibility to a weapon that decides to fire on a target by itself." The DoD responds, "There is a human-machine cognitive system, so don't worry, there is a human there!" But the question remains: where? How far removed is this person? The commander? The general? The president?

Moreover, as the above quote illustrates, this semantic sleight of hand blurs the lines of moral and legal responsibility for killing in war, given that the DoD believes that no soldiers, sailors, airmen or Marines are fully autonomous. This only makes sense if we work from a definition where the mission is the primary focus and autonomy is defined purely in terms of the "capability" to fulfill said mission.

Yet this is not what is usually meant by autonomy in everyday or philosophical use, nor what millennia of moral and legal systems have taken it to mean. Traditionally, we think of soldiers, sailors, airmen and Marines as autonomous because they are persons. That is, they have the capability for self-directed action. When they use this capability to choose their own course of action and, say, break the laws of war, we hold them accountable for their actions (legally as well as morally).

The idea that these persons are not fully autonomous says, first, that they cannot be held fully accountable. But second, it implies that the systems the DoD wants to exploit are also (if we read between the lines) incapable of bearing responsibility. We are not concerned with the system, or even the software designer or the commander; we are concerned with the "mission." A mission is not a person; it is a thing, and things cannot be held morally responsible. It is like saying that you want to hold your car responsible for breaking down on the way to work. You wouldn't say that your car "wronged" you, and you wouldn't seek to punish your car.

The result of all of this is that the DoD is attempting to side-step questions of morality and responsibility. It does not appear to endorse programming weapons with "ethical governors," that is, rules that would prohibit these weapons from, say, targeting a civilian. Rather, it is endeavouring to redefine the notion of autonomy, and this confuses an already convoluted topic.

Case in point, the report further states:

"Treating unmanned systems as if they had sufficient independent agency to reason about morality distracts from designing appropriate rules of engagement and ensuring operational morality. Operational morality is concerned with the professional ethics in design, deployment and handling of robots. Many companies and program managers appear to treat autonomy as exempt from operational responsibilities."

Are we concerned with weapons obeying the laws of war (and morality) as we traditionally think of them, or are we concerned with software designers upholding a (rather nonexistent) professional ethics in design? By the by, such a professional ethics would basically amount to the software designer taking precautions against knowingly designing or fielding a product that would cause harm.

Now, these weapons are designed to harm, but the type of harm to be avoided would be negligent harm. Such a position on the ethics of autonomous systems not only reduces any questions of morality or responsibility to tort law and issues of liability, but has the potential to divorce the idea of morality from the discussion entirely. For instance, we might say that there is a professional ethics amongst a band of thieves, but we would not say that the activities of a band of thieves are moral. To claim that the DoD, and thus the U.S. military, should focus on "operational" responsibility is like claiming that the band of thieves ought to focus on not ratting each other out.

Of course, we could be charitable to those inside the Beltway and claim that the DoD is sensitive to issues of ethics, and that its emphasis on operational morality addresses the point. Those in charge of the design, deployment and handling of robots are the ones who must act ethically, and who will be held accountable. But this just kicks the can down the road again. It puts us back to our original question of who is actually responsible, and how far removed that person is from the deployment of weapons that have the potential of making their own targeting decisions. For if we take the DoD at its word that not even persons are fully autonomous, then we are again back to the problem of definition and whether anyone can ever be held responsible for the use (or abuse) of these weapons.

Ultimately, it appears that the DoD is not only going to try to exploit every opportunity to use unmanned systems, but is also implicitly skirting the legal and moral questions raised by the deployment of such weapons by redefining what "autonomy" actually means and relying on "codes" of ethics that are not what we traditionally think of as ethical. It amounts to political prestidigitation, with the DoD rewriting ethical code on more than one level.
