Experts Weigh In On Autonomous Weapons

FLI’s Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at The Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capabilities. Asaro is a philosopher of science, technology, and media at The New School in New York City. He looks at fundamental questions of responsibility and liability with all autonomous systems, but he’s also the Co-Founder and Vice-Chair of the International Committee for Robot Arms Control and a spokesperson for the Campaign to Stop Killer Robots.

The following interview has been edited for brevity, but you can read it in its entirety here or listen to it here.

ARIEL: Dr. Roff, I’d like to start with you. With regard to the database, what prompted you to create it, what information does it provide, and how can we use it?

ROFF: The main impetus behind the creation of the database [was] a feeling that the same autonomous or automated weapons systems were brought out in discussions over and over and over again. It made it seem like there wasn’t anything else to worry about. So I created a database of about 250 autonomous systems that are currently deployed [from] Russia, China, the United States, France, and Germany. I code them along a series of about 20 different variables: from automatic target recognition [to] the ability to navigate [to] acquisition capabilities [etc.].

It’s allowing everyone to understand that autonomy isn’t just binary. It’s not a yes or a no. Not many people in the world have a good understanding of what modern militaries fight with, and how they fight.

ARIEL: And Dr. Asaro, your research is about liability. How is it different for autonomous weapons versus a human overseeing a drone that accidentally fires on the wrong target?

ASARO: My work looks at autonomous weapons and other kinds of autonomous systems and the interface of the ethical and legal aspects. Specifically, questions about the ethics of killing, and the legal requirements under international law for killing in armed conflict. These kinds of autonomous systems are not really legal and moral agents in the way that humans are, and so delegating the authority to kill to them is unjustifiable.

One aspect of accountability is, if a mistake is made, holding people to account for that mistake. There’s a feedback mechanism to prevent that error occurring in the future. There’s also a justice element, which could be attributive justice, in which you try to make up for loss. Other forms of accountability look at punishment itself. When you have autonomous systems, you can’t really punish the system. More importantly, if nobody really intended the effect that the system brought about, then it becomes very difficult to hold anybody accountable for the actions of the system. The debate is really framed around this question of the accountability gap.

ARIEL: One of the things we hear a lot in the news is about always keeping a human in the loop. How does that play into the idea of liability? And realistically, what does it mean?

ROFF: I actually think this is just a really unhelpful heuristic. It’s hindering our ability to think about what’s potentially risky or dangerous or might produce unintended consequences. So here’s an example: the UK’s Ministry of Defence calls this the Empty Hangar Problem. It’s very unlikely that they’re going to walk down to an airplane hangar, look in, and be like, “Hey! Where’s the airplane? Oh, it’s decided to go to war today.” That’s just not going to happen.

These systems are always going to be used by humans, and humans are going to decide to use them. A better way to think about this is in terms of task allocation. What is the scope of the task, and how much information and control does the human have before deploying that system to execute? If there is a lot of time, space, and distance between the time the decision is made to field it and then the application of force, there’s more time for things to change on the ground, and there’s more time for the human to basically [say] they didn’t intend for this to happen.

ASARO: If self-driving cars start running people over, people will sue the manufacturer. But there are no mechanisms in international law for the victims of bombs and missiles and potentially autonomous weapons to sue the manufacturers of those systems. That just doesn’t happen. So there are no incentives for companies that manufacture those [weapons] to improve safety and performance.

ARIEL: Dr. Asaro, we’ve briefly mentioned definitional problems of autonomous weapons — how does the liability play in there?

ASARO: The law of international armed conflict is pretty clear that humans are the ones that make the decisions, especially about a targeting decision or the taking of a human life in armed conflict. This question of having a system that could range over many miles and many days and select targets on its own is where things are problematic. Part of the definition is: how do you figure out exactly what constitutes a targeting decision, and how do you ensure that a human is making that decision? That’s the direction the discussion at the UN is going as well. Instead of trying to define what’s an autonomous system, what we focus on is the targeting decisions and firing decisions of weapons for individual attacks. What we want to require is meaningful human control over those decisions.

ARIEL: Dr. Roff, you were working on the idea of meaningful human control, as well. Can you talk about that?

ROFF: If [a commander] fields a weapon that can go from attack to attack without checking back with her, then the weapon is making the proportionality calculation, and she [has] delegated her authority and her obligation to a machine. That is prohibited under international humanitarian law (IHL), and I would say is also morally prohibited. You can’t offload your moral obligation to a nonmoral agent. So that’s where our work on meaningful human control is: a human commander has a moral obligation to undertake precaution and proportionality in each attack.

ARIEL: Is there anything else you think is important to add?

ROFF: We still have limitations of AI. We have really great applications of AI, and we have blind spots. It would be really incumbent on the AI community to be vocal about where they think there are capacities and capabilities that could be reliably and predictably deployed on such systems. If they don’t think that those technologies or applications could be reliably and predictably deployed, then they need to stand up and say as much.

ASARO: We’re not trying to prohibit autonomous operations of different kinds of systems or the development and application of artificial intelligence for a wide range of civilian and military applications. But there are certain applications, specifically the lethal ones, that have higher standards of moral and legal requirements that need to be met.
