A moral case for the use of autonomous weapons systems

Erich Riesen, writing in the Journal of Military Ethics:

I contend that a state and its agents would avoid exposing their own troops to unnecessary moral, psychological, and lethal risk by deploying AWS, and that there is no other feasible way of achieving these decreased levels of risk. Therefore, a state and its agents are obligated to deploy technologically sophisticated AWS. A technologically sophisticated autonomous weapon is one that matches the average performance of human-in-the-loop systems (e.g., drones) when it comes to acting in accordance with the laws of war (e.g., distinction, surrender, proportionality). . . . Utilizing such systems would reduce psychological risk by reducing the number of humans on the ground (or in Nevada) making life and death decisions. Fewer pilots and soldiers means less psychological harm.

The Moral Case for the Development of Autonomous Weapon Systems

Riesen starts from the premise, developed earlier in the paper, that it is moral to use uninhabited vehicles (i.e., drones) in just military actions because states have an obligation to protect their soldiers. He then extends that obligation of protection to cover soldiers' moral and psychological well-being.

It is an interesting and provocative argument, given its two major assumptions: (1) that the military action is just, and (2) that autonomous systems can be made sophisticated enough to match human-in-the-loop systems in their adherence to the laws of war.