Holding Killer Robots Accountable? The New Moral Challenge of 21st-Century Warfare
More and more military robots are being programmed to kill. The hope is that their lack of emotion and their technical precision will save lives. But officials also expect these same robots to become fully autonomous, that is, capable of making their own decisions about killing. The possibility that they may begin to kill indiscriminately is a scenario that a growing number of military strategists and ethicists are taking seriously.
Military strategists in the most powerful countries are betting that autonomous robotic weapons will give them a military advantage. In fact, the U.S., Britain, and China have already begun research on the development of new Lethal Autonomous Weapons Systems (LAWS), advanced robotic weapons systems that carry their own onboard sensors. In 2015, the United States, for instance, unveiled the X-47B, a new pod-shaped aircraft that can be autonomously refueled in mid-air, while Britain, not to be outdone, is working on the Taranis aircraft, which is equipped with automatic laser sensors. With nearly USD 72 billion invested in such technology, the U.S. maintains that it poses few risks to civilians and, more importantly, that it will allow the U.S. to better protect itself from outside threats.
However, it is also true that such technologies may cause greater collateral damage. Indeed, software malfunctions and programming errors have already exposed the limitations of LAWS technology. Signs of this threat surfaced in earlier incidents involving limited-supervision, or semi-autonomous, LAWS. In 2007, a South African semi-autonomous anti-aircraft system accidentally fired on and killed nine South African soldiers; and in 1988, the U.S. Navy's Aegis air defense system mistakenly shot down an Iranian passenger jet. Both incidents raise the question of whether we can afford to ignore the moral and political fallout of producing LAWS.
It is a question that has also called attention to the thorny issue of whether killer robots can ever be held accountable for their actions. Indeed, with no human at the helm, no person whose emotions and conscious actions can be traced, it becomes increasingly unclear how to prosecute the destructive actions of robots. One option is to file civil charges, effectively holding the programmers of these robots liable for damages. But this is unlikely to curb the destructive actions of killer robots, since it requires proof that the maker knew of a programming defect. With no reliable criteria for establishing intent, then, the criminal accountability of LAWS remains a pressing issue, particularly given the U.S. plan to make its robots fully autonomous within the next ten years.
In this case, any perceived military advantage will raise the question of who holds responsibility: if fully autonomous robots lack emotions and mental states (conscious thoughts), there will be little or no legal basis for establishing direct or even command responsibility (the principle that commanders with foreknowledge of destructive outcomes had a duty to prevent them). The result is that the guilt and intent of a growing population of killer robots will become increasingly displaced within the corpus of international criminal law. This, in turn, will undermine the evolution and efficacy of international criminal law and its many rules of procedure for determining the intent and knowledge of war criminals. The International Criminal Court (ICC) and the international criminal tribunals, for example, have brought hundreds of war criminals to justice and arguably helped deter criminal behavior. Such deterrent effects, which rely on the capacity of courts to probe the mental state of perpetrators, cannot possibly apply to fully autonomous robots programmed to kill.
This has led many to speak of an accountability gap between international criminal law and autonomous robots. For some, bridging this gap will require a complete ban on LAWS. Human Rights Watch (HRW) has been at the forefront of this movement, along with the Campaign to Stop Killer Robots, a coalition of nongovernmental organizations (NGOs) working to ban fully autonomous weapons. In a report issued in April 2015, HRW documented the rapid rise of semi-autonomous weapons, arguing that regulation will do little to stop the destructive impact of fully autonomous killer robots. HRW lawyers and activists recently voiced their concerns at a delegate meeting of the Convention on Certain Conventional Weapons, an agreement signed by 121 countries that have pledged to eliminate weapons that kill civilians indiscriminately. The meeting did little to change the political reality of LAWS or the most powerful countries' commitment to produce better, more sophisticated LAWS. As political strategists Peter Singer and August Cole argue, it is far more realistic to erect new laws and rules to hold humans accountable for any lethal mistake made by the robots they produce. By clarifying which makers are and are not responsible, the hope is that authorities will adopt rules constraining the reckless behavior of states and corporations.
New ethical guidelines should be developed to regulate the moral conduct of those who program the robots. This involves, first and foremost, challenging the prevailing moral claim that the most powerful countries have an obligation to use LAWS because of the few risks they pose to civilians. Such an obligation can only lead to more collateral damage, making it morally unsustainable in light of the estimated 474 civilian deaths caused by drone strikes from 2009 to 2015. The priority now is to formulate moral criteria that allow us to address the accountability gap described above through flexible obligations, principles, and rules of cooperation. Conceptually, this would entail readapting just war theory to address the legal status of robots as a special type of combatant. Developing this framework will involve political costs and sacrifice, but the reward will be worth the cost.
Steven C. Roach is Associate Professor of International Relations in the School of Interdisciplinary Global Studies at the University of South Florida-Tampa. He has written several books and articles on ethics and international relations and is currently co-editor of the new SUNY book series, “Ethics and the Challenges of Contemporary Warfare.” Follow him on Twitter @sroach82.