The Debate About Autonomous Weapons Systems

Publication Date: Monday, April 21, 2014

A seemingly unprecedented development has been taking shape over the past few years in the domain of international humanitarian law (IHL). For perhaps the first time, non-governmental organizations (NGOs), scholars, human rights advocates, and policy actors have been debating the legal and ethical implications of the military use of technology that does not yet exist. In recent months, for example, the use of autonomous weapons systems has been the focus of events convened by the American Society of International Law (ASIL) and Chatham House. A recent issue of the International Review of the Red Cross, titled "New technologies and warfare," also examined this topic in depth.

Some existing weapons systems already function with a certain degree of autonomy. For example, landmines operate through a victim-activated trigger that does not require command authorization for detonation. For this reason, landmines raise questions about the principle of distinction under IHL. (See, for example, Rule 81 of the Study on Customary International Humanitarian Law conducted by the International Committee of the Red Cross.) Though anti-personnel mines have been banned by the 1997 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, which has been signed and ratified by 161 states, the use of anti-vehicle landmines, designed on the assumption that they can distinguish between people and vehicles, persists. Furthermore, certain anti-vehicle landmines use sensors that can allegedly distinguish between military and civilian vehicles. And recent technological advances have led to a wide array of weapons systems with autonomous aspects, including drones that can fly autonomously, as well as various types of weapons detailed in a 2013 report by the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions.

However, the debates that have arisen in this domain have focused on weapons systems yet to be invented. As one commentator states of the recent ASIL event on autonomous weaponry, the “discussion was forward-looking, at times openly speculative, reflecting the nascent stage of the development of these weapons systems (…) [and focusing] largely [on] trying to assess weapons that do not yet exist, and that may not exist for decades.”

These discussions have coalesced around several crucial questions. First, is it even possible for an autonomous weapons system to adhere to IHL? Some commentators argue that these weapons could serve the aim of civilian protection by conducting more precise attacks on military targets. However, the powerful counterpoint, as noted by one author, is:

The problem for the principle of distinction is that we do not have an adequate definition of a civilian that we can translate into computer code. The laws of war do not provide a definition that could give a machine the necessary information. The 1949 Geneva Conventions require the use of common sense, while the 1977 Protocol I defines a civilian in the negative sense as someone who is not a combatant.
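The quoted difficulty can be made concrete with a deliberately naive sketch. The predicate and field names below are hypothetical, invented purely for illustration; the point is that Additional Protocol I's negative definition gives a program no observable features to test, so the code can only "decide" civilian status if the legal conclusion is already supplied as input:

```python
# Illustrative only: a naive attempt to encode Additional Protocol I's
# negative definition of "civilian". All field names are hypothetical.

def is_combatant(person: dict) -> bool:
    # IHL ties combatant status to facts like armed-forces membership or
    # direct participation in hostilities. No sensor feed yields these
    # facts directly; the boolean values below would themselves have to
    # come from human legal judgment, not machine perception.
    return person.get("member_of_armed_forces", False) or \
           person.get("directly_participating_in_hostilities", False)

def is_civilian(person: dict) -> bool:
    # Protocol I defines a civilian negatively: anyone who is not a
    # combatant. The definition supplies no positive features to detect.
    return not is_combatant(person)
```

The program "works" only because the hard question (who counts as a combatant) has been pushed into its input, which is precisely the gap the quoted author identifies.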

Second, who would be accountable under IHL for violations that result from autonomous weaponry? Should the engineers and computer programmers who designed the weapons system be held accountable? By removing human decision-making from the equation, autonomous weapons systems could pose a true conundrum for accountability.

Third, how can civil society actors and others best mobilize to grapple with the challenges posed by this technology? In particular, given that this technology does not yet exist, is this topic in fact worthy of advocacy attention, especially given the resource-scarce nature of the human rights and NGO communities? In practice, this question may already be moot. The drive to ban autonomous weapons systems has already begun (see, for example, advocacy efforts pursued by Human Rights Watch and by the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions) and has already elicited counterarguments (see, for example, the work of Kenneth Anderson and Matthew Waxman). Much like the forward march of technology itself, the progression of this debate, already underway, cannot easily be reversed.

Overall, these issues point to the relationship between the principles of IHL and the practicalities of real-world implementation. On the one hand, IHL exhibits a certain degree of flexibility, allowing the law to apply to a wide array of circumstances that fall under the umbrella of armed conflict. On the other hand, this flexibility means that IHL is not easily translatable into a computer program that would allow a machine, without human intervention, to conduct (and act upon) assessments regarding distinction, necessity, and proportionality. Additionally, IHL is based on the assumption that human beings conduct these assessments and should be held accountable for their decisions. In this sense, the move toward autonomous weapons systems could usher in a new phase of warfare that redefines the interaction between the principles of IHL and the practical realities of implementing these norms. The debates that have arisen about this topic reflect differing views regarding the extent to which the relationship between principles and practical application will be redefined, how different actors, including militaries, lawyers, and members of civil society, should prepare for and engage with this potential shift, and whether these developments will promote or threaten the enduring relevance of IHL as a normative framework for behavior during armed conflict.


All materials © 2014 Harvard University
