Artificial Intelligence, Weapons Systems, and Human Control


Ingvild Bode is an Associate Professor of International Relations at the University of Southern Denmark and Principal Investigator of the ERC Research Project on Autonomous Weapons Systems and International Norms (AUTONORMS), Denmark
Hendrik Huelss is an Assistant Professor at the Center for War Studies, Department of Political Science and Public Management at the University of Southern Denmark, and Senior Researcher in the European Research Council (ERC) project AutoNorms (2020-2025), Denmark

Volume I, Issue 1, 2022
Defense & Security Forum
a Mauduit Study Forums’ Journal
Remy Mauduit, Editor-in-Chief

Bode, Ingvild & Huelss, Hendrik (2021) ‘Artificial Intelligence, Weapons Systems, and Human Control’, E-International Relations, ISSN 2053-8626.

ARTICLE INFO
Keywords
artificial intelligence
weapons systems
drones
human control
remote warfare

ABSTRACT
The use of force by the militarily most advanced states in the last two decades has been dominated by ‘remote warfare’, which, at its simplest, is a ‘strategy of countering threats at a distance, without the deployment of large military forces’. Although remote warfare comprises diverse practices, academic research and the broader public have paid most attention to drone warfare as a highly visible form of this ‘new’ interventionism. Research has produced important insights into the various effects of drone warfare in ethical, legal, political, but also social and economic contexts. But current technological developments suggest an increasing, game-changing role of artificial intelligence (AI) in weapons systems, represented by the debate on emerging autonomous weapons systems (AWS). This development poses a new set of important questions for international relations, which pertain to the impact that increasingly autonomous features in weapons systems can have on human decision-making in warfare, leading to highly problematic ethical and legal consequences.


The use of force by the militarily most advanced states in the last two decades has been dominated by ‘remote warfare’, which, at its simplest, is a ‘strategy of countering threats at a distance, without the deployment of large military forces’. [1] Although remote warfare comprises diverse practices, academic research and the broader public have paid most attention to drone warfare as a highly visible form of this ‘new’ interventionism. Research has produced important insights into the various effects of drone warfare in ethical, legal, political, but also social and economic contexts. [2] But current technological developments suggest an increasing, game-changing role of artificial intelligence (AI) in weapons systems, represented by the debate on emerging autonomous weapons systems (AWS). This development poses a new set of important questions for international relations, which pertain to the impact that increasingly autonomous features in weapons systems can have on human decision-making in warfare, leading to highly problematic ethical and legal consequences.

In contrast to remote-controlled platforms such as drones, this development refers to weapons systems that are AI-driven in their critical functions. That is, weapons that process data from onboard sensors and algorithms to ‘select (i.e., search for or detect, identify, track, select) and attack (i.e., use force against, neutralize, damage or destroy) targets without human intervention’. [3] AI-driven features in weapons systems can take many forms but depart from what one might conventionally understand as ‘killer robots’. [4] We argue that the inclusion of AI in weapons systems matters not because we seek to highlight the looming emergence of fully autonomous machines making life-and-death decisions with no human intervention, but because human control is increasingly becoming compromised in human-machine interactions.

AI-driven autonomy has already become a new reality of warfare. We find it, for example, in aerial combat vehicles such as the British Taranis, in stationary sentries such as the South Korean SGR-A1, in aerial loitering munitions such as the Israeli Harop/Harpy, and in ground vehicles such as the Russian Uran-9. [5] These diverse systems are captured, somewhat problematically, by the catch-all category of autonomous weapons, a term we use as a springboard to draw attention to present forms of human-machine relations and the role of AI in weapons systems short of full autonomy.

The increasing sophistication of weapons systems arguably exacerbates trends of technologically mediated forms of remote warfare that have been around for some decades. The decisive question is how new technological innovations in warfare impact human-machine interactions and increasingly compromise human control. Our contribution aims to investigate the significance of AWS in remote warfare by discussing, first, their specific characteristics, particularly the essential aspect of distance, and, second, their implications for ‘meaningful human control’ (MHC), a concept that has gained increasing importance in the political debate on AWS. We will consider MHC in more detail further below.

We argue that AWS increase fundamental asymmetries in warfare and that they represent an extreme version of remote warfare in realizing the potential absence of immediate human decision-making on lethal force. We examine MHC, which has emerged as a core concern for states and other actors seeking to regulate AI-driven weapons systems. Here, we also contextualize the current debate with state practices of remote warfare relating to systems that have already set precedents for ceding meaningful human control. We argue that these incremental practices are likely to change use-of-force norms, which we loosely define as standards of action. [6] Our argument is therefore less about highlighting the novelty of autonomy, and more about how practices of warfare that compromise human control become accepted.

Autonomous Weapons Systems and Asymmetries in Warfare

AWS increase fundamental asymmetries in warfare by creating physical, emotional, and cognitive distancing. First, AWS increase asymmetry by creating physical distance, completely shielding their commanders/operators from physical threats or from being on the receiving end of any defensive attempts. We do not argue that the physical distancing of combatants started with AI-driven weapons systems. This desire has historically been a common feature of warfare, and every military force seeks to protect its forces from harm as much as possible, which some also present as an argument for remotely controlled weapons. [7] Creating an asymmetrical situation where the enemy combatant is at risk of injury while one’s own forces remain safe is a basic desire and aim of warfare.

But the technological asymmetry associated with AI-driven weapons systems completely disturbs the ‘moral symmetry of mortal hazard’ in combat and therefore the internal morality of warfare. [8] In this type of ‘riskless warfare, […] the pursuit of asymmetry undermines reciprocity’. [9] Following Kahn, the internal morality of warfare largely rests on ‘self-defense within conditions of reciprocal imposition of risk’. [10] Combatants may injure and kill each other ‘just as long as they stand in a relationship of mutual risk’. [11] If the morality of the battlefield relies on these logics of self-defense, various forms of technologically mediated asymmetrical warfare deeply challenge it. This has been voiced as a significant concern in particular since NATO’s Kosovo campaign and has since grown more pronounced through the use of drones and, in particular, AI-driven weapons systems that decrease the influence of humans on the immediate decision to use force. [12]

Second, AWS increase asymmetry by creating an emotional distance from the brutal reality of war for those who employ them. While the intense surveillance of targets and close-range experience of target engagement through live pictures can create intimacy between operator and target, this experience differs from living through combat. The practice of killing from a distance triggers a sense of deep injustice and helplessness among those populations affected by the increasingly autonomous use of force who are ‘living under drones’. [13] Scholars have convincingly argued that ‘the asymmetrical capacities of Western—and particularly US forces—themselves create the conditions for increasing use of terrorism’ [14], thus ‘protracting the conflict rather than bringing it to a swifter and less bloody end’. [15]

This distancing from the brutal reality of war makes AWS appealing to casualty-averse, technologically advanced states such as the USA, but it potentially alters the nature of warfare. It also connects well with other ‘risk transfer paths’ [16] associated with practices of remote warfare that may be chosen to avert casualties, such as the use of private military security companies or working via airpower and local allies on the ground. [17] Casualty aversion has mostly been associated with a democratic, largely Western, ‘post-heroic’ way of war that depends on public opinion and the acceptance of using force. [18] But reports about the Russian aerial support campaign in Syria, for example, speak of similar tendencies not to put one’s own soldiers at risk. [19] Mandel has analyzed this casualty-aversion trend in security strategy as the ‘quest for bloodless war’ but noted that warfare still and always includes the loss of lives, and that the availability of new and ever more advanced technologies should not cloud thinking about this stark reality. [20]

Some states are acutely aware of this reality, as the ongoing debate on AWS at the UN Convention on Certain Conventional Weapons (UN-CCW) demonstrates. Most countries in favor of banning autonomous weapons are developing countries, which are typically less likely to attend international disarmament talks. [21] That they are speaking out strongly against AWS makes their engagement even more significant. Their history of experiencing interventions and invasions from richer, more powerful countries (such as some of those in favor of AWS) also reminds us that they are most at risk from this technology.

Third, AWS increase cognitive distance by compromising the human ability to ‘doubt algorithms’ and the data outputs at the heart of the targeting process. [22] As humans using AI-driven systems typically lack alternative information that would allow them to substantively contest the data output, it is increasingly difficult for human operators to doubt what ‘black box’ machines tell them. Their superior data-processing capacity is exactly why target identification via pattern recognition in vast amounts of data is ‘delegated’ to AI-driven machines, using, for example, machine-learning algorithms at different stages of the targeting process and in surveillance more broadly.

But the more target acquisition and potential attacks are based on AI-driven systems as technology advances, the less we seem to know about how those decisions are made. To identify potential targets, countries such as the USA (e.g. the SKYNET program) already rely on metadata generated by machine-learning solutions focused on pattern-of-life recognition. [23] However, the inability of humans to retrace how algorithms reach their decisions poses a serious ethical, legal, and political problem. The inexplicability of algorithms makes it harder for any human operator, even one provided with a ‘veto’ or the power to intervene ‘on the loop’ of the weapons system, to question metadata as the basis of targeting and engagement decisions. Notwithstanding these issues, as former Assistant Secretary for Homeland Security Policy Stewart Baker put it, ‘metadata tells you everything about somebody’s life. If you have enough metadata, you don’t need content’, while General Michael Hayden, former director of the NSA and the CIA, emphasizes that ‘[w]e kill people based on metadata’. [24]

The desire to find (quick) technological fixes or solutions for the ‘problem of warfare’ has long been at the heart of debates on AWS. We have increasingly seen this at the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE) meetings at the UN-CCW in Geneva, where countries already developing such weapons highlight their supposed benefits. Those in favor of AWS (including the USA, Australia, and South Korea) have become more vocal than ever. The USA has claimed such weapons could make it easier to follow international humanitarian law by making military action more precise. [25] But this is a purely speculative argument at present, especially in complex, fast-changing contexts such as urban warfare. Key principles of international humanitarian law require deliberate human judgments that machines are incapable of. [26] For example, the legal definition of who is a civilian and who is a combatant is not written in a way that could be easily programmed into AI, and machines lack the situational awareness and capacity for inference necessary to make this decision. [27]

Yet, some states seem to pretend that these intricate issues are easily solvable by programming AI-driven weapons systems in just the right way. This feeds a narrative of technological ‘solutionism’ that does not accept that some problems have no technological solutions because they are inherently political. [28] So, quite apart from whether it is technologically possible, do we want, normatively, to remove deliberate human decision-making in this way?

This brings us to our second set of arguments, concerned with the fundamental questions that introducing AWS into practices of remote warfare poses for human-machine interaction.

The Problem of Meaningful Human Control

AI-driven systems signal the potential absence of immediate human decision-making on lethal force and the increasing loss of so-called meaningful human control (MHC). The concept of MHC has become a central focus of the ongoing transnational debate at the UN-CCW. Originally coined by the non-governmental organization (NGO) Article 36 [29], the term is subject to different understandings of what meaningful human control implies. [30] It promises to resolve the difficulties encountered when attempting to define precisely what autonomy in weapons systems is, but it meets somewhat similar problems in the definition of its own key concepts. Roff and Moyes [31] suggest several factors that can enhance human control over technology: the technology should be predictable, reliable, and transparent; users should have accurate information; and there should be timely human action, a capacity for timely intervention, and human accountability. These factors underline the complex demands involved in maintaining MHC, but how they are linked, and what degree of predictability or reliability, for example, is necessary to make human control meaningful, remains unclear and under-defined.

Many states consider the application of violent force without human control unacceptable and morally reprehensible. But there is less agreement about various complex forms of human-machine interaction and at what point(s) human control ceases to be meaningful. Should humans always be involved in authorizing actions, or is monitoring such actions with the option to veto and abort sufficient? Is meaningful human control realized by engineering weapons systems and AI in certain ways? Or is human control meaningful when it comprises simply executing decisions based on indications from a computer that are not accessible to human reasoning because of the ‘black-boxed’ nature of algorithmic processing? The noteworthy point about MHC as a norm for AWS is also that it has long been compromised in different battlefield contexts. Complex human-machine interactions are not a recent phenomenon. Even the extent to which human control in a fighter jet is meaningful is questionable. [32]

However, attempts to establish MHC as an emerging norm meant to regulate AWS face difficulties. Indeed, over the past four years of debate in the UN-CCW, some states, supported by civil society organizations, have advocated introducing new legal norms to prohibit fully autonomous weapons systems, while other states leave the field open to increase their room for maneuver. As discussions drag on with little substantial progress, the operational trend toward developing AI-enabled weapons systems continues and is on track to becoming established as ‘the new normal’ in warfare (P. W. Singer 2010). For example, in its Unmanned Systems Integrated Roadmap 2013–2038, the US Department of Defense sets out a concrete plan to develop and deploy weapons with ever-increasing autonomous features in the air, on land, and at sea over the next 20 years. [33]

While the US strategy on autonomy is the most advanced, a majority of the top ten arms exporters, including China and Russia, are developing or planning to develop AI-driven weapon systems. Media reports have repeatedly pointed to the successful inclusion of machine learning techniques in weapons systems developed by Russian arms maker Kalashnikov, coming alongside President Putin’s much-publicized quote that ‘whoever leads in AI will rule the world’. [34] China has reportedly made advances in developing autonomous ground vehicles [35] and, in 2017, published an ambitiously worded government-led plan on AI with decisively increased financial expenditure. [36]

While the intention to regulate the practice of using force by setting norms stalls at the UN-CCW, we highlight the importance of the reverse, and more likely, scenario: practices shaping norms. These dynamics point to a potentially influential trajectory that AWS may take towards changing what counts as appropriate in the use of force, thereby also transforming international norms governing the use of violent force.

We have already seen how the availability of drones has led to changes in how states consider using force. Here, access to drone technology appears to have made targeted killing seem an acceptable use of force for some states, deviating significantly from previous understandings. [37] In their usage of drone technology, states have therefore explicitly or implicitly pushed novel interpretations of key standards of international law governing the use of force, such as attribution and imminence. These practices cannot be captured with the traditional conceptual language of customary international law if they are not openly discussed or simply do not meet its strict requirements, such as becoming ‘uniform and widespread’ in state practice or manifesting in a consistently stated belief in the applicability of a particular rule. But these practices are significant, as they have arguably led to the emergence of a series of grey areas in shared understandings of international law governing the use of force. [38] The resulting lack of clarity leads to a more permissive environment for using force: states can find justifications for its use within these increasingly elastic areas of international law.

We therefore argue that we can study how international norms regarding the use of AI-driven weapons systems emerge and change from the bottom up, via deliberative and non-deliberative practices. Deliberative practices, as ways of doing things, can be the outcome of reflection, consideration, or negotiation. Non-deliberative practices, in contrast, refer to operational and typically non-verbalized practices undertaken in developing, testing, and deploying autonomous technologies.

We are currently witnessing, as described above, an effort to potentially make new norms regarding AI-driven weapons technologies at the UN-CCW via deliberative practices. But non-deliberative and non-verbalized practices are constantly undertaken as well and simultaneously shape new understandings of appropriateness. These non-deliberative practices may stand in contrast to the deliberative practices centered on attempting to formulate a (consensus) norm of meaningful human control.

This has repercussions not only for systems currently in different stages of development and testing, but also for systems with limited AI-driven capabilities that have been in use for the past two to three decades, such as cruise missiles and air defense systems. Most air defense systems already have significant autonomy in the targeting process, and military aircraft have highly automated features. [39] Arguably, non-deliberative practices surrounding these systems have already created an understanding of what meaningful human control is. There is already a norm, in the sense of an emerging understanding of appropriateness, emanating from these practices that has not been verbally enacted or reflected on. This makes it harder to deliberatively create a new meaningful human control norm.

Friendly fire incidents involving the US Patriot system can serve as an example here. In 2003, a Patriot battery stationed in Iraq downed a British Royal Air Force Tornado that had been mistakenly identified as an Iraqi anti-radiation missile. Notably, ‘[t]he Patriot system is nearly autonomous, with only the final launch decision requiring human interaction’. [40] The 2003 incident shows the extent to which even a relatively simple weapons system, comprising elements such as radar and several automated functions meant to assist human operators, deeply compromises an understanding of MHC in which a human operator has all the information required to make an independent, informed decision that might contradict technologically generated data.

While humans were clearly ‘in the loop’ of the Patriot system, they lacked the information required to competently doubt the system’s output and were therefore misled by it: ‘[a]ccording to a summary of a report issued by a Pentagon advisory panel, Patriot missile systems used during the battle in Iraq were given too much autonomy, which likely played a role in the accidental downing of friendly aircraft’. [41] This example should be seen alongside other well-known incidents, such as the 1988 downing of Iran Air flight 655 because of a fatal failure of human-machine interaction in the Aegis system onboard the USS Vincennes, or the crucial intervention of Stanislav Petrov, who rightly doubted information provided by the Soviet early-warning system reporting a nuclear weapons attack. [42] A 2016 incident in Nagorno-Karabakh provides another example of a system with an autonomous anti-radar mode used in combat: Azerbaijan reportedly used an Israeli-made Harop ‘suicide drone’ to attack a bus of allegedly Armenian military volunteers, killing seven. [43] The Harop is a loitering munition able to launch autonomous attacks.

Overall, these examples point to the importance of targeting for considering autonomy in weapons systems. There are currently at least 154 weapons systems in use where the targeting process, comprising ‘identification, tracking, prioritization and selection of targets to, sometimes, target engagement’, is supported by autonomous features. [44] The problem we emphasize here pertains not to the completion of the targeting cycle without human intervention but emerges already in the support functions performed by autonomous features. Historical and more recent examples show that, here, human control is often already far from what we would consider meaningful. It is noted, for example, that ‘[t]he S-400 Triumf, a Russian-made air defense system, can reportedly track over 300 targets and engage with over 36 targets simultaneously’. Is it possible for a human operator to meaningfully supervise the operation of such systems?

Yet, this apparent lack of, or compromised form of, human control is considered acceptable: the use of the Patriot system has not been questioned in the wake of fatal incidents, nor is the S-400 contested for featuring an ‘unacceptable’ form of compromised human control. In this sense, the widespread usage of such air defense systems over decades has already led to new understandings of ‘acceptable’ MHC and human-machine interaction, triggering the emergence of new norms.

However, questions about human control raised by these existing systems are not part of the ongoing discussion on AWS among states at the UN-CCW. States using automated weapons continue to actively exclude them from the debate by referring to them as ‘semi-autonomous’ or so-called ‘legacy systems’. This omission prevents the international community from scrutinizing whether the practices of using these systems are appropriate.

Conclusion

To conclude, we would like to come back to the key question inspiring our contribution: to what extent will AI-driven weapons systems shape and transform international norms governing the use of (violent) force?

In addressing this question, we should also remember who has agency in this process. Governments can (and should) decide how they want to guide this process rather than presenting a particular trajectory, or technological progress of a certain kind, as inevitable. This requires an explicit conversation about the values, ethics, principles, and choices that should limit and guide the development, role, and prohibition of certain types of AI-driven security technologies, considering standards for human-machine interaction.

Technologies have always shaped and altered warfare and therefore how force is used and perceived. [45] Yet, we should not conceive of the role that technology plays in deterministic terms. Rather, technology is ambivalent, making how it is used in international relations and warfare a political question. We want to highlight here the ‘Collingridge dilemma of control’ [46] that speaks of a common trade-off between knowing the impact of technology and the ease of influencing its social, political, and innovation trajectories. Collingridge stated:

Attempting to control a technology is difficult […] because, during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow. [47]

This aptly describes the situation we find ourselves in regarding AI-driven weapons technologies. We are still at the initial development stage of these technologies. Few systems with significant AI capacities are in operation. This makes it potentially harder to assess what the precise consequences of their use in remote warfare will be. The multi-billion-dollar investments in various military applications of AI made by, for example, the USA suggest the increasing importance and crucial future role of AI. In this context, control is decreasing, and the next generation of drones at the core of remote warfare as a practice of distant combat will incorporate more autonomous features. If technological developments proceed at this pace and the international community cannot prohibit or even regulate autonomy in weapons systems, AWS are likely to play a major role in the remote warfare of the near future.

We are still very much in the stage of technological development where guidance is possible, less expensive, less difficult, and less time-consuming—which is precisely why it is so important to have these wider, critical conversations about the consequences of AI for warfare now.



[1] Biegon, Rubrick, and Tom Watts. 2017. ‘Defining Remote Warfare: Security Cooperation.’ Oxford Research Group.
[2] Cavallaro, James, Stephan Sonnenberg, and Sarah Knuckey. 2012. ‘Living Under Drones: Death, Injury and Trauma to Civilians from US Drone Practices in Pakistan.’ International Human Rights and Conflict Resolution Clinic, Stanford Law School/NYU School of Law, Global Justice Clinic. https://law.stanford.edu/publications/living-under-drones-death-injury-and-trauma-to-civilians-from-us-drone-practices-in-pakistan/
Sauer, Frank, and Niklas Schörnig. 2012. ‘Killer Drones: The ‘Silver Bullet’ of Democratic Warfare?’ Security Dialogue, 43(4): 363–80.
Casey-Maslen, Stuart. 2012. ‘Pandora’s Box? Drone Strikes under Jus Ad Bellum, Jus in Bello, and International Human Rights Law.’ International Review of the Red Cross, 94(886): 597–625.
Gregory, Thomas. 2015. ‘Drones, Targeted Killings, and the Limitations of International Law.’ International Political Sociology, 9(3): 197–212.
Hall, Abigail R., and Christopher J. Coyne. 2013. ‘The Political Economy of Drones.’ Defence and Peace Economics, 25(5): 445–60.
Schwarz, Elke. 2016. ‘Prescription Drones: On the Techno-Biopolitical Regimes of Contemporary ‘Ethical Killing’.’ Security Dialogue, 47(1): 59–75.
Warren, Aiden, and Ingvild Bode. 2014. Governing the Use-of-Force in International Relations. The Post-9/11 US Challenge on International Law. Basingstoke: Palgrave Macmillan.
Gusterson, Hugh. 2016. Drone: Remote Control Warfare. Cambridge, MA/London: MIT Press.
Restrepo, Daniel. 2019. ‘Naked Soldiers, Naked Terrorists, and the Justifiability of Drone Warfare.’ Social Theory and Practice, 45(1): 103–26.
Walsh, James Igoe, and Marcus Schulzke. 2018. Drones and Support for the Use of Force. Ann Arbor: University of Michigan Press.
[3] ICRC. 2016. ‘Views of the International Committee of the Red Cross (ICRC) on Autonomous Weapon Systems.’ https://www.icrc.org/en/document/views-icrc-autonomous-weapon-system
[4] Sparrow, Robert. 2007. ‘Killer Robots.’ Journal of Applied Philosophy, 24(1): 62–77.
[5] Boulanin, Vincent, and Maaike Verbruggen. 2017. ‘Mapping the Development of Autonomy in Weapons Systems.’ Stockholm: Stockholm International Peace Research Institute. https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf
[6] Bode, Ingvild, and Hendrik Huelss. 2018. ‘Autonomous Weapons Systems and Changing Norms in International Relations.’ Review of International Studies, 44(3): 393–413.
[7] Strawser, Bradley Jay. 2010. ‘Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles.’ Journal of Military Ethics, 9(4): 342–68.
[8] Fleischman, William M. 2015. ‘Just Say “No!” To Lethal Autonomous Robotic Weapons.’ Journal of Information, Communication and Ethics in Society, 13(3/4): 299–313.
[9] Kahn, Paul W., 2002. ‘The Paradox of Riskless Warfare.’ Philosophy and Public Policy Quarterly, 22(3): 2–8.
[10] Id., 4.
[11] Id., 3.
[12] Der Derian, James. 2009. Virtuous War: Mapping the Military-Industrial-Media-Entertainment Network. 2nd ed. New York: Routledge.
[13] Cavallaro, James, Stephan Sonnenberg, and Sarah Knuckey. 2012. ‘Living Under Drones: Death, Injury and Trauma to Civilians from US Drone Practices in Pakistan.’ International Human Rights and Conflict Resolution Clinic, Stanford Law School/NYU School of Law, Global Justice Clinic. https://law.stanford.edu/publications/living-under-drones-death-injury-and-trauma-to-civilians-from-us-drone-practices-in-pakistan/
[14] Kahn, Paul W., 2002. ‘The Paradox of Riskless Warfare.’ Philosophy and Public Policy Quarterly, 22(3): 2–8.
[15] Sauer, Frank, and Niklas Schörnig. 2012. ‘Killer Drones: The ‘Silver Bullet’ of Democratic Warfare?’ Security Dialogue, 43(4): 363–80.
Kilcullen, David, and Andrew McDonald Exum. 2009. ‘Death From Above, Outrage Down Below.’ The New York Times. 17 May.
Oudes, Cor, and Wim Zwijnenburg. 2011. ‘Does Unmanned Make Unacceptable? Exploring the Debate on Using Drones and Robots in Warfare.’ IKV Pax Christi.
[16] Sauer, Frank, and Niklas Schörnig. 2012. ‘Killer Drones: The ‘Silver Bullet’ of Democratic Warfare?’ Security Dialogue, 43(4): 369.
[17] Biegon, Rubrick, and Tom Watts. 2017. ‘Defining Remote Warfare: Security Cooperation.’ Oxford Research Group.
[18] Scheipers, Sibylle, and Bernd Greiner, eds. 2014. Heroism and the Changing Character of War: Toward Post-Heroic Warfare? Houndmills: Palgrave Macmillan.
Kaempf, Sebastian. 2018. Saving Soldiers or Civilians? Casualty-Aversion versus Civilian Protection in Asymmetric Conflicts. Cambridge: Cambridge University Press.
[19] The Associated Press. 2018. ‘Tens of Thousands of Russian Troops Have Fought in Syria since 2015.’ Haaretz. 22 August. https://www.haaretz.com/middle-east-news/syria/tens-of-thousands-of-russian-troops-have-fought-in-syria-since-2015-1.6409649
[20] Mandel, Robert. 2004. Security, Strategy, and the Quest for Bloodless War. Boulder, CO: Lynne Rienner Publishers.
[21] Bode, Ingvild. 2019. ‘Norm-Making and the Global South: Attempts to Regulate Lethal Autonomous Weapons Systems.’ Global Policy, 10(3): 359–364.
[22] Amoore, Louise. 2019. ‘Doubtful Algorithms: Of Machine Learning Truths and Partial Accounts.’ Theory, Culture and Society, 36(6): 147–169.
[23] The Intercept. 2015. ‘SKYNET: Courier Detection via Machine Learning–The Intercept.’ 2015. https://theintercept.com/document/2015/05/08/skynet-courier/
Aradau, Claudia, and Tobias Blanke. 2018. ‘Governing Others: Anomaly and the Algorithmic Subject of Security.’ European Journal of International Security, 3(1): 1–21. https://doi.org/10.1017/eis.2017.14
[24] Cole, David. 2014. ‘We Kill People Based on Metadata.’ The New York Review of Books (blog). 10 May. https://www.nybooks.com/daily/2014/05/10/we-kill-people-based-metadata/
[25] The United States. 2018. ‘Human-Machine Interaction in the Development, Deployment, and Use of Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. UN Document CCW/GGE.2/2018/WP.4.’ https://www.unog.ch/80256EDD006B8954/(httpAssets)/D1A2BA4B7B71D29FC12582F6004386EF/$file/2018_GGE+LAWS_August_Working+Paper_US.pdf
[26] Asaro, Peter. 2018. ‘Why the World Needs to Regulate Autonomous Weapons, and Soon.’ Bulletin of the Atomic Scientists (blog). 27 April. https://thebulletin.org/landing_article/why-the-world-needs-to-regulate-autonomous-weapons-and-soon/
Sharkey, Noel. 2008. ‘The Ethical Frontiers of Robotics.’ Science, 322(5909): 1800–1801.
[27] Sharkey, Noel. 2010. ‘Saying ‘No!’ To Lethal Autonomous Targeting.’ Journal of Military Ethics, 9(4): 369–83.
[28] Morozov, Evgeny. 2014. To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems That Don’t Exist. London: Penguin Books.
[29] Article 36. 2013. ‘Killer Robots: UK Government Policy on Fully Autonomous Weapons.’ http://www.article36.org/weapons-review/killer-robots-uk-government-policy-on-fully-autonomous-weapons-2/
Roff, Heather M., and Richard Moyes. 2016. ‘Meaningful Human Control, Artificial Intelligence, and Autonomous Weapons. Briefing Paper Prepared for the Informal Meeting of Experts on Lethal Autonomous Weapons Systems. UN Convention on Certain Conventional Weapons.’
[30] Ekelhof, Merel. 2019. ‘Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation.’ Global Policy, 10(3): 343–348. https://doi.org/10.1111/1758-5899.12665
[31] Roff, Heather M., and Richard Moyes. 2016. ‘Meaningful Human Control, Artificial Intelligence, and Autonomous Weapons. Briefing Paper Prepared for the Informal Meeting of Experts on Lethal Autonomous Weapons Systems. UN Convention on Certain Conventional Weapons.’
[32] Ekelhof, Merel. 2019. ‘Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation.’ Global Policy, 10(3): 343–348. https://doi.org/10.1111/1758-5899.12665
[33] US Department of Defense. 2013. ‘Unmanned Systems Integrated Roadmap: FY2013-2038.’ https://info.publicintelligence.net/DoD-UnmannedRoadmap-2013.pdf
[34] Busby, Mattha. 2018. ‘Killer Robots: Pressure Builds for Ban as Governments Meet.’ The Guardian. 9 April. sec. Technology. https://www.theguardian.com/technology/2018/apr/09/killer-robots-pressure-builds-for-ban-as-governments-meet.
Vincent, James. 2017. ‘Putin Says the Nation That Leads in AI ‘Will Be the Ruler of the World.’’ The Verge. 4 September. https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world
[35] Lin, Jeffrey, and Peter W. Singer. 2014. ‘Chinese Autonomous Tanks: Driving Themselves to a Battlefield Near You?’ Popular Science. 7 October. https://www.popsci.com/blog-network/eastern-arsenal/chinese-autonomous-tanks-driving-themselves-battlefield-near-you
[36] Metz, Cade. 2018. ‘As China Marches Forward on A.I., the White House Is Silent.’ The New York Times. 12 February. sec. Technology. https://www.nytimes.com/2018/02/12/technology/china-trump-artificial-intelligence.html.
Kania, Elsa. 2018. ‘China’s AI Agenda Advances.’ The Diplomat. 14 February. https://thediplomat.com/2018/02/chinas-ai-agenda-advances/
[37] Haas, Michael Carl, and Sophie-Charlotte Fischer. 2017. ‘The Evolution of Targeted Killing Practices: Autonomous Weapons, Future Conflict, and the International Order.’ Contemporary Security Policy, 38(2): 281–306.
Bode, Ingvild. 2017. ‘”Manifestly Failing” and “Unable or Unwilling” as Intervention Formulas: A Critical Analysis.’ In Rethinking Humanitarian Intervention in the 21st Century, edited by Aiden.
Warren, Aiden, and Ingvild Bode. 2014. Governing the Use-of-Force in International Relations. The Post-9/11 US Challenge on International Law. Basingstoke: Palgrave Macmillan.
[38] Bhuta, Nehal, Susanne Beck, Robin Geiss, Hin-Yan Liu, and Claus Kress, eds. 2016. Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge: Cambridge University Press.
[39] Boulanin, Vincent, and Maaike Verbruggen. 2017. ‘Mapping the Development of Autonomy in Weapons Systems.’ Stockholm: Stockholm International Peace Research Institute. https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf
[40] Missile Defense Project. 2018. ‘Patriot.’ Missile Threat. https://missilethreat.csis.org/system/patriot/
[41] Singer, Jeremy. 2005. ‘Report Cites Patriot Autonomy as a Factor in Friendly Fire Incidents.’ SpaceNews.Com. 14 March. https://spacenews.com/report-cites-patriot-autonomy-factor-friendly-fire-incidents/
[42] Aksenov, Paul. 2013. ‘Stanislav Petrov: The Man Who May Have Saved the World.’ BBC Russian. September.
[43] Gibbons-Neff, Thomas. 2016. ‘Israeli-Made Kamikaze Drone Spotted in Nagorno-Karabakh Conflict.’ The Washington Post. 5 April. https://www.washingtonpost.com/news/checkpoint/wp/2016/04/05/israeli-made-kamikaze-drone-spotted-in-nagorno-karabakh-conflict/?utm_term=.6acc4522477c
[44] Boulanin, Vincent, and Maaike Verbruggen. 2017. ‘Mapping the Development of Autonomy in Weapons Systems.’ Stockholm: Stockholm International Peace Research Institute. https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf
[45] Ben-Yehuda, Nachman. 2013. Atrocity, Deviance, and Submarine Warfare. Ann Arbor, MI: University of Michigan Press. https://doi.org/10.3998/mpub.5131732
[46] Genus, Audley, and Andy Stirling. 2018. ‘Collingridge and the Dilemma of Control: Towards Responsible and Accountable Innovation.’ Research Policy, 47(1): 61–69.
[47] Collingridge, David. 1980. The Social Control of Technology. London.
