Cyborgs, Neuroweapons, and Network Command


Katrine Nørgaard is a Ph.D. candidate at the Royal Danish Defence College, Denmark.
Michael Linden-Vørnle is an astrophysicist and Chief Adviser at the National Space Institute (DTU Space)
of the Technical University of Denmark.

Volume I, Issue 1, 2022
Defense & Security Forum
a Mauduit Study Forums’ Journal
Remy Mauduit, Editor-in-Chief

Nørgaard, Katrine, & Linden-Vørnle, Michael (2021) Cyborgs, Neuroweapons, and Network Command, Scandinavian Journal of Military Studies, DOI: 10.31374/sjms.86.

ARTICLE INFO

Keywords
Neurotechnology
weapons systems
cyborg ethics
military command and control

ABSTRACT
In this article, we will explore the emerging field of military neurotechnology and the way it challenges the boundaries of war. We argue that we can use these technologies not only to enhance the cognitive performance of warfighters but also to exploit artificial intelligence in autonomous and robotic weapons systems. This, however, requires the practice of a collaborative network command and a governing framework of cyborg ethics to secure human control and responsibility in military operations. The discussion of these governing principles adheres to the tradition of military studies. Hence, we do not aim to present a neuroscientific research program. Nor do we wish to embark on technical solutions in disciplines such as artificial intelligence and robotics. Rather, the intention is to make the highly specialized language of these sciences accessible to an audience of military practitioners and policymakers, bringing technological advances and challenges into the discussion of future warfighting.

“It is currently estimated that AI and robotic systems will be ubiquitous across the operational framework of 2035.”

Are we on the verge of a robotic revolution in military affairs? Will intelligent machines take control of the future battlefield and replace human warfighters? Recent advances in military neurotechnologies, robotics, and artificial intelligence (AI) have evoked the transgressive image of the ‘cyborg warrior’, a weaponized brain-computer network powered by AI and neurocognitive augmentation. In the wake of these emergent military technologies, some of our most fundamental assumptions and definitions of human intelligence, autonomy, and responsibility have been challenged. These concepts are central to our understanding of the lawful and ethical conduct of war. They are also closely associated with human agency and the ability to make context-dependent decisions and critical evaluations in matters of life and death. The question that begs to be answered is whether, and how, we can apply these concepts to cyborg systems that, by definition, are not entirely human. What kind of military capacity is a cyborg warrior? A warfighter or a weapons system? A human or a machine? In the following, we argue that the cyborg warrior is neither a human subject nor a piece of military hardware, but a heterogeneous assemblage—or rather a ‘nexus’—of human and non-human capacities, transmitting and decoding streams of information in military battle networks. We prefer to talk about cyborgs and neurocognitive weapons systems, stressing the intrinsic entanglement of human and artificial intelligence and challenging traditional human-machine distinctions and dichotomies.

Until recently, most people believed cyborg warfare to be purely science-fictional. Indeed, it is hard to imagine anything darker and more unsettling than a robotic army, enhanced by AI, commanding the future battlefield. [i] However, the application of cyborg technologies in military operations is not merely a futuristic fantasy. Today, advanced brain-computer interfaces are customized to the personal helmet (the ‘wearable cockpit’) used by F-35 fighter pilots and constitute standard applications in a variety of head-mounted displays (HMD) used by warfighters in both training and tactical combat scenarios. Recently, neuroscientific progress has been made in areas such as neurointelligence (intelligence fusion and predictive analytics), neurocognitive enhancement of warfighters (adaptive and interactive brain-computer interfaces), and neuroweaponry (target recognition, coordination, and control of weapons systems), using AI as human decision support and cognitive enhancement. [ii]

However, as these emerging technologies evolve, growing concerns are raised about how they will affect the future of military command and control (C2), including the legal and ethical implications of weaponized neurocognitive systems. Since AI plays a significant role in advanced neuroweaponry, many of these considerations coincide with the insecurities introduced by the military use of AI and so-called ‘killer robots’ [iii]: Can autonomous robotic systems be held accountable for their actions? Will they be able to comply with the legal and ethical conventions of International Humanitarian Law? Can they distinguish between combatants and non-combatants in a highly dynamic and cluttered operational environment? These issues remain contested [iv] and are, so to speak, ‘built’ into AI-enhanced cyborg weapons systems, challenging existing legal frameworks and moral values.

Addressing these concerns demands a closer look at the problem these technologies are supposed to solve. As stated by neuroscientist James Giordano, the deployment of AI and neurocognitive systems in military battle networks is a response to the increasing amount of real-time data in the operational environment and to the challenges of an omnipresent information overload that exceeds the limitations of human cognitive capacities. [v] In the following, we argue that the hybridization of human and artificial intelligence in cyborg weapons systems not only enhances the cognitive performance of warfighters. It also presents a way to leverage increased autonomy in intelligent and unmanned systems while simultaneously keeping humans ‘in the loop’, applying legal and ethical judgment and context-sensitive protocols of war in military operations.

We argue that AI-enabled brain-computer networks have the potential to reconfigure the classical hierarchical structure of military command, prompting a shift to a more collaborative and flexible network command regime. This requires the practice of a new form of ‘network command responsibility’ and a reflexive form of ‘jurisprudence’ that determines questions of accountability and liability in military operations, such as: Which circumstances could warrant the use of neurocognitive weapons systems? And who can ultimately be held responsible for decisions and actions performed by cyborg warriors?

Given the relative nascence of neuroscience and technology, many of these issues are still speculative. Yet the pace of progress in AI-based neural interfaces and the ‘need for speed’ in military command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) will continue to push cyborg technologies and neuroethical considerations to the front.

In this article, we will address the transgressive nature of cyborg weapons systems and the way they shape the human perception and conduct of war. Adopting a socio-technical and constructivist approach to technological mediation and the co-production of risk, we do not conduct a neuroscientific study. Neither do we develop technical solutions for human-machine symbiosis. Instead, we will introduce some basic concepts and definitions of cyborg and neurocognitive weaponry that will allow for critical debate on the emerging domain of ‘neurospace’ and the human enhancement of warfighters. [vi] Although the underpinnings of this discussion are highly technical, the article turns to military studies rather than neurocognitive and computer sciences.

The empirical basis of our discussion is drawn from a variety of military technology assessments such as the NATO Science and Technology Organization (NATO STO) trends report 2020–2040[vii] and the landmark report on Emerging Cognitive Neuroscience and Related Technologies (2008) published by the ad hoc Committee on Military and Intelligence Methodology for Emergent Neurophysiological and Cognitive/Neural Research in the Next Two Decades (National Research Council of the National Academy of Sciences). For the sake of simplification, we will refer to this text as the NAS 2008 report. [viii] To place ourselves firmly in the current operational framework of Multi-Domain Operations (MDO), we also draw extensively on the white paper on “Operationalizing Robotic and Autonomous Systems in Support of Multi-Domain Operations” by the Army Capabilities Center–Future Warfare Division (2018), referred to as the RAS MDO white paper. With this empirical ‘double vision’, we set out to explore the intersection of neuroscience, robotics, and military command.

The article falls into four parts: the first part presents some basic concepts and definitions of cyborg technologies and neuroweapons as part of the emerging neuroscientific security discourse. The second part of the article sets the general framework and context of multi-domain warfare in which these technologies are shaped and applied as military capabilities. The third part introduces the concepts of ‘collaborative risk mediation’ and ‘composite intentionality’ stressing the mutual entanglement and ‘interference’ of human and artificial intelligence in the emerging domain of neurospace. In the last part of the article, we address the urgent need for governing principles and guidelines, including the legal and ethical aspects of cyborg warfare. Thus, we call for an interdisciplinary discussion of the emergent frontiers and practices of neurospace and the negotiation of neuroethical standards in the international security community. At the center of these discussions, we pose the question of ‘meaningful human control’ and responsibility in networked military command.

The Neuroscientific Security Discourse and the Realm of the Cyborg Warrior

As the first step in our inquiry, we need to distinguish between neurotechnology, which is used to detect, affect, and target human brain activity (e.g. to improve, repair, degrade, or manipulate cognitive skills), on the one hand, and AI, which is used in computers, sensors, and robotic systems, on the other. A ‘neural network’ is a specific form of AI, consisting of a set of algorithms resembling the working human brain. A ‘neuron’ in a neural network is a mathematical function that collects and classifies information according to a specific architecture. [ix] Neurocognitive or cyborg networks are hybrid systems of human and artificial intelligence, i.e. brain-computer networks that integrate the cognitive advantages of humans and computers. For many years, the two sciences, the science of the human brain and the science of AI, have developed side by side, mutually inspiring and informing each other. Now, the scientific explorations of neurotechnology and AI are rapidly converging and speeding up the development of neural feedback systems that allow a two-way communication stream between the human brain and the computer. The convergence of AI and neurotechnology and the implications of integrating, not just combining or ‘teaming’, human and machine cognition are the focus of our interest. [x] Humans and computers work together everywhere. This is not new. However, until recently they have done so as separate entities. This separation is eroding, as ubiquitous AI and neurotechnological advances have made the distinction between human and machine cognition unclear and sometimes even obsolete. When we refer to ‘cyborg and neurocognitive weapons systems’, and not just one or the other, it is precisely because we want to stress this increasing interference of human and non-human cognition, which goes way beyond—and has to be distinguished from—other hybrid technologies such as bionic limbs and advanced hearing or visual aids.
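To make the notion of a neuron as a mathematical function more concrete for non-specialist readers, the following minimal sketch shows a single artificial neuron that collects weighted inputs and classifies them through a sigmoid activation. It is an illustrative toy in Python, not drawn from the article or from any military system; all weights and input values are invented.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs passed through
    a sigmoid activation, yielding a value between 0 and 1."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid 'squashes' the sum

# Illustrative only: three input signals, arbitrary weights and bias.
signal = [0.2, 0.9, 0.4]
weights = [0.5, -0.8, 1.2]
activation = neuron(signal, weights, bias=0.1)
print(f"neuron activation: {activation:.2f}")  # about 0.49 for these values
```

Stacking many such functions in layers, and adjusting the weights from data, is what gives a neural network its capacity for pattern recognition.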

For this same reason, it is important not to confuse the notion of the cyborg warrior with the concept of the ‘centaur warfighter’, which is often used as a metaphor for human-machine teaming. [xi] The two concepts are closely related, but not synonymous. We can express this distinction as the difference between integration and automation of machine intelligence, perception, and reasoning. Whereas centaur human-machine teaming comprises humans plus machines, with machines performing demarcated automated functions, the cyborg warrior functions as a neurally enhanced and integrated system architecture, [xii] merging human and machine cognition. Centaur human-machine teaming does not imply cognitive or sensory enhancement of the human operator. Human and machine cognition are not neurally integrated. Instead, humans and machines perform different role-specific tasks that are largely based on predetermined decision models where the machine’s role is conditioned by one or more rule sets. [xiii]

As opposed to centaur human-machine teaming, cyborgs have no pre-programmed role specifications but adapt continuously to shifting situations and demands in the operational environment. According to Kline and Clynes [xiv], such systems can be regarded as ‘cybernetic organisms’ (i.e. cyborgs) in that they entail both natural and artificial systems that are functional, portable, and/or biologically integrated. [xv] Cybernetic and cyborg systems can thus be seen as “sophisticated distributed human-machine networks, such as integrated software or robotic augmentations to human-controlled activity, that would fuse and coordinate the distinct cognitive advantages of humans and computers”. [xvi] Cyborg technologies used in a networked risk environment will “reflect a combination of autonomous initiative and original problem solving by both human and machine. This means shared agency and responsibility in military decisions”. [xvii]

The attribution of shared agency and responsibility to humans and machines is central to the definition of cyborg and neurocognitive weapons systems and demarcates a shift from automated decision support to collaborative information and risk management, with human and machine intelligence mediating and co-shaping the perception, organization, and distribution of risk. The advantage of such systems is increased flexibility and accountability, ensuring human judgment and responsibility for engagements while simultaneously leveraging the precision and speed of AI. This becomes urgent when cyborg technologies are used as offensive weapons systems. [xviii]

Whenever neurocognitive systems are used as weapons (either defensive or offensive) against an opponent, they are broadly classified as ‘neuroweapons’. Traditionally, a weapon is defined as “a means of contending against another” and “something used to injure, defeat, or destroy”. [xix] As stated by neuroscientists Rachel Wurzman and James Giordano, both definitions apply to neurotechnologies used as weapons in intelligence and/or defense scenarios:

“Neurotechnology can support intelligence activities by targeting information and technology infrastructures, to either enhance or deter accurate intelligence assessment, the ability to efficiently handle amassed, complex data, and human tactical or strategic efforts. The objectives for neuroweapons in a traditional defense context (e.g. combat) may be achieved by altering (i.e. either augmenting or degrading) functions of the nervous system, to affect cognitive, emotional, and/or motor activity and capability.” [xx]

However, neuroweapons are inherently ambiguous and elusive systems that defy easy explanations and definitions. A clear-cut and authoritative definition does not exist, and security actors in academia, industry, the military, and national civil services negotiate differences in core components, structure, design, and purpose. [xxi] A significant problem is the “amount of pseudoscientific information and journalistic oversimplification related to cognitive science”. [xxii] Existing definitions of neuroweaponry tend to be either too broad or too narrow to be useful for critical evaluation. An attempt to launch a comprehensive definition has been made by intelligence analyst Robert McCreight, who proposes that:

“Neuroweaponry encompasses all forms of interlinked cybernetic, neurological, and advanced biotech systems, along with the use of synthetic biological formulations and merged physiobiological and chemical scientific arrangements, designed expressly for offensive use against human beings.” [xxiii]

The problem here is that the definition itself becomes so abstract that it needs translation before it can be applied. Instead, for our purpose, we will use a more pragmatic definition: Neuroweapons include any kind of neurotechnological agent, drug, or device designed to either enhance or deter the cognitive performance of warfighters and target intelligence and command structures as both non-kinetic and kinetic weapons. They can influence, shape, augment, or restrict human perception and decision-making. With these generic properties as a very broad characterization of neuroweaponry, we can classify the cyborg warrior as a specific neurocognitive weapons system, i.e. a certain class of neuroweapons, using AI as cognitive enhancement in hybrid brain-computer networks.

This preliminary outline of cyborg and neurocognitive weaponry is rather far from the popular Terminator version of science fiction. Cyborg and neurocognitive weapons systems are not autonomous ‘killer robots’. Instead, they appear in the shape of networked assemblages with multiple operators, sensors, computers, and platforms combining cybersystems and brain functions. Thus, the cyborg warrior is neither a human subject nor an autonomous robot, but an augmented and distributed system architecture, a hybrid man-machine network that integrates artificial and human cognition in military mission planning and control. This means that a new domain beyond cyber and space must be added to the existing definition of the multi-domain battle space. Following McCreight, we adopt the concept of neurospace [xxiv] to demarcate the emergence of a new strategic frontier of multi-domain warfare performed by networked humans and computers; as McCreight puts it, “the new battle space is the brain itself”. [xxv]

With this conceptual framework, we wish to emphasize the transgressive nature of cyborg systems and the way they are presented as a matter of national security. The security discourse of neuro-enhanced weapons systems concerns both the opportunities and the risks of disruptive neurotechnologies. Different security actors hold different and competing views on neuroweapons. Some scholars warn against the dangers of weaponized and ungoverned cybernetic systems that target the human mind. [xxvi] Others stress the benefits of neuro-enhanced capabilities that maximize soldier performance in intelligence operations, support military decision-making, and increase the return on investment in unmanned, AI-based, and robotic systems. [xxvii] This way, we find two neuroscientific narratives securitizing the realm of the cyborg warrior: On the one hand, we find a narrative of the ‘dark side’ of unregulated ‘neurowarfare’ with globally networked, self-learning machines taking control over human life and death. This could be called the ‘neuroskeptic’ narrative. [xxviii] On the other hand, we find a narrative of AI-enabled cognitive augmentation and ‘decision superiority’ that strengthens situational awareness, enhances warfighter performance, and integrates effects across multiple domains of operation. This could be called the ‘neurooptimistic’ narrative. Both narratives draw on and contribute to the emergence of a neuroscientific network discourse that securitizes the boundaries of neurospace. However, these boundaries are inherently unstable and constantly renegotiated and reconstructed as temporary regulations and arrangements on the battlefield. These arrangements include both people and technologies, military doctrines, legal and ethical conventions, technical specifications, and political programs that perform and co-shape the neurospace as a distinct domain of operation.

This understanding radically alters the way we normally perceive the relationship between humans and technology and challenges the existing norms and boundaries of warfare. Instead of using a classical binary distinction between humans and machines, we see them as collaborative risk mediators, co-shaping and co-performing mission planning and execution. This involves a shift from a ‘human-centric’ understanding of intelligence and agency to a distributed (non-hierarchical) network model acknowledging the interconnectedness of humans and technology. [xxix]

Distributed Cognitive Networks and Multi-Domain Warfare

According to Wurzman and Giordano, there is significant utility for weaponized neurotechnologies and cyborgs in contemporary warfare, where threat environments are “asymmetric, amorphous, complex, rapidly changing, and uncertain” and “require greater speed and flexibility”. [1] This view is supported by the general characterization of multi-domain warfare and the pursuit of ‘game-changing’ military technologies needed to ensure success on the battlefield. Thus, it is commonly maintained that battlefield success depends on the ability to operate in an increasingly networked, accelerated, and information-intensive security environment. [2] It is widely believed that the proliferation and use of information technologies on the battlefield make it vital to maintain “superiority in the generation, manipulation and use of information”, i.e. ‘information dominance’, to secure ‘decision superiority’ – deciding better and faster than adversaries. [3]

This information-driven approach to the future battlefield is expressed in the RAS MDO white paper (2018), stating that “[t]he future force requires the ability to collect, assess, analyze, and fuse data through the employment of AI”. Referring to the MDO concept, the white paper describes how advanced networks of humans and intelligent machines can be used to outmaneuver enemy forces and counter their Anti-Access/Area Denial (A2/AD) capabilities across domains (land, air, maritime, space, and cyberspace), the electromagnetic (EM) spectrum, and the information environment. The key to successful battle management in MDO, according to the white paper, is networked artificial and human intelligence:

“Artificial intelligence agents and algorithms will enable future force operations by processing, exploiting, and disseminating intelligence and targeting data. Operating forces will use AI to cue sensors and integrate cross-domain fires; reduce a staff’s cognitive load while simultaneously enabling a commander’s decisions at the pace of battle; and manage airspace, networks, and robotic and autonomous systems” (RAS MDO white paper).

To support this view, organizations like the Defense Advanced Research Project Agency (DARPA) have started neurotechnological research projects that examine advanced signal-processing techniques for real-time coding of neural patterns to improve military decision-making and predictive analytics. [4] This includes neural interfaces and sensor designs that interact with the central and peripheral nervous system using nanoneuroscience, neuroimaging, and cyber-neuro systems. [5] These technologies provide techniques and tools that assess, access, and target neural systems and can affect the cognitive, emotional, and behavioral aspects of human performance in military operations. [6]

The general assumption in these assessments is that neurotechnological progress will gain significant importance and impact as a ‘force multiplier’ in the future battlefield, “in which a combination of unmanned systems, artificial intelligence, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone”. [7] Similarly, it is concluded that intelligent, distributed human-machine networks will assist human operators in advanced sensor grids and intelligence analyst workbenches, coordination of joint or coalition operations, logistics, and information assurance. [8] This would allow future forces to understand the operational environment in real-time, increase speed and situational awareness, lighten the warfighters’ cognitive workload, leverage autonomous and robotic systems, and converge capabilities across all domains of operation.

These assessments are key elements in the neuroscientific security discourse and draw heavily on the neuro-optimistic narrative of ‘decision superiority’ in multi-domain warfare. However, looking at the flip side of the neuroscientific imaginary, we find a competing narrative stressing the vulnerability of cyborg systems to cyberattacks, intrusion, and manipulation of information by enemy forces, threatening the core functions of the network and the security of the human operators. How can they be protected from ‘neural malware’ infecting the network? If humans and computers are neurally networked, can human operators be ‘hacked’ or even controlled by enemy governments, terrorists, or cybercriminals? What would a ‘neural attack’ look like, and how could it be detected? Would we be entering an era of cyborg flash wars, occurring at machine speed, far beyond the limits of human perception? These are unsettling—and unanswered—questions raised by neuroskeptics and opponents of neurowarfare.

To get a better understanding of these challenges, we will have to inspect the human-machine interface and the way human and artificial intelligence mutually shape and mediate the perception of the operational environment. We want to explore how these transgressive technologies mediate human and machine cognition, and how they shape the realm of the cyborg warrior.

Brain-Computer Networks and Collaborative Risk Management

Following the seminal report on Emerging Cognitive Neuroscience and Related Technologies, [9] the basis of neurocognitive technologies and brain-computer networks is the capture and visualization of various forms of energy emissions from the working brain. This visualization is achieved through functional neuroimaging devices [10] i.e. devices that present digital images of neural activity in the human brain, e.g. fMRI or EEG (see Figure 1).


Figure 1: Functional Magnetic Resonance Imaging (fMRI).

Neuroimagery can detect and classify human cognitive states, such as fatigue or mental and sensory overload, in real time by measuring changes in brain activity. By visualizing different aspects of brain activity, neuroimaging technologies offer different windows onto complex neural processes, often intending to understand the relationship between regional neural activity and specific tasks, stimuli, cognition, and behavioral patterns. [11] The detection, classification, and interpretation of specific patterns of neural activity can be conducted by machine learning through advanced signal processing and pattern recognition, allowing a bidirectional transmission of information between human and machine. While the development of neuroimaging technologies and self-learning algorithms forms the basis of advanced brain-computer interfaces and augmented sensory capacities (e.g. visual and auditory enhancement), direct neural enhancement of the human brain is still at an early stage of development and unlikely to be available before 2050. [12] Nevertheless, according to NATO’s tech trends 2020–2040 report, cognitive enhancement based on bidirectional data transfer and mesh networks is a real possibility. [13] As recent developments in DARPA’s Augmented Cognition Program show, functional neuroimaging technology combined with machine learning and AI can be used to control and communicate with unmanned and remotely piloted systems, allowing efficient searching and encyclopedic access to information. [14] This requires an efficient process of neural decoding and translation between the human brain and the computer, either via invasive neurotechnological implants (nanotransducers) or non-invasive external devices (see Figure 2).


Figure 2: Two technical areas: Non-invasive and minutely invasive neurotechnologies. Source: https://www.darpa.mil/attachments/2EmondiPresentationPDFversion.pdf.
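To give a rough, non-specialist illustration of the pattern-recognition step described above (machine learning detecting cognitive states such as fatigue or overload from neural signals), the sketch below trains a simple classifier on synthetic ‘EEG-like’ feature vectors. It is a toy example in Python using scikit-learn; the features, labels, and class separation are invented for illustration and do not correspond to any real neuroimaging pipeline or fielded system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for features extracted from EEG epochs
# (e.g. band power per channel): 100 'nominal' and 100 'overload' epochs.
nominal = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
overload = rng.normal(loc=1.0, scale=1.0, size=(100, 8))  # shifted distribution
X = np.vstack([nominal, overload])
y = np.array([0] * 100 + [1] * 100)  # 0 = nominal workload, 1 = cognitive overload

# Train a simple linear classifier to recognize the two cognitive states.
clf = LogisticRegression().fit(X, y)

# Classify a new (synthetic) epoch as it arrives 'in real time'.
new_epoch = rng.normal(loc=1.0, scale=1.0, size=(1, 8))
state = clf.predict(new_epoch)[0]
print("estimated operator state:", "overload" if state == 1 else "nominal")
```

In an actual brain-computer interface, the feature extraction, classifier, and class definitions would be far more elaborate, but the basic logic of mapping measured neural activity to a discrete cognitive state is the same.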

As an example, DARPA has been working on non-invasive brain-computer interfaces that use the human visual system as the input device to a computer system to increase the speed of data processing in visual search mode. [15] In DARPA’s Next-Generation Non-Surgical Neurotechnology (N3) Program, the goal is to create reliable neural interfaces with no surgery. [16] Instead of invasive brain implants, the brain-computer interface is designed as a wearable, head-mounted device (cap, helmet, or visor) that transmits electrical signals from the brain to the computer and back to the operator in a closed-loop, bidirectional feedback system (see Figure 3). The brain signals are picked up by sensors in the wearable interface, analyzed and translated by AI, and sent back as an output signal to the human operator, for instance as a list of alternative options to engage a target or coordinate data streams from other platforms or networked weapons systems. Ultimately, it is envisioned that adaptive neuro-feedback systems could help to develop and evaluate targeting data, create layered options, enable cross-domain synergy, and exploit opportunities in time-sensitive environments. At this stage of development, these interfaces would primarily be suited for analysts and operators in military reach-back facilities and headquarters that provide a relatively stable and controlled environment. However, a better understanding of closed-loop and adaptive neuro-feedback systems will be necessary to improve systems design and maximize human performance while simultaneously avoiding mental or cognitive overload in operators and intelligence analysts.


Figure 3: The closed-loop brain-computer cycle as illustrated by van Gerven et al. (2009).

According to several scholars, this requires a shift from a human-centric model of intelligence and agency to a network model that involves the interconnectedness of humans and intelligent systems in advanced AI-based networks. [17] Whereas early intelligent systems were conceived as disembodied entities (often caricatured as a floating brain in a glass jar hooked up with a bunch of electric wires), networked cyborg systems are embodied technologies that sense and interact with the environment in numerous ways through both human and non-human sensors and operators. This ‘embodiment’ of networked technologies is perhaps the most radical and transgressive property of cyborg weapons systems. It is also the distinct quality that makes cyborg weapons systems something else and far more than just another piece in the military toolkit. They cannot be adequately understood as isolated components or pieces of military equipment. More profoundly, we contend, they should be understood as neurocognitive assemblages that continuously translate and mediate human and machine perception and agency.

Neuroscientists Dylan Schmorrow and Amy Kruse have described the mediation of human and machine perception as ‘closed-loop augmented cognition’, based on Human-in-the-Loop System Adaptation – or simply neuro-feedback. [18] In adaptive closed-loop systems, the brain-computer feedback process starts with the operator engaging in a cognitive task while receiving possible stimuli (e.g. visual or sensory input). As shown by Marcel van Gerven et al. [19], the neural activity of the human operator is detected by sensors and processed by the computer (see Figure 3). The AI predicts an outcome and generates an output signal, which is transmitted to the operator either directly to the brain or via an external interface. The output signal can be presented in multiple forms and modalities, such as text, auditory input, motor commands (e.g. controlling prosthetics or unmanned systems), or graphical and vibrotactile representations of brain activity. [20] The decision cycle is closed by the operator perceiving the output, which allows an evaluation and adaptation of the feedback process. While iterating through the cycle, both the operator and the computer may learn to adapt, increasing the cognitive performance of the overall system. [21]
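The cycle described by Schmorrow, Kruse, and van Gerven et al. can be summarized schematically. The sketch below is our own rendering of that closed loop in Python, with placeholder functions standing in for the sensing hardware, the AI decoding model, and the feedback channel; none of it is drawn from an actual system, and the adaptation step is deliberately trivial.

```python
import random

def read_neural_signal():
    """Placeholder for sensor acquisition (e.g. EEG electrodes in a head-mounted device)."""
    return [random.gauss(0.0, 1.0) for _ in range(8)]

def decode_state(signal, gain):
    """Placeholder for the AI decoding step: maps a raw signal to an estimated workload."""
    return gain * sum(abs(x) for x in signal) / len(signal)

def present_output(workload):
    """Placeholder for feedback to the operator (text, audio, or control of an unmanned system)."""
    return "reduce task load" if workload > 1.0 else "continue"

gain = 1.0
for cycle in range(5):                              # a few iterations of the closed loop
    signal = read_neural_signal()                   # 1. operator performs a task; brain activity is sensed
    workload = decode_state(signal, gain)           # 2. the computer decodes the neural pattern
    advice = present_output(workload)               # 3. an output signal is returned to the operator
    gain *= 0.95 if advice == "reduce task load" else 1.02  # 4. both sides 'adapt', closing the loop
    print(f"cycle {cycle}: workload={workload:.2f}, advice={advice}")
```

The point of the sketch is only to show where human perception and machine inference sit in the same feedback loop; it is that joint loop, not either element alone, that the article treats as the unit of analysis.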

This adaptive system approach blurs the distinction between human and artificial intelligence and attributes agency and decision authority to both humans and machines in hybrid system architectures. [22] This includes complex collaborative tasks, such as target recognition, threat analysis, mission planning, and intelligence fusion.

In contrast to stand-alone AI and machine learning algorithms, adaptive brain-computer networks with humans ‘in the loop’ can respond to unforeseen changes and exercise discretionary judgment in mission planning and control of operations. This is essential in complex and time-sensitive tasks such as dynamic targeting, [23] where the prioritization of targets can change in an instant depending on operational circumstances. This form of technological mediation and agency not only supports human decision-making. It reshapes and speeds up the entire ‘OODA loop’ [24] in the military decision cycle. The network approach to technological mediation and agency recognizes that integrated artificial and human cognition is crucial for the conduct of missions in which functions of speed, amount of information, and synchronization might overwhelm human decision-making. Thus, neurocognitive weapons systems and cyborg technologies are not just cooperative in the sense of ‘team members’ or robotic assistants interacting with and enabling human operators to perform ‘dirty, dull, and dangerous’ tasks. They are not just ‘intelligent tools’ projecting human intention and agency. Rather, they should be understood as collaborative risk mediators that actively co-shape and mediate the perception, evaluation, and communication of risk in MDOs. In this perspective, risk management results from shared human-machine cognition and the coproduction of critical decisions in military C4ISR networks. This means that decision-making is seen as a joint effort of human beings and intelligent technologies. In the words of Peter-Paul Verbeek, it is an “… inherently hybrid affair, involving both human and non-human intentions, or better ‘composite intentions’ with intentionality distributed over the human and non-human elements in human-technology-world relationships. Rather than being derived from human agents, this intentionality comes about in associations between humans and non-humans. For that reason, it could be called ‘hybrid intentionality’”. [25]
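To make the idea of a human ‘in the loop’ of a machine-accelerated decision cycle a little more tangible, the sketch below runs a toy dynamic-targeting loop in which an algorithm re-prioritizes candidate targets as estimates change, while every engagement recommendation still passes through an explicit human-judgment step. All names, numbers, and rules are invented for illustration; this is a schematic of the control flow discussed above, not a representation of any real targeting or C4ISR system.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    threat: float          # machine-estimated threat level (illustrative)
    civilian_risk: float   # machine-estimated collateral-damage risk (illustrative)

def machine_prioritize(candidates):
    """Machine step: re-rank candidates by estimated threat at machine speed."""
    return sorted(candidates, key=lambda c: c.threat, reverse=True)

def human_review(candidate):
    """Human step: discretionary judgment applying distinction and proportionality.
    A simple rule stands in here for the operator's context-sensitive evaluation."""
    return candidate.civilian_risk < 0.2  # withhold approval if estimated risk is too high

candidates = [
    Candidate("radar site", threat=0.9, civilian_risk=0.05),
    Candidate("supply convoy", threat=0.7, civilian_risk=0.40),
]

for target in machine_prioritize(candidates):                  # the machine proposes, in priority order
    decision = "engage" if human_review(target) else "hold"    # the human disposes
    print(f"{target.name}: machine priority {target.threat:.1f} -> {decision}")
```

The division of labour in the sketch is deliberately binary for clarity; the article's argument is precisely that in cyborg networks the machine and human contributions are far more entangled than this, which is what raises the questions of responsibility discussed next.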

This leads to something beyond a conventional brain-computer interface – the goal is not merely to control external devices by interfacing them with the brain [26], but more profoundly to merge human and machine cognition. However, attributing intentionality and agency to hybrid networks rather than individual human operators introduces a range of questions concerning command responsibility, decision authority, and transparency, such as: Under what circumstances would the use of neurocognitive weapons be justified? What rules define the cyborg warrior as a legal subject? If intentionality and hence responsibility are distributed in hybrid network arrangements, who can be held legally and ethically accountable in case of misconduct or malfunction of the system? What military doctrines, policies, and agreed protocols would provide a governing framework for the use of weaponized neurotechnology? [27] To frame it differently: Cyborg and neurocognitive weapons systems mediate not only the perception and management of risk. They also co-produce a new domain of risk, situated and performed at the intersection of human and artificial intelligence, demarcating the frontiers of neurospace. This is what we have characterized as a transgressive property of cyborg weapons systems, challenging existing international laws and conventions of war.

Cyborg Ethics and Network Command

While advanced brain-computer networks could enable dramatic improvements in the mission performance of both human operators and autonomous machines, such hybrid system architectures will require a change of existing norms and categorizations of what makes up a moral agent and a legal subject in military operations. This poses significant challenges to existing doctrines and conventions of war that must be addressed before the fielding of cyborgs and neuroweaponry. Since they contain many of the defining features of robotic and autonomous weapons systems, they also contain many of the same potential problems. [28] Much of the difficulty encountered in the ongoing controversy over robotic and autonomous weapons systems is centered on the key questions of command responsibility, transparency, and the ability of the system to explain recommendations.

“Artificial intelligence support technologies must be able to explain recommendations, and autonomous systems provide data that explains decisions. System integration, interchangeability, and communication require that the Joint Force define standards for architecture, language, and protocols between robotic and autonomous systems, platforms, and payloads” (RAS MDO white paper).

If military personnel are asked to adopt AI-enabled systems, they must be able to trust that these systems work as intended. Until now, one of the main obstacles to building trust in and exploiting the potential of autonomous weapons systems has been the lack of transparency in AI-based processes and semantics. As remarked by the NATO Sub-Committee on Technology Trends and Security (2019): “Today, it is still very difficult and sometimes impossible to understand if AI systems draw the right conclusions and even how they arrive at those conclusions”. [29] The systems often appear as ‘black boxes’ to researchers and operators. “Algorithms sometimes produce ‘odd’ results, solve problems in a counterintuitive or false manner, and sometimes even ‘cheat’”. [30] Even system engineers and programmers cannot fully explain why advanced AI algorithms choose some options and not others, and why they come up with the solutions they do. Without transparent and ‘explainable AI’, responsibility will be difficult to place, and trust in the system will be hard to attain.

The same obstacles emerge with the use of AI-enabled cyborgs and neurocognitive weapons systems. The problem lies in the transgressive nature of cyborg systems that are ruled not by human or machine intent, but by ‘hybrid intentionality’ distributed among multiple entities in the battle network. The notion of hybrid or composite phenomena does not sit well with the binary ‘either-or logic’ in military and legal terminology. Hence, the question remains: Who—human or machine—is ultimately responsible for decisions made or actions taken during mission execution with a neuro-enhanced brain-computer architecture?[31]

To answer this question and bridge the so-called ‘responsibility gap’, efforts have been made by the international security community under the auspices of the UN Convention on Certain Conventional Weapons (UN CCW) to define a set of governing principles for the use of robotic and autonomous systems in military operations. Although progress has been slow because of conflicting security interests and the lack of a clear definition of the word ‘autonomous’, consensus has been achieved on the somewhat vague notion that robotic and autonomous weapons systems must incorporate ‘meaningful human control’ to be lawfully deployed. [32]

The exercise of ‘meaningful human control’ is closely associated with compliance with the legal and ethical conventions of International Humanitarian Law. This includes the principles of distinction (Article 8(2)(b)(i)) and proportionality (Article 8(2)(b)(iv)), i.e. the ability to distinguish between combatants and non-combatants and avoid collateral damage and civilian injury that would be excessive compared to the expected military advantage. Critics such as Human Rights Watch and the Campaign to Stop Killer Robots have repeatedly stated that compliance with these principles requires human judgment and the capacity to adapt ethical considerations to a complex and unpredictable risk environment. Since robotic and autonomous machines do not possess the critical capacity of human judgment and adaptability to unforeseen situations, and since they cannot be held legally and ethically accountable for their actions, they are inherently unlawful weapons, some argue. [33]

The objection relies on two basic assumptions: 1) that autonomous machine cognition and behavior are uncontrollable once activated, and 2) that the use of such machines violates International Humanitarian Law. A network perspective that treats humans and intelligent machines as collaborative risk mediators can refute both assumptions. It can bridge the responsibility gap in cyborg and neurocognitive battle networks by keeping the human operator ‘in the loop’ and applying legal and ethical judgment (i.e. ‘meaningful human control’) in all phases of the military decision cycle. What we could dub ‘cyborg ethics’, however, requires a turn from a strict hierarchical model of military command to a network approach that recognizes the profound entanglement of human and machine perception and mediation of risk. A concept of ‘network command responsibility’ must be formed to comply with the legal and ethical criteria of International Law. To address these central aspects of cyborg ethics, we will draw on two equally important interpretations of the network discourse that have been introduced in recent military and legal discussions: the key concepts of ‘network command’ [34] and ‘network share liability’. [35] Both concepts are needed to avoid ‘legal black holes’ and to bridge the responsibility gap in cyborg and neurocognitive weapons systems.

First, we will examine the concept of network command as a collaborative approach to human-machine decision-making and agency. In his international bestseller Command – The Twenty-First-Century General, Anthony King describes what he characterizes as a paradigmatic shift from a classical hierarchical command regime to a collective command regime. [36] According to King, the collective command regime reflects the development of a multi-dimensional and information-intensive operational environment that puts the traditional hierarchical command-and-control structure under pressure. The ambiguous and rapidly changing battlefield challenges the legal-rational and often cumbersome processes of the bureaucratic order. In a state of near-peer competition, where adversaries have successfully deployed their A2/AD capabilities, a new type of distributed and collaborative mission command is required to exploit cross-domain synergies, maximize effect, and outperform competitors:

“Complex environments require different leadership and decision-making techniques than succeeding in simple or complicated environments… experimentation and collaboration are keys to success in the complex domain… To enable collaboration, leaders, and staff must be capable of forming more flat, distributed organizations besides traditional hierarchical models.” [37]

The NATO Strategic Foresight Analysis (SFA) 2017 supports this view. In the SFA report, the shift to a more collaborative and innovative organization model that replaces traditional ‘stove-piped working practices’ is seen as a requirement in a future security environment characterized by growing interconnectedness, disruptive changes, and rapid technological advancements:

“This will require a shift from an organizational culture that takes an incremental approach, has stove-piped working practices and waits for greater clarity, to one that has a more collaborative approach that supports bold and innovative decisions”. [38]

Similarly, in the RAS MDO white paper (2018), a more distributed and collaborative network approach to command and control is seen as necessary to maintain situational awareness and a Common Operational Picture “that captures all systems in real-time and allows for mission command of multiple manned and unmanned systems”.

We argue the turn toward a network command regime can be seen as a response to the proliferation of embedded AI in decision support technologies, unmanned systems, wearable and portable devices, as well as adaptive, closed-loop brain-computer interfaces. [39] It reflects the growing influence of pervasive computing and augmented human-machine performance systems in a risk environment where “the increased number of sensors and platforms, all processing and transmitting high volumes of diverse data at tactical speeds, exceeds human cognitive capabilities in time-sensitive environments”. The network command regime is inextricably linked with the changing nature of a contested battle space that “requires the capability to execute tactical, operational, and strategic communications and data sharing beyond-line-of-sight through a secure, autonomous, self-healing and intelligent network”.

As the network discourse has gained growing influence in military strategy and doctrine development, the question of legal and ethical responsibility has been pushed to the front: How is the law to respond to a network command regime where decisions are shaped and distributed between multiple (human and non-human) operators, sensors, and platforms? How can we avoid legal black holes and assign responsibility to cyborgs and neurocognitive weapons systems?

According to sociologist and legal scholar Günther Teubner [40], the attribution of responsibility to complex collaborative networks challenges the binary logic of legal semantics and creates a general state of ‘irritation’ or ‘hybridization’ of law. Legal doctrine cannot simply adopt the term network command, but must itself reconstruct a legal definition out of its internal logic. [41] As a response to the network irritation, a new legal construct of network share liability emerges in hybrid law, distinguishable from both individual (contractual/market) liability and collective (corporate/hierarchical) liability. The construct of network share liability is especially suited to situations where the contribution of networked operators to mission execution cannot be traced back to individual nodes but only to the network itself:

“The form of liability is a decentralized, multiple, and collective combination of network liability and the liability of nodes who have taken part in the operation under scrutiny. In contrast to comprehensive collective liability with formal organizations, this leads to a re-individualization of collective liability within networks.” [42]

The legal solution of network share liability is to allow a ‘double attribution’ of responsibility to individual operators and the network: the same transaction is doubly attributed, to the individual network nodes and to the overall network. [43] No decision or course of action is seen as an isolated event, but always as part of a collective arrangement of humans and technologies.

With the legal construct of network share liability, the practice of hybrid law becomes responsive to the transgressive characteristics of cyborg weapons systems and the practice of a network command regime. As pointed out by legal scholar Inger Johanne Sand, hybrid law is a response to a growing demand for flexible and many-dimensional organizing concepts:

“To be relevant and effective law is using networks instead of only formal organizations, soft law instead of hard law, preambles and purpose-statements instead of formally binding obligations, references to knowledge and technologies instead of specific legal semantics.” [44]

The turn toward a network paradigm in military and legal semantics, we argue, mirrors the increasingly hybrid arrangements and practices of humans and intelligent technologies in advanced information and communication networks. It forms the basis of a new military and legal framework of cyborg ethics. The defining characteristic of cyborg ethics is the legal construct of double attribution of responsibility, i.e. the simultaneous attribution of responsibility to individual network nodes and the collective command network, combining principles of different and often contradictory legal regimes of collective and individual liability: “Instead of the binary distinction legal/non-legal, there are oscillations between different legalities… What is legal will then often be a close oscillation between contradictory legal norms and different values”. [45]

Thus, cyborg ethics involves the emergent practice of a more reflexive and context-sensitive form of jurisprudence, where obligations and prescriptions from multiple legal regimes interact to form a complex web of international governance. [46] As a new form of network risk management, the jurisprudence of cyborg ethics requires the ability of military officeholders and warfighters to coordinate and translate between a plurality of competing norms, standards, and values in military doctrines, legal conventions, and political programs. On the contemporary sped-up and hybrid battlefield, this translation takes place at the interface of humans and intelligent technologies shaping critical decisions in the full spectrum of operations. [47]

Conclusion

Adopting the notion of shared human-machine agency and responsibility in cyborg and neurocognitive weapons systems allows us to transcend the classical human-centric and hierarchical order of military command and organization. To avoid so-called legal black holes and to bridge the responsibility gap in cyborg weapons systems, we propose a network approach to human-machine interaction and risk management that recognizes the intrinsic entanglement and co-constitution of human and machine intelligence. Formal organizational structures and legal orders are mandatory, but they must be coupled with a more reflexive and context-sensitive form of jurisprudence, i.e. the ability to evaluate, coordinate, and translate between multiple legal, ethical, and political definitions of transparency, accountability, and meaningful human control. In cyborg systems and augmented brain-computer interfaces, decision-making and hence risk management should be viewed as a joint effort of human operators and intelligent machines. Understanding how the realm of neurospace and cyborg warfare can be demarcated and regulated then requires interdisciplinary experimentation and collaboration between military operators, system engineers, lawyers, and policymakers. As robotic and cyborg weapons systems proliferate, general guidelines and rules of engagement will help to build trust in human-machine interaction and support decision-making in a contested and increasingly networked battle space. Promoting a timely and prudent discussion of cyborg ethics and network command is not just a futuristic endeavor. It is a matter of urgency that governments and the international security community must consider in terms of reducing vulnerabilities and enhancing joint war-fighting capabilities.


[1] Wurzman, R., & Giordano, J. (2015). ’NEURINT’ and Neuroweapons: Neurotechnologies in National Intelligence and Defense. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 79–113). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454

[2] Oie, K. S., & McDowell, K. (2015). Neurocognitive Engineering for Systems’ Development. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 33–50). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454-5  

[3] Id.

[4] Farwell, J. P. (2015). Issues of Law Raised by Development and Use of Neuroscience and Neurotechnology in National Security and Defense. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 133–165). Boca Raton: CRC Press.  

[5] Krishnan, A. (2016). Military Neuroscience and the Coming Age of Neurowarfare. London: Routledge. DOI: https://doi.org/10.4324/9781315595429  

[6] Giordano, J. (2015). Neurotechnology, Global Relations, and National Security: Shifting Contexts and Neuroethical Demands. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 1–10). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454-2  

[7] Emondi, A. (2020). Next Generation Non-Surgical Neurotechnology – DARPA homepage. Retrieved 31.03.2020 from https://www.darpa.mil/program/next-generation-nonsurgical-neurotechnology  
NATO Science and Technology Organization. (2020). Science and Technology Trends 2020–2040: Exploring the S&T Edge. NATO Headquarters, Brussels.

[8] National Research Council of the National Academy of Sciences. (2008). Emerging Cognitive Neuroscience and Related Technologies. Washington, DC: The National Academies Press. DOI: https://doi.org/10.17226/12177  

[9] Id.

[10] Functional neuroimaging technologies rely on the imaging of localized neural activity coupled to different types of stimuli, e.g. visual, cognitive, emotional, or physical tasks. 

[11] National Research Council of the National Academy of Sciences. (2008). Emerging Cognitive Neuroscience and Related Technologies. Washington, DC: The National Academies Press. DOI: https://doi.org/10.17226/12177  

[12] NATO Science and Technology Organization. (2020). Science and Technology Trends 2020–2040: Exploring the S&T Edge. NATO Headquarters, Brussels.

[13] Id.

[14] Id.

[15] Id.

[16] Emondi, A. (2020). Next Generation Non-Surgical Neurotechnology – DARPA homepage. Retrieved 31.03.2020 from https://www.darpa.mil/program/next-generation-nonsurgical-neurotechnology  

[17] Oie, K. S., & McDowell, K. (2015). Neurocognitive Engineering for Systems’ Development. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 33–50). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454-5  

[18] Schmorrow, D. D., & Kruse, A. A. (2004). Augmented Cognition. In W. S. Bainbridge (Ed.), Berkshire Encyclopedia of Human-Computer Interaction (pp. 54–59). Great Barrington, Massachusetts: Berkshire Publishing Group.  

[19] van Gerven, M., Farquhar, J., Schaefer, R., Vlek, R., Geuze, J., Nijholt, A., Ramsey, N., Haselager, P., Vuurpijl, L., Gielen, S., & Desain, P. (2009). The brain-computer interface cycle. Journal of Neural Engineering, 6(4), 1–10. DOI: https://doi.org/10.1088/1741-2560/6/4/041001

[20] Id.

[21] Id.

[22] Murray, S., & Yanagi, M. A. (2015). Transitioning Brain Research: From Bench to Battlefield. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 11–22). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454-3  

[23] Dynamic targeting consists of six distinct steps: find, fix, track, target, engage, and assess (F2T2EA). Retrieved 24.06.2020 from: https://www.doctrine.af.mil/Portals/61/documents/Annex_3-60/3-60-D17-Target-Dynamic-Task.pdf. 

[24] The ‘OODA loop’ refers to the four phases of the military decision cycle: observe, orient, decide, act. See Boyd (1995). 

[25] Verbeek, P. (2009). Moralizing Technology: on the morality of technical artifacts and their design. In D. Kaplan (Ed.), Readings in the Philosophy of Technology (pp. 226–243). Lanham: Rowman and Littlefield  

[26] National Research Council of the National Academy of Sciences. (2008). Emerging Cognitive Neuroscience and Related Technologies. Washington, DC: The National Academies Press. DOI: https://doi.org/10.17226/12177  

[27] McCreight, R. (2015). Brain Brinkmanship: Devising Neuroweapons Looking at Battlespace, Doctrine and Strategy. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 115–132). Boca Raton: CRC Press. Farwell, J. P. (2015). Issues of Law Raised by Development and Use of Neuroscience and Neurotechnology in National Security and Defense. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 133–165). Boca Raton: CRC Press.

[28] Murray, S., & Yanagi, M. A. (2015). Transitioning Brain Research: From Bench to Battlefield. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 11–22). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454-3  

[29] NATO Science and Technology Organization. (2020). Science and Technology Trends 2020–2040: Exploring the S&T Edge. NATO Headquarters, Brussels.

[30] Tonin, M. (2019). Artificial Intelligence: Implications for NATO’s Armed Forces. NATO Science and Technology Committee, Sub-Committee on Technology Trends and Security (STCTTS). Retrieved from https://www.nato-pa.int/download-file?filename=sites/default/files/2019-10/REPORT%20149%20STCTTS%2019%20E%20rev.%201%20fin-%20ARTIFICIAL%20INTELLIGENCE.pdf  

[31] Murray, S., & Yanagi, M. A. (2015). Transitioning Brain Research: From Bench to Battlefield. In J. Giordano (Ed.), Neurotechnology in National Security and Defense: Practical Considerations, Neuroethical Concerns (pp. 11–22). Boca Raton: CRC Press. DOI: https://doi.org/10.1201/b17454-3  

[32] Horowitz, M. C., & Scharre, P. (2015). Meaningful Human Control in Weapon Systems: A Primer. Project on Ethical Autonomy No. 1. Washington, DC: Center for a New American Security. Retrieved from https://www.files.ethz.ch/isn/189786/Ethical_Autonomy_Working_Paper_031315.pdf  

[33] Sharkey, N. (2008). Cassandra or False Prophet of Doom: AI Robots and War. IEEE Intelligent Systems, 23(4), 14–17. DOI: https://doi.org/10.1109/MIS.2008.60  

[34] King, A. (2019). Command – The Twenty-First-Century General. Cambridge: Cambridge University Press. DOI: https://doi.org/10.1017/9781108642941  

[35] Teubner, G. (2004). Coincidentia Oppositorum: Hybrid Networks Beyond Contract and Organization. Storrs Lectures 2003/4, Yale Law School.

[36] King, A. (2019). Command – The Twenty-First-Century General. Cambridge: Cambridge University Press. DOI: https://doi.org/10.1017/9781108642941  

[37] Klein 2017, as cited in King, A. (2019). Command – The Twenty-First-Century General. Cambridge: Cambridge University Press. DOI: https://doi.org/10.1017/9781108642941  

[38] NATO. (2017). Strategic Foresight Analysis. Retrieved from https://www.act.nato.int/images/stories/media/doclibrary/171004_sfa_2017_report_hr.pdf  

[39] Skinner, A., Russo, C., Baraniecki, L., & Maloof, M. (2014). Ubiquitous Augmented Cognition. In D. D. Schmorrow & C. M. Fidopiastis (Eds.), Foundations of Augmented Cognition: Advancing Human Performance and Decision-Making through Adaptive Systems (pp. 67–77). Springer International Publishing. DOI: https://doi.org/10.1007/978-3-319-07527-3_7  

[40] Teubner, G. (2004). Coincidentia Oppositorum: Hybrid Networks Beyond Contract and Organization. Storrs Lectures 2003/4, Yale Law School.

[41] Id.

[42] Id.

[43] Id.

[44] Sand, I. J. (2012). Hybridization, Change and the Expansion of Law. In N. Å. Andersen & I. J. Sand (Eds.), Hybrid Forms of Governance: Self-suspension of Power (pp. 186–204). Basingstoke: Palgrave Macmillan. DOI: https://doi.org/10.1057/9780230363007_11  

[45] Id.

[46] Crootof, R. (2015). The Varied Law of Autonomous Weapon Systems. In A. P. Williams & P. Scharre (Eds.), Autonomous Systems: Issues for Defence Policymakers (pp. 98–126). Norfolk, Virginia: Allied Command Transformation (ACT).  

[47] Nørgaard, K. (2017). A Study of Military Technopolitics: The Controversy of Autonomous Weapon Systems. Copenhagen: Royal Danish Defence College. Retrieved from https://pure.fak.dk/files/7137147/A_Study_of_Military_Technopolitics_NET.pdf  
