DEFENSE & SECURITY FORUM
Automating the OODA Loop in the Age of Intelligent Machines
Reaffirming the Role of Humans in Command-and-control Decision-making in the Digital Age
James Johnson is a Lecturer in Strategic Studies at the University of Aberdeen, King’s College, UK;
an Honorary Fellow at the University of Leicester;
a Non-Resident Associate on the ERC-funded Towards a Third Nuclear Age Project;
and a Mid-Career Cadre member with the Center for Strategic and International Studies (CSIS) Project on Nuclear Issues.
Volume I, Issue 2, 2022
Defense & Security Forum
Remy Mauduit, Editor-in-Chief
James Johnson (2022) Automating the OODA Loop in the Age of Intelligent Machines: Reaffirming the Role of Humans in Command-and-control Decision-making in the Digital Age, Defense Studies, DOI: 10.1080/14702436.2022.2102486.
ARTICLE INFO
Keywords
Artificial intelligence
machine learning
command and control
OODA loop
military theory
mission command
ABSTRACT
This article argues that artificial intelligence (AI) enabled capabilities cannot effectively or reliably complement (let alone replace) the role of humans in understanding and apprehending the strategic environment to make predictions and judgments that inform strategic decisions. The rapid diffusion of and growing dependency on AI technology at all levels of warfare will have strategic consequences that counterintuitively increase the importance of human involvement in these tasks. Therefore, restricting the use of AI technology to automate decision-making tasks at a tactical level will do little to contain or control the effects of this synthesis at a strategic level of warfare. The article re-visits John Boyd’s observation-orientation-decision-action metaphorical decision-making cycle (or “OODA loop”) to advance an epistemological critique of AI-enabled capabilities (especially machine learning approaches) to augment command-and-control decision-making processes. In particular, the article draws insights from Boyd’s emphasis on “orientation” as a schema to explain the role of human cognition (perception, emotion, and heuristics) in defense planning in a non-linear world characterized by complexity, novelty, and uncertainty. It also engages with the Clausewitzian notion of “military genius”—and its role in “mission command”—human cognition, systems, and evolution theory to consider the strategic implications of automating the OODA loop.
This article argues that artificial intelligence (AI) enabled capabilities cannot effectively, reliably, or safely complement—let alone replace—humans in understanding and apprehending the strategic environment to make predictions and judgments that inform and shape command-and-control (C2) decision-making—the authority and direction assigned to a commander. [1] The rapid diffusion of and growing dependency on AI technology (especially machine learning (ML)) [2] to augment human decision-making at all levels of warfare presage strategic consequences that counterintuitively increase the importance of human involvement in these tasks across the entire chain of command. [3] Because of the confluence of several cognitive, geopolitical, and organizational factors, the line between machines analyzing and synthesizing data (i.e. prediction) and humans who decide (i.e. judgment) will become increasingly blurred along the human-machine decision-making continuum. When the handoff between machines and humans becomes incongruous, this slippery slope will make efforts to impose boundaries on, or contain, the strategic effects of AI-supported tactical decisions inherently problematic and unintended strategic consequences more likely.
The article re-visits John Boyd’s observation-orientation-decision-action metaphorical decision-making cycle (or “OODA loop”) to advance an objective epistemological critique of using AI-ML-enabled capabilities to augment command-and-control decision-making processes. [4] Toward this end, the article draws insights from Boyd’s emphasis on “Orientation” (or “The Big O”) to explain the role of human cognition (perception, emotion, and heuristics) in defense planning and the importance of understanding the broader strategic environment in a non-linear world characterized by complexity, novelty, and uncertainty. It also engages with the Clausewitzian notion of “military genius” (especially its role in “mission command”) [5], human cognition [6], and systems and evolution theory [7], to consider the strategic implications of automating the OODA loop. The article speaks to the growing body of recent literature that considers the strategic impact of adopting AI technology—and autonomous weapons, big data, cyberspace, and other emerging technologies associated with the “fourth industrial revolution”—in the military decision-making structures and processes. [8]
The article contributes to understanding the implications of AI’s growing role in human decision-making in military C2. While the diffusion and adoption of “narrow” AI systems have had some success in nonmilitary domains in making predictions and supporting largely linear decision-making (e.g. the commercial sector, healthcare, and education), AI in a military context is much more problematic. [9] Specifically, military decision-making in non-linear, complex, and uncertain environments entails much more than copious, cheap datasets and inductive machine logic. In command-and-control decision-making, commanders’ intentions, the rules of law and engagement, and ethical and moral leadership are critical to the effective and safe application of military force. Because machines cannot replicate these intrinsically human traits, the role of human agents will become even more critical in future AI-enabled warfare. [10] As geostrategic and technologically deterministic forces spur militaries to embrace AI systems in the quest for first-mover advantage and to reduce their perceived vulnerabilities in the digital age, commanders’ intuition, latitude, and flexibility will be needed to mitigate and manage the unintended consequences, organizational friction, strategic surprise, and dashed expectations associated with implementing military innovation. [11]
The article is organized into three sections. The first unpacks Boyd’s OODA loop concept and its broader contribution to military theorizing, particularly the pivotal role of cognition in command decision-making to understand and survive a priori strategic environments, viewed within the broader framework of complex adaptive organizational systems operating in a dynamic non-linear environment. Section two contextualizes Boyd’s loop analogy, together with nonlinearity, chaos, complexity, and systems theories, against recent developments in AI-ML technology to consider the potential impact of integrating AI-enabled tools across the human-machine command-and-control decision-making continuum. This section also considers the potential strategic implications of deploying AI-ML systems in unpredictable and uncertain environments with imperfect information. Will AI ease or exacerbate war’s “fog” and “friction”?
Section two also explores human-machine teaming in high-intensity and dynamic environments. How will AI cope with novel strategic situations compared to human commanders? It argues that using AI-ML systems to perform even routine operations in complex and fast-moving combat environments is problematic and that tactical leaders exhibiting initiative, flexibility, empathy, and creativity remain critical. This section also contextualizes the technical characteristics of AI technology within the broader external strategic environment. Will AI-enabled tools complement, supplant, or obviate the role of human “genius” in mission command? The final section considers the implications of AI-ML systems for the relationship between tactical unit leaders and senior commanders. Specifically, it explores the potential impact of AI-enabled tools that improve situational awareness and intelligence, surveillance, and reconnaissance (ISR) on the notion of the twenty-first-century “strategic corporals” and juxtaposes the specter of “tactical generals.”
The “real” OODA loop is more than just speed
John Boyd’s OODA loop has become firmly established in strategic, business, and military discourse. [12] Several scholars have criticized Boyd’s OODA loop concept as overly simplistic, too abstract, and over-emphasizing speed and information dominance in warfare. Critics argue, for instance, that beyond granular tactical considerations (i.e. air-to-air combat), the OODA loop has minimal novelty or utility at a strategic level, for instance, in managing nuclear brinkmanship, civil wars, or insurgencies. [13] Some have also lambasted the loop as unoriginal, as pseudoscience (informed by thermodynamics, quantum mechanics, and human evolution), and as lacking scholarly rigor. The OODA concept struggles to meet the rigorous social science standards of epistemological validity, theoretical applicability, falsifiability, and robust empirical support.
Others argue these criticisms misunderstand the nature, rationale, and richness of the OODA concept and thus understate Boyd’s contribution to military theorizing, particularly the role of cognition in command decision-making. [14] In short, the OODA concept is less a rigorously tested epistemological or ontological model of warfighting (which the author never intended) than a helpful analogy—akin to Herman Kahn’s “escalation ladder” psychological metaphor [15]—for elucidating the cognitive processes and dynamics of C2 decision-making as commanders and their organizations “adjust or change to cope with new and unforeseen circumstances”. [16] Boyd’s concept captures how organizations and individuals psychologically experience the OODA loop in their respective temporal and spatial journeys across the strategic environment. [17]
The OODA concept was not designed as a comprehensive means to explain the theory of victory at the strategic level. Instead, the concept needs to be viewed as part of a broader canon of conceptualizations Boyd developed to explain the complex, unpredictable, and uncertain dynamics of warfare and strategic interactions. [18] Popularized depictions of the simplified version of Boyd’s concept (or the “simple OODA loop,” see Figure 1) neglect Boyd’s comprehensive rendering of the loop (or the “real OODA loop,” see Figure 2)—with insights from cybernetics, systems theory, chaos and complexity theory, and cognitive science—that Boyd introduced in his final presentation, The Essence of Winning and Losing. [19] The influence of this scientific and scholarly Zeitgeist manifests most clearly in the OODA loop’s vastly overlooked “orientation” element. [20]
Figure 1. The “simple” OODA loop.
Figure 2. The “real” OODA loop.
The “Big O”: the center of gravity in warfare
Insights from the cognitive revolution (Polanyi 1969), coupled with Neo-Darwinist research on the environment [21], the Popperian process of adaptation and hypothesis testing [22], and complexity and chaos theory [23], heavily influenced the genesis of Boyd’s “orientation” schemata comprising: political and strategic cultural traditions, organizational friction, experience and learning, new and novel information, and the analysis and synthesis of this information (see below). [24] Viewed this way, “observation” is a function of inputs from external information and an understanding of how these inputs interact with the strategic environment. Boyd wrote, “orientation, seen as a result, represents images, views, or impressions of the world shaped by genetic heritage, cultural traditions, previous experiences, and unfolding circumstances”. [25] Without these attributes, humans lack the psychological skills and experience to comprehend and survive in an a priori strategic environment. The OODA loop concept is a shorthand for these complex dynamics.
Elaborating on this idea, Boyd asserted that orientation “shapes the way we interact with the environment—hence orientation shapes the way we observe, the way we decide, the way we act. Orientation shapes the character of present observation-orientation-decision-action loops—while these present loops shape the character of future orientation” (emphasis added). [26] Without the contextual understanding provided by “orientation” in analyzing and synthesizing information, “observations” of the world have limited meaning. These assertions attest to the pivotal role of “orientation” in Boyd’s OODA decision-making analogy, the critical connecting thread enabling commanders to survive, adapt, and effectively decide under uncertainty, ambiguity, complexity, and chaos. [27] Boyd’s OODA analogy and notions of uncertainty and ambiguous information and knowledge are equally (if not more so) relevant for strategy in the digital information age—combining asymmetric information, mis-disinformation, mutual vulnerabilities, destructive potential, and decentralized, dual-use, and widely diffused AI technology. [28]
According to Boyd, orientation “shapes observation, shapes decision, shapes action and is shaped by the feedback and other phenomena coming into our sensing or observing window” (emphasis added). Thus, the loop metaphor depicts an ongoing, self-correcting, and non-static process of prediction, filtering, correlation, judgment, and action. Scientists, including Ian Stewart and James Clerk Maxwell, have similarly cautioned against assuming that the world is stable, static, and thus predictable. [29] These processes contain complex feed-forward and feedback loops (or “double-loop learning”) that implicitly influence “decision” and “action” and inform hypothesis testing of future decisions. [30] For instance, while military theorists often categorize politics as extrinsic to war (i.e. a linear model), the feedback loops that operate from the use of force to politics and from politics to the use of force are intrinsic to war. [31] As a corollary, Boyd argues that C2 systems must embrace (and not diminish) the role of implicit orientation and its attendant feedback loops. [32]
Similarly, Clausewitz argued that the interactive nature of war generates system-driven dynamics comprising human psychological forces and characterized by positive and negative feedback loops, leading to potentially limitless ways to seek one-upmanship in competition “to compel our enemy to do our will”. War, Clausewitz argues, is a “true chameleon,” exhibiting randomness, chance, and different characteristics in every instance. Therefore, it is impossible, as many military theorists tend to do, to force war into compartmentalized sequential models of action and counter-reaction for theoretical simplicity. [33] Instead, effective commanders will explore ways to exploit war’s unpredictability and non-linear nature to gain the strategic upper hand. [34] At an organizational level, systems are driven by the behavior of individuals who act based on their intentions, goals, perceptions, and calculations, an interaction that generates high complexity and unpredictability. [35]
Therefore, a broader interpretation of the OODA loop analogy is best viewed less as a general theory of war than as an attempt to depict the strategic behavior of decision-makers within the broader framework of complex adaptive organizational systems in a dynamic non-linear environment. [36] According to Clausewitz, general (linearized) theoretical laws used to explain the non-linear dynamics of war must be heuristic; each war is a “series of actions obeying its peculiar law”. [37] Colin Gray opined that Boyd’s OODA loop is a grand theory because the concept has an elegant simplicity, a vast application domain, and valuable insights about strategic essentials. [38]
Intelligent machines in non-linear chaotic warfare
Nonlinearity, chaos theory, complexity theory, and systems theory have been broadly applied to understand organizational behavior and the intra-organizational and inter-organizational dynamics associated with competition and war. [39] Charles Perrow proposed that organizations can be categorized as either simple/linear (stable, regular, and consistent) or complex/non-linear (unstable, irregular, and inconsistent) systems with tight or loose couplings. [40] Non-linear systems do not obey proportionality or additivity; they can behave erratically through disproportionally large or small outputs, exhibiting interactions in which the whole is not equal to the sum of the parts. [41] The real world has always contained an abundance of non-linear phenomena, for example, fluid turbulence, combustion, breaking or cracking, biological evolution, and biochemical reactions in living organisms. [42] Perrow’s idea of coupling and complexity is critical to understanding a system’s susceptibility to accidents and the speed, severity, and probability of failure. All things being equal, tightly coupled systems such as AI-ML algorithms react faster and more unpredictably, with extensive multi-layered interconnections, than loosely coupled systems such as universities—which, when everything runs smoothly, afford more time for response and recovery. [43]
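To make the non-proportionality and non-additivity point concrete, the short Python sketch below (an illustrative toy, not drawn from Perrow or Boyd) compares a linear response with a simple non-linear one: doubling the input doubles the linear output but quadruples the non-linear one, and the non-linear whole is not the sum of its parts.

```python
# Illustrative sketch: linearity vs. nonlinearity (hypothetical functions, not a model of any real system).

def linear_response(x: float) -> float:
    """A linear system: output is strictly proportional to input."""
    return 2.0 * x

def nonlinear_response(x: float) -> float:
    """A simple non-linear system: output grows with the square of the input."""
    return 2.0 * x ** 2

a, b = 3.0, 4.0

# Proportionality: doubling the input doubles the linear output but quadruples the non-linear one.
print(linear_response(2 * a), 2 * linear_response(a))            # 12.0 12.0 -> proportional
print(nonlinear_response(2 * a), 2 * nonlinear_response(a))      # 72.0 36.0 -> disproportionate

# Additivity: the whole equals the sum of the parts only in the linear case.
print(linear_response(a + b), linear_response(a) + linear_response(b))          # 14.0 14.0
print(nonlinear_response(a + b), nonlinear_response(a) + nonlinear_response(b))  # 98.0 50.0
```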
According to Perrow, the problem arises when a system is both complex and tightly coupled. Whereas complexity optimally requires a decentralized response, tight coupling suggests a centralized approach—to ensure a swift response and recovery from accidents before tightly coupled processes cause failure. In response to complex and tightly coupled systems, Perrow dismisses the efficacy of decentralizing decision-making at lower levels in an organization (i.e. commanders on the battlefield) because, in complex/tightly coupled systems, potential failures throughout the system are unforeseeable and highly contingent. [44] Perrow writes that alterations to any one component in a complex system “will either be impossible because some others will not cooperate or inconsequential because some others will be allowed more vigorous expression”. [45]
Similar dynamics (randomness, complex interactions, and long and intricate chains) can also be found in ecological systems. [46] Because of the intrinsic unpredictability of complex systems, evolutionary biology is—compared to conventional international relations (IR) theories—particularly amenable to understanding phenomena such as alliance alignments, the balance of power, signaling deterrence and resolve, assessments of actors’ intentions, and diplomatic and foreign policy mechanisms and processes. [47] In complex systems, Robert Jervis notes, “problems are seldom solved once and for all; initial policies no matter how well designed cannot be definitive; [machine generated] solutions will generate unexpected difficulties”. [48] Because of the interconnectedness of world politics (imposed on states by the structures of international relations and actors’ perceptions of others’ intentions and policy preferences), even low-intensity disputes can be strategically consequential. [49]
Automation is an asset when high information quality can be combined with precise and relatively predictable judgments. During conflict and crisis, however, where information quality is poor (i.e. information asymmetry) and judgment and prediction are challenging, decision-makers must balance these competing requirements. Clausewitz highlights the unpredictability of war—caused by interaction, friction, and chance—as both a manifestation of and a contributor to its nonlinearity. Clausewitz wrote, for instance, “war is not an exercise of the will directed at inanimate matter… or at matter which is animate but passive and yielding… In war, the will is directed at an animate object that reacts”—and thus the outcome of the action cannot be predicted. [50] Inanimate intelligent machines operating and reacting in human-centric (“animate”) environments exhibiting interaction, friction, and chance cannot predict, and thus control, outcomes. While human decision-making under these circumstances is far from perfect, the unique interaction of factors including, inter alia, psychology, social and cultural context (individual and organizational), ideology, emotion, experience (i.e. Boyd’s “orientation” concept), and luck gives humans a sporting chance to make clear-eyed political and ethical judgments in the chaos (“fog”) and nonlinearity (“friction”) of warfare. [51]
The computer revolution and recent AI-ML approaches have made militaries increasingly reliant on statistical probabilities and self-organizing algorithmic rules to solve complex problems in a non-linear world. According to Maxwell, analytical mathematical rules are not always reliable guides to the real world “where things never happen twice”. [52] AI-ML techniques (e.g. image recognition, pattern recognition, and natural language processing) inductively (inferring general rules from specific observations) fill gaps in missing information to identify patterns and trends, increasing the speed and accuracy of certain standardized military operations, including open-source intelligence collation, satellite navigation, and logistics. However, because these quantitative models are isolated from the broader external strategic environment (a world of probabilities rather than axiomatic certainties, characterized by Boyd’s “orientation”), human intervention remains critical to avoid distant analytical abstraction and causally deterministic predictions during non-linear chaotic war. Several scholars assume that the perceived first-mover benefits of AI-augmented war machines will create self-fulfilling spirals of security dilemma dynamics that will upend deterrence. [53]
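The following sketch illustrates, in deliberately simplified form, why such inductive pattern-matchers remain “isolated” in the sense used above: a toy nearest-centroid classifier (hypothetical data and labels, assumed purely for illustration) confidently assigns a label to an input unlike anything it has seen, because nothing in its statistical machinery represents the broader context it is missing.

```python
# Illustrative sketch (hypothetical data): an inductive pattern-matcher confidently classifies
# inputs far outside anything it was trained on, with no sense of the context it is missing.
import numpy as np

rng = np.random.default_rng(0)

# Training data: two well-separated clusters standing in for two known activity patterns.
class_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))   # hypothetical "routine" signatures
class_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))   # hypothetical "hostile" signatures
centroids = {"routine": class_a.mean(axis=0), "hostile": class_b.mean(axis=0)}

def classify(x):
    """Nearest-centroid rule: assign the label of the closest training cluster."""
    distances = {label: float(np.linalg.norm(x - c)) for label, c in centroids.items()}
    return min(distances, key=distances.get), distances

# An in-distribution input is handled sensibly...
print(classify(np.array([0.2, -0.1]))[0])     # -> "routine"

# ...but a genuinely novel input, unlike anything in the training data, still receives a
# confident label: the model has no way to say "this situation is outside my experience."
novel = np.array([50.0, -40.0])
label, dists = classify(novel)
print(label, {k: round(v, 1) for k, v in dists.items()})
```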
AI’s exacerbating the “noise” and “friction” of war
Because AI-ML predictions and judgments deteriorate where data is sparse (e.g. nuclear war) or of low quality (e.g. biased, politicized intelligence, or data poisoned or manipulated by mis-disinformation) [54], military strategy requires Clausewitzian human “genius” to navigate battlefield “fog” and the political, organizational, and informational (or “noise” in the system) “friction” of war. [55] For example, the lack of training data in the nuclear domain means that AI would depend on synthetic simulations to predict how adversaries might react during brinkmanship between two or more nuclear-armed states. [56] Nuclear deterrence is a nuanced perceptual dance of competition and manipulation between adversaries, “keeping the enemy guessing” by leaving something to chance. [57]
Therefore, datasets will need to reflect the broader strategic environment military decision-makers face, including the distinct doctrinal, organizational, and strategic cultural approaches of allies and adversaries. Even where situations closely mirror previous events, a dearth of empirical data to account for war’s contingent, chaotic, and random nature makes statistical probabilistic AI-ML reasoning a very blunt instrument. AI predicting and reacting to a priori novel situations will increase the risk of mismatch—between algorithmically optimized goals and the evolving strategic environment—and of misperception, heightening the risk of accidents (e.g. targeting errors or false alerts) and inadvertent catastrophe. In unpredictable and uncertain environments with imperfect information that require near-perfect confidence levels, simulations and synthetic datasets are technically limited. [58]
To cope with novel strategic situations and mitigate unintended consequences, human “genius”—the contextual understanding afforded by Boyd’s “orientation”—is needed to finesse multiple flexible, sequential, and resilient policy responses. Jervis writes that “good generals not only construct fine war plans but also understand that events will not conform to them”. [59] Unlike machines, humans use abductive reasoning (or inference to the best explanation) and introspection (or “metacognition” [60]) to think laterally and to adapt and innovate in novel situations. [61] Faced with uncertainty or a lack of knowledge and information, people adopt a heuristic approach—cognitive shortcuts or rules of thumb derived from experience, learning, and experimentation—promoting intuition and reasoning to solve complex problems. [62] While human intuitive heuristics often produce biases and cognitive blind spots, they also offer a very effective means to make quick judgments and decisions under stress in a priori situations. AI systems use heuristics derived from vast training datasets to make inferences that inform predictions; they lack, however, the human intuition that depends on experience and memory.
Recent empirical studies show that the ubiquity of “friction” in AI-ML systems, designed to reduce the “fog” of war and thus improve certainty, can create new accountability, security, and interoperability issues or exacerbate legacy ones, generating more “friction” and uncertainty. [63] In theory, there is ample potential for AI-ML tools (e.g. facial and speech recognition, AI-enhanced space satellite navigation, emotion prediction, and translation algorithms) to benefit military operations, where vast amounts of disparate information (i.e. “noise”), data, and metadata (which labels and describes data) are often incomplete, overlooked, or misdiagnosed. For example, reducing the data processing burden, monitoring multiple data feeds, and highlighting unexpected patterns for intelligence, surveillance, and reconnaissance (ISR) operations are integral to improving tactical and strategic command decision-making. [64]
Intelligence tasks involve ambiguity, deception, manipulation, and mis-disinformation, which require nuanced interpretation, creativity, and tactical flexibility. Like C2 reporting systems, intelligence operations are more of an art than a science, where human understanding of the shifting strategic landscape is critical in enabling commanders to predict and respond to unexpected events, changes in strategic objectives, or intelligence politicization—which will require updated data, thus rendering existing data redundant or misleading. Operational and planning tasks—that inform decision-making—cannot be delegated to AI-ML systems or used in human-machine teaming without commanders being fully mindful of the division of labor and the boundaries between human and machine control (i.e. humans in vs. out of the loop).
Human intervention is critical in deciding when and how changes to the algorithm’s configuration (e.g. the tasks it is charged with, the division of labor, and the data it is trained on) are needed as the strategic environment changes. Rather than complementing human operators, linear algorithms trained on static datasets will exacerbate the “noise” in non-linear, contingent, and dynamic scenarios such as tracking insurgents and terrorists or providing targeting information for armed drones and missile guidance systems. Some argue that AI systems designed to “lift the fog of war” might instead compound the friction within organizations with unintended consequences, particularly when disagreements, bureaucratic inertia, or controversy exist about war aims, procurement, civil-military relations, and the chain of command amongst allies.
“Rapid looping” and the dehumanization of warfare
AI-ML systems that excel at routine and narrow tasks and games (e.g. DeepMind’s AlphaStar in StarCraft II and DARPA’s AlphaDogfight trials) [65] with clearly defined, pre-determined parameters in relatively controlled, static, and isolated (i.e. there is no feedback) linear environments—such as logistics, finance and economics, and data collation—are found wanting when it comes to addressing politically and morally charged strategic questions in the non-linear world of C2 decision-making. [66] For what national security interests are we prepared to sacrifice soldiers’ lives? At what stage on the escalation ladder should a state sue for peace rather than escalate? When do the advantages of empathy and restraint trump coercion and the pursuit of power? At what point should actors step back from the brink of crisis bargaining? How should states respond to deterrence failure, and what if allies view things differently?
In high-intensity and dynamic combat environments such as densely populated urban warfare—even where well-specified goals and standard operating procedures exist—the latitude and adaptability of “mission command” remain critical, and the functional utility of AI-ML tools for even routine “task orders” (i.e. the opposite of “mission command”) remains problematic. [67] Because routine task orders such as standard operating procedures, doctrinal templates, explicit protocols, and logistics performed in dynamic combat settings still carry the potential for accidents and risk to life, commanders exhibiting initiative, flexibility, empathy, and creativity are needed. [68] Besides, the implicit communication, trust, and shared outlook that “mission command” instills across all levels make micro-management by senior commanders less necessary—that is, it permits tactical units to read their environment and respond within the overall framework of strategic goals defined by senior commanders—thus potentially speeding up the OODA cycle. Boyd writes, “the cycle time increases commensurate with an increase in the level of organization, as one tries to control more levels and issues… the faster rhythm of the lower levels must work within the larger and slower rhythm of the higher levels so that the overall system does not lose its cohesion or coherency” (emphasis added). [69]
War is not a game; it is intrinsically structurally unstable; an adversary rarely plays by the same rules and, to achieve victory, often attempts to change the existing rules or invent new ones. The diffusion of AI-ML is unlikely to assuage this ambiguity in a myopic and likely ephemeral quest, as many have noted, to speed up and compress the command-and-control OODA decision cycle—or Boyd’s “rapid looping.” Instead, policymakers risk being blind-sided by the potential tactical utility—where speed, scale, precision, and lethality coalesce to improve situational awareness—offered by AI-augmented capabilities, without sufficient regard for the potential strategic implications of artificially imposing non-human agents on the human endeavor of warfare. The appeal of “rapid looping” may persuade soldiers operating in high-stress environments with large amounts of data (or a “data tsunami”) to offload cognitive work onto AI tools, thus placing undue confidence and trust in machines—known as “automation bias”. [70] Recent studies show that the more cognitively demanding, time-pressured, and stressful a situation is, the more likely humans are to defer to machine judgments. [71]
NATO’s Supreme Allied Command Transformation is working with a team at Johns Hopkins University to develop an AI-enabled “digital triage assistant”—trained on injury datasets, casualty scoring systems, predictive modeling, and inputs of a patient’s condition—to attend to injured combatants and decide who should receive prioritized care during conflict and mass-casualty events (e.g. the Russian-Ukrainian conflict) where resources are limited. [72] On the one hand, AI-enabled digital assistants can make quick decisions in intense, complex, and fast-moving situations using algorithms and data (especially much-vaunted “big data” sources), arguably removing human biases, which could reduce human error—caused by cognitive bias, fatigue, and stress, for example—and potentially save lives. [73]
Critics are concerned about how these algorithms will cause some combatants (allies, adversaries, civilian conscripts, volunteers, etc.) to get prioritized for care over others. [74] If, for example, there was a large explosion and civilians were among the people harmed (e.g. the Kabul Airport bombing in 2021), would they get less priority, even if they were severely injured? Would soldiers defer to an algorithm’s judgment regardless of whether facts on the ground suggested otherwise during an intense situation? Further, if the algorithm plays a part in someone dying, who would be held responsible?
Mounting evidence of bias in AI datasets—such as healthcare algorithms prioritizing white patients over black patients for care—compounds these ethical conundrums and can perpetuate biased decision-making. [75] In contexts where judgments and decisions can directly (or indirectly) affect human safety, algorithmic designers cannot entirely remove unforeseen biases or prepare AI to cope with a priori situations. Besides, an optimized algorithm (even if human engineers determine its goals) [76] cannot encode the broad range of values and issues humans care about, such as empathy, ethics, compassion, and mercy—not to mention Clausewitzian courage, coup d’oeil, primordial emotion, violence, hatred, and enmity—which are critical for strategic thinking. In short, when human life is at stake, the ethical, trust, and moral bar for technology will always be higher than the bar we set for accident-prone humans.
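A minimal sketch of the mechanism at issue, using entirely hypothetical data and not modeled on the NATO/Johns Hopkins system or any real triage tool: a simple priority-scoring model fitted to historical records that systematically down-prioritized one group reproduces that bias for new casualties with identical injury severity.

```python
# Illustrative sketch (entirely hypothetical data; not any real triage system): a priority-scoring
# model fitted to historically biased records reproduces that bias for new patients.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

severity = rng.uniform(0, 10, n)      # clinical injury severity (0 = minor, 10 = critical)
group = rng.integers(0, 2, n)         # 0/1 group membership (e.g. two categories of casualty)

# Historical priority assignments: driven by severity, but group 1 was systematically down-prioritized.
historical_priority = 1.0 * severity - 2.0 * group + rng.normal(0, 0.3, n)

# Fit a simple linear scoring model to the historical records (ordinary least squares).
X = np.column_stack([severity, group, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, historical_priority, rcond=None)

def predicted_priority(sev, grp):
    return coef[0] * sev + coef[1] * grp + coef[2]

# Two new casualties with identical severity receive different priorities purely because of group.
print(round(predicted_priority(8.0, 0), 2))   # roughly 8.0
print(round(predicted_priority(8.0, 1), 2))   # roughly 6.0 -> learned bias, invisible unless audited
```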
Notwithstanding the many vexing ethical and moral questions about the intersection between people and algorithms [77], introducing non-human agents onto the modern battlefield in extremis risks atrophying the vital feedback—or Boyd’s “double-loop learning” of empathy, correlation, and rejection—in “mission command” between the tactical unit leaders who interpret and execute war plans and the generals who craft them—or the “strategic corporals” vs. “tactical generals” problem discussed below. In this symbiotic relationship, mismatches, miscommunication, or accidents would critically undermine the role of “genius” in mission command on the modern battlefield (combining AI-ML technology, asymmetric information and capabilities, and multi-domain operations), where it is in highest demand.
Butterfly effects, unintended consequences, and accidents
Even a well-running, optimized algorithm is vulnerable to adversarial attacks that may corrupt its data, embed biases, or exploit blind spots in a system’s architecture through novel tactics (or “going beyond the training set”), which the AI cannot predict and thus effectively counter. [78] When algorithms optimized to fulfill a specific goal are applied in unfamiliar domains (e.g. nuclear war) and contexts—or deployed inappropriately—false positives become possible, which could inadvertently spark escalatory spirals. [79] Other technical shortcomings of AI-ML systems (especially newer unsupervised models) tested in dynamic non-linear contexts such as autonomous vehicles include (1) algorithmic inaccuracies; (2) misclassification of data and anomalies in data inputs and behavior; (3) vulnerability to adversarial manipulation (e.g. data-poisoning, false flags, or spoofing); and (4) erratic behavior and ambiguous decision-making in new and novel interactions. [80] These problems are structural, not bugs that can be patched or easily circumvented.
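The adversarial-manipulation point can be illustrated with a toy linear classifier (hypothetical weights and inputs, not any fielded system): a small perturbation crafted from knowledge of the model’s weights, in the style of the fast gradient sign method, flips the classification even though the input changes only slightly.

```python
# Illustrative sketch (toy linear classifier, hypothetical numbers): a targeted perturbation
# crafted from the model's own weights flips its decision despite a small change to the input.
import numpy as np

# A "trained" linear classifier: score = w.x + b; score >= 0 -> "threat", score < 0 -> "no threat".
w = np.array([2.0, -1.5, 1.0, 2.5])
b = -1.0

def classify(x):
    score = float(w @ x + b)
    return ("threat" if score >= 0 else "no threat"), round(score, 3)

x = np.array([0.3, 0.8, 0.1, 0.2])           # a benign input, correctly scored below the threshold
print(classify(x))                            # ('no threat', -1.0)

# FGSM-style attack: nudge each feature slightly in the direction that raises the score.
epsilon = 0.2
x_adv = x + epsilon * np.sign(w)

print(classify(x_adv))                        # ('threat', 0.4) -> decision flipped
print(round(float(np.abs(x_adv - x).max()), 3))   # maximum change per feature: 0.2
```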
In war, much as in other domains such as economics and politics, there is a new problem for every solution that AI (or social scientists) can conceive. Thus, algorithmic recommendations that look technically correct and inductively sound may have unintended consequences unless they are accompanied by novel strategies from policymakers who are (in theory) psychologically and politically prepared to cope with these consequences with flexibility, resilience, and creativity—or the notion of “genius” in mission command discussed below. Chaos and complexity theories can help to explain the potential impact of the interaction of algorithms with the real world. Specific interactions in physics, biology, and chemistry, for instance, do not produce properties that are simply the sum of their parts; instead, non-additive phenomena emerge whose values are not equal to the sum of the values of the parts. [81] Combining two medical treatments, for example, can produce considerably more than a double-dose effect in the patient, or even a single but unexpected outcome. Recent research has found that computers cannot fully capture the behavior of complex real-world chaotic dynamical systems such as the climate. [82]
The coalescence of multiple (supervised, unsupervised, reinforcement and deep learning, etc.), complex (military and civilian datasets), and tightly coupled and compressed (convolutional neural networks, artificial neural networks, Bayesian networks, etc.) ML algorithms, sensitive to small changes in the non-linear real world’s initial conditions, could generate a vast amount of stochastic behavior (or “butterfly” effects), thus increasing the risk of unintended consequences and accidents. Containing these risks therefore requires an improved technical understanding of how these systems interact (extracting patterns from the world) and connect with the real world (the “explainability” problem) [83], as well as a consensus between militaries (allies/partners and adversaries) and other stakeholders on how AI-ML might be normatively aligned with human values, ethics, and goals. This synthesis will require robust boundaries and controls that delineate the anomaly-generating noisy data of the virtual world from the friction and chaos of war. [84]
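The sensitivity to initial conditions invoked here (the “butterfly effect”) can be shown with the simplest of chaotic toy models, the logistic map; the sketch below is purely illustrative and does not represent any military system: two trajectories that start one part in a billion apart become completely uncorrelated within a few dozen steps, even though the governing rule is simple and fully known.

```python
# Illustrative sketch: sensitive dependence on initial conditions in the chaotic logistic map.

def logistic_trajectory(x0: float, r: float = 3.99, steps: int = 60) -> list:
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)   # perturbed by one part in a billion

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.2e}")
# The gap grows from ~1e-9 to order 1 within a few dozen steps: long-range prediction
# fails even though the deterministic rule generating the behavior is fully known.
```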
Conceptually speaking, boundaries might be placed between AI analyzing and synthesizing (prediction) data that inform humans who decide (judgment); for example, through recruitment, the use of simulations and wargaming exercises, and training combatants, contractors, algorithm engineers, and policymakers in human-machine teaming. However, the confluence of several factors will probably blur these boundaries along the human-machine decision-making continuum (see Figure 3): cognitive (automation bias, cognitive offloading, and anthropomorphizing) [85]; organizational (intelligence politicization, bureaucratic inertia, unstable civil-military relations) [86]; geopolitical (first-mover pressures, security dilemma dynamics) [87]; and divergent attitudes (of both allies and adversaries) to military ethics, risk, deterrence, escalation, and misaligned algorithms. [88]
Figure 3. The human-machine decision-making continuum.
AI’s isolation from the broader external strategic environment (i.e. the political, ethical, cultural, and organizational contexts depicted in Boyd’s “real OODA loop”) is no substitute for human judgment and decisions in chaotic and non-linear situations. Even in situations where algorithms are functionally aligned with human decision-makers—that is, with knowledge of crucial human decision-making attributes—human-machine teaming risks diminishing the role of human “genius” where it is in high demand, leaving commanders less psychologically, ethically, and politically prepared to respond to nonlinearity, uncertainty, and chaos with flexibility, creativity, and adaptability. Because of the non-binary nature of tactical and strategic decision-making—tactical decisions are not made in a vacuum and invariably have strategic effects—using AI-enabled digital devices to complement human decisions will have strategic consequences that increase the importance of human involvement in these tasks.
Does a human decision to use military force derived from data mined, synthesized, and interpreted by AI-ML algorithms possess much more human agency than a decision executed fully autonomously by a machine? AI researcher Stuart Russell argues that by conditioning what data is presented to human decision-makers and how – without disclosing what has been omitted or rejected – AI possesses “power” over humans’ “cognitive intake”. [89] This “power” runs in opposition to one of AI-ML’s most touted benefits: reducing the cognitive load on humans in high-stress environments.
In a recent Joint All-Domain Command and Control (JADC2) report, the US Department of Defense (DoD) proposed integrating AI-ML technology into C2 capabilities across all domains to exploit AI-enhanced remote sensors, intelligence assets, and open sources to “sense and integrate” (i.e. Boyd’s “observe” and “orientate”) information and to “make sense” of (i.e. analyze, synthesize, and predict) the strategic environment, so the “decision cycle” operates “faster relative to adversary abilities”. [90] The JADC2 report obfuscates the strategic implications of automating the OODA loop for tactical gains and the illusory clarity of certainty. In a similar vein, researchers in China’s PLA Daily (the official newspaper of China’s People’s Liberation Army) argue that advances in AI technology will automate the OODA loop for command decision-making of autonomous weapons and drive the broader trend toward machines replacing human observation, judgment, prediction, and action. [91] While the PLA Daily authors stress the importance of training and human-machine “interfacing,” like the DoD’s JADC2 report, they also omit consideration of the strategic implications of this technologically determined “profound change”. [92]
Static, pre-defined, and isolated algorithms are not the answer to the quintessentially non-linear, chaotic, and analytically unpredictable nature of war. Therefore, an undue focus on speed and decisive tactical outcomes (or completing the “kill-chain”) in the decision-making loop underplays AI’s influence in command-and-control decision-making activities across the full spectrum of operations and domains. As the US-led Multinational Capability Development Campaign (MCDC) notes: “Whatever our C2 models, systems, and behaviors of the future will look like, they must not be linear, deterministic and static. They must be agile, autonomously self-adaptive and self-regulating”.[93]
AI-empowered “strategic corporals” vs. “tactical generals”
US Marine Corps Gen. Charles Krulak coined the term “strategic corporal” to describe the strategic implications that flow from the increasing responsibilities and pressures placed on small-unit tactical leaders by rapid technological diffusion and the resulting operational complexity in modern warfare that followed the information revolution-based revolution in military affairs of the late 1990s. [94] Krulak argues that recruitment, training, and mentorship will empower junior officers to exercise judgment, leadership, and restraint to become effective “strategic corporals.”
On the digitized battlefield, tactical leaders will need to judge the reliability of AI-ML predictions, determine the ethical and moral veracity of algorithmic outputs, and judge in real time whether, why, and to what degree AI systems should be recalibrated to reflect changes to human-machine teaming and the broader strategic environment. “Strategic corporals” will need to become military, political, and technological “geniuses.” While junior officers have displayed practical bottom-up creativity and innovation in using technology in the past, the new multi-directional pressures from AI systems are unlikely to be resolved by training and recruiting practices. Instead, pressures to decide in high-intensity, fast-moving, data-centric, multi-domain human-machine teaming environments might undermine the critical role of “mission command,” which connects tactical leaders with the political-strategic leadership—namely, the decentralized, lateral, and two-way (explicit and implicit) communication between senior command and tactical units. Despite the laudable efforts of the DoD to instill in tactical leaders the “tenets of mission command” to leverage AI-ML in joint force multi-domain environments, the technical and cognitive burden on subordinate commanders—singularly tasked with completing the entire OODA loop—will probably be too great. [95]
Under tactical pressures to compress decision-making, reduce “friction,” and speed up the OODA loop, tactical leaders may make unauthorized modifications to algorithms (e.g. reconfiguring human-machine teaming, ignoring AI-ML recommendations, [96] or launching cross-domain countermeasures in response to adversarial attacks) that put them in direct conflict with other parts of the organization or contradict the strategic objectives of political leaders. According to Boyd, the breakdown of the implicit communication and bonds of trust that define “mission command” will produce “confusion and disorder, which impedes vigorous or directed activity, hence, magnifies friction or entropy”—precisely the outcome that the empowerment of small-unit tactical leaders on the twenty-first-century battlefield was intended to prevent. [97] Adversarial attacks (e.g. electronic warfare jammers and cyber-attacks) on the electronic communications that advanced militaries rely on may also precipitate this breakdown, forcing tactical leaders to fend for themselves. [98] In 2009, for example, Shiite militia fighters used off-the-shelf software to hack into unsecured video feeds from US Predator drones. Militarily advanced near-peer adversaries such as China and Russia have well-equipped electronic warfare (EW) units for offensive jamming (or triangulating emission sources for bombardment), sophisticated hackers, and drones to target precision weapons. [99] To avoid this outcome, Boyd advocates a highly decentralized hierarchical structure that allows tactical commanders the initiative while insisting that senior command resist the temptation to become overly cognitively invested in, and interfere with, tactical decisions. [100]
On the other side of the command spectrum, AI-ML-augmented ISR, autonomous weapons, and real-time situational awareness might produce a juxtaposed yet contingent phenomenon: the rise of “tactical generals”. [101] As senior commanders gain unprecedented access to tactical information, the temptation to micro-manage and directly intervene in tactical decisions from afar will rise. Who understands the commander’s intent better than the generals themselves? While AI-ML enhancements can certainly help senior commanders become better informed and take personal responsibility for high-intensity situations as they unfold, the line between timely intervention and obsessive micro-management is a fine one. By centralizing the decision-making process (contrary to both Boyd’s guidance and the notion of “strategic corporals”) and creating a new breed of “tactical generals” micro-managing theater commanders from afar, AI-ML enhancements might compound the pressures placed on tactical unit leaders to speed up the OODA loop and to become military, political, and technological “geniuses.” This dynamic may increase uncertainty and confusion and amplify friction and entropy.
Micromanaging or taking control of tactical decision-making could also mean that young officers lack experience in crafting complex tactical solutions in the field, which might cause confusion or misperceptions in the event communications are compromised and the “genius” of “strategic corporals” is demanded. Examples of such tasks include interpreting sensor data (e.g. from warning systems, electronic support measures, and optronic sensors) from different platforms, monitoring intelligence (e.g. open-source data from geospatial and social media sources), and using historical data (e.g. data on the thematic bases for producing temporal geo-spatialization of an activity) relating to the strategic environment of previous operations. [102]
Then-US Army Chief of Staff General Mark Milley stated that the Army is “overly centralized, overly bureaucratic, and overly risk averse, which is the opposite of what we’re going to need in any type of warfare”. [103] Milley stressed the need to decentralize leadership, upend the Army’s deep culture of micromanagement, empower junior officers, and thus strengthen mission command. Further, a breakdown in lateral relations and communication across the command chain would also risk diminishing the crucial tactical insights that inform and shape strategizing, potentially impairing political and military leadership, undermining civil-military relations, and causing strategic-tactical mismatches, which during a crisis may spark inadvertent escalation.
Tactical units operating in the field, using cloud computing technology (or the “tactical cloud”) coupled with other AI-enabling sensors and effectors to ensure interoperability, resilience, and digital security, and operating close to combat situations in rugged, urban, and complex terrain – where logistics and communications lines are put under intense stress – would be well placed to guide and inform strategic decision-making. Thus, senior commanders would indubitably benefit from the speed and richness of two-way information flows as the “tactical cloud” matures; for example, information on topography, the civilian environment, and 3D images of the combat distribution of airspace volumes; complementing C2 strike and reconnaissance tracking and targeting for drone swarming and missile strike systems; and verifying open-source intelligence and debunking mis-disinformation. [104]
Absent the verification and supervision of machine decisions by tactical units (e.g. regarding the troop movements of friendly and enemy forces), if an AI system gave the green light to an operation in a fast-moving combat scenario—when closer examination of algorithmic inputs and outputs is tactically costly—false positives from automated warning systems, mis-disinformation, or an adversarial attack would have dire consequences. [105] Tactical units executing orders received from brigade headquarters—assuming a concomitant erosion of two-way communication flows—may not only diminish the provenance and fidelity of information received by senior commanders deliberating from their ivory towers but also result in unit leaders following orders blindly and eschewing moral, ethical, or even legal concerns. [106] Psychology research has found that humans perform poorly at setting objectives and are predisposed to harm others if ordered by an “expert” (or “trust and forget”). [107] As a corollary, human operators may view AI systems as agents of authority (i.e. more intelligent and more authoritative than humans) and thus be more inclined to follow their recommendations blindly, even in the face of information (e.g. that debunks mis-disinformation) showing they would be wiser not to.
Whether the rise of “tactical generals” complements, subsumes, or conflicts with Krulak’s vision of the twenty-first-century “strategic corporals,” and the impact of this interaction on battlefield decision-making, are open questions. The literature on systems, complexity, and nonlinearity suggests that the coalescence of centralized, hierarchical structures, tightly coupled systems, and the complexity generated by this paradigm would make accidents more likely to occur and much harder to anticipate. [108] The erosion of “mission command” this phenomenon augurs would dramatically reduce the prospects of officers down the chain of command overriding or pushing back against top-down tactical decisions; for example, in 1983, a Soviet Air Defence Forces lieutenant colonel, Stanislav Petrov, dismissed as a false alarm a missile-attack warning from an automated early-warning system that mistook sunlight reflecting off clouds for inbound ballistic missiles. [109] AI-ML data synthesis and analysis capacity can enhance the prediction (or “observation”) element of the decision-making process. However, it still requires commanders at all levels to recognize the strengths and limitations of AI intuition, reasoning, and foresight. [110] Professional military education and training will be critical in tightly integrated, non-linear, and interdependent tasks where the notion of handoffs between machines and humans becomes incongruous.
Conclusion
Whether or when AI will continue to trail, match, or surpass human intelligence and, in turn, complement or replace humans in decision-making is a necessarily speculative endeavor, but a vital issue to analyze deductively nonetheless. That is, to develop robust, theory-driven hypotheses to guide analysis of the empirical data about the impact of the diffusion and synthesis of “narrow” AI technology in command-and-control decision-making processes and structures. This endeavor has a clear pedagogical utility, guiding future military professional education and training in the need to balance data and intuition as human-machine teaming matures. A policy-centric utility includes adapting militaries (doctrine, civil-military relations, and innovation) to the changing character of war and to the likely effects of AI-enabled capabilities – which already exist or are being developed – on war’s enduring chaotic, non-linear, and uncertain nature. Absent fundamental changes to our understanding of the impact (cognitive, organizational, and technical) of AI on the human-machine relationship, we risk not only failing to harness AI’s transformative potential but, more dangerously, misaligning AI capabilities with human values, ethics, and norms of warfare in ways that spark unintended strategic consequences. Future research would be welcome on how to optimize C2 processes and structures to cope with evolving human-machine relations and on how to adapt military education to prepare commanders (at all levels) for this paradigm shift.
The article argued that static, pre-defined, and isolated AI-ML algorithms are not the answer to war’s non-linear, chaotic, and analytically unpredictable nature. Therefore, an undue focus on speed and tactical outcomes to complete the decision-making loop (or Boyd’s “rapid OODA looping”) understates the permeation of AI in command-and-control decision-making across the full spectrum of operations and domains. Insights from Boyd’s “orientation” schemata – particularly as they relate to human cognition in command decisions and the broader strategic environment – coupled with systems, chaos, and complexity theories still have a pedagogical utility in understanding these dynamics. The article argued that a) speeding up warfare and compressing decision-making (or automating the OODA loop) will have strategic implications that cannot be easily anticipated (by humans or AI) or contained; b) the line marking the handoff from humans to machines will become increasingly blurred; and thus, c) the synthesis and diffusion of AI into the military decision-making process increases the importance of human agency across the entire chain of command.
While the article agrees with the conventional wisdom that AI already has and will continue to have a transformative impact on warfare, it finds fault with the prevailing focus of militaries on harnessing the tactical potential of AI-enabled capabilities in the pursuit of speed and rapid decision-making. This undue focus risks blind-siding policymakers without sufficient regard for the potential strategic implications of artificially and inappropriately imposing non-human agents on the fundamentally human nature of warfare. Misunderstanding the human-machine relationship during fast-moving, dynamic, complex battlefield scenarios will likely undermine the critical symbiosis between senior commanders and tactical units (or “mission command”), which increases the risk of mismatches, accidents, and inadvertent escalation. In extremis, the rise of “tactical generals” (empowered with AI tools making tactical decisions from afar) and the concomitant atrophy of “strategic corporals” (junior officers exercising judgment, leadership, and restraint) will create highly centralized and tightly coupled systems that make accidents more probable and less predictable. Paradoxically, therefore, AI used to complement and support humans in decision-making might obviate the role of human “genius” in mission command when it is most demanded.
Although the article focused on “narrow” task-specific AI, the prospects of AGI—aside from the hype surrounding what a “third wave” of genuine intelligence might look like, whether it is even technically possible, and broader debates about “intelligence” and machine consciousness—are confounding. Conceptually speaking, AGI systems could complete the entire OODA loop without human intervention and, depending on the goals humans program them to optimize, would be able to out-fox, manipulate, deceive, and overwhelm any human adversary. Pursuing these goals at all costs and absent an off switch (which adversaries would presumably seek to activate), AGI would be unlikely to attach much importance to the various ethical and moral features of warfare that humans care about most. [111] In imagined future wars between rival AIs that define their own objectives and possess a sense of existential threat to their survival, the role of humans in warfare—aside from suffering the physical and virtual consequences of dehumanized autonomous hyper-war—is unclear. In this scenario, “strategic corporals” and “tactical generals” would become obsolete, and machine “genius”—however that might look—would change Clausewitz’s nature of war.
[1] Boden, Margaret A. 2016. AI: Its Nature and Future. Oxford: Oxford University Press. [Google Scholar]
Vernon, David. 2014. Artificial Cognitive Systems: A Primer. Cambridge, MA: MIT Press. [Google Scholar]
Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. [Google Scholar]
Cantwell Smith, Brian. 2019. The Promise of Artificial Intelligence: Reckoning and Judgment. Cambridge: Massachusetts Institute of Technology Press. [Crossref], [Google Scholar]
There are three different types of AI: artificial “narrow” intelligence (ANI), artificial general intelligence (AGI) that matches human levels of intelligence, and artificial superintelligence (ASI) (or “superintelligence”) that exceeds human intelligence. The debate about when and whether AGI (let alone ASI) will emerge is highly contested and thus inconclusive. This article focuses on task-specific “narrow AI” (or “weak AI”), which is rapidly diffusing and maturing (e.g. facial recognition, natural language processing, navigation, and digital assistants such as Siri and Google Assistant).
[2] Sejnowski, Terrence J. 2018. The Deep Learning Revolution. Cambridge: Massachusetts Institute of Technology Press. [Google Scholar]
Domingos, Pedro. 2012. “A Few Useful Things to Know about Machine Learning.” Communications of the ACM 55 (10): 78–87. doi:https://doi.org/10.1145/2347736.2347755. [Crossref], [Web of Science ®], [Google Scholar]
Russell, Stuart, and Peter Norvig. 2014. Artificial Intelligence: A Modern Approach. 3rd ed. Harlow: Pearson Education. [Google Scholar]
[3] Machine learning (ML) – closely associated with “narrow AI” – is a widely used form of statistical prediction that applies vast training datasets (e.g. metadata and Big Data) to inductively generate missing information and improve the quantity, accuracy, complexity, and speed of predictions. There are three main machine learning approaches – supervised, unsupervised, and reinforcement – differentiated by the type of feedback that contributes to the algorithm’s learning process: (1) in “supervised” learning, an algorithm is trained to produce hypotheses or take specific actions in pursuit of pre-determined objectives or outputs based on specific inputs (or labels) (e.g. image, text, and speech recognition); (2) in “unsupervised” learning (including neural networks and probabilistic methods), the algorithm has no set parameters or labels; instead, the system learns by finding patterns in the data (e.g. protein folding and DNA sequencing); (3) in “reinforcement” learning, the system uses feedback loops that reinforce the algorithm’s learning through a reward-and-punishment process (e.g. autonomous vehicles and robotics) to optimize the overall “reward.”
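To make the three paradigms in this note concrete, the following minimal Python sketch (an illustrative assumption using synthetic data and off-the-shelf scikit-learn estimators, not drawn from the article or from any military system) contrasts a supervised classifier, an unsupervised clustering step, and a toy reinforcement-learning update:

```python
import numpy as np
from sklearn.cluster import KMeans                    # unsupervised example
from sklearn.linear_model import LogisticRegression   # supervised example

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # toy feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # toy labels

# (1) Supervised: learn a mapping from labeled inputs to outputs.
clf = LogisticRegression().fit(X, y)
print("supervised training accuracy:", clf.score(X, y))

# (2) Unsupervised: find structure (here, two clusters) without any labels.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))

# (3) Reinforcement: learn from a scalar reward via trial and error (tabular Q-learning).
Q = np.zeros((5, 2))             # 5 states x 2 actions
alpha, gamma = 0.1, 0.9          # learning rate, discount factor
for _ in range(1000):
    s, a = rng.integers(5), rng.integers(2)
    reward = 1.0 if (s % 2 == a) else -1.0   # toy reward signal
    s_next = rng.integers(5)
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
print("learned action values:\n", Q)
```

The relevant contrast is the feedback signal each paradigm relies on: explicit labels in (1), none in (2), and only a scalar reward in (3).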
[4] Boulanin, Vincent, ed. 2020. Artificial Intelligence, Strategic Stability and Nuclear Risk. Stockholm: SIPRI Publications, June. [Google Scholar]
Many AI-enabled weapons have already been deployed, made operational, or are under development by militaries. Examples include: Israel’s autonomous Harpy loitering munition; China’s “intelligentized” cruise missiles, AI-enhanced cyber capabilities, and AI-augmented hypersonic weapons; Russia’s armed and unarmed autonomous unmanned vehicles and robotics; and the US “loyal wingman” human-machine teaming program (an unmanned F-16 paired with a manned F-35 or F-22), intelligence, surveillance, and reconnaissance (ISR) space-based systems, and various AI-ML-infused command and control support systems.
[5] Grauer, Ryan. 2016. Commanding Military Power: Organizing for Victory and Defeat on the Battlefield. Cambridge: Cambridge University Press. [Crossref], [Google Scholar]
Beyerchen, Alan. 1992-1993. “Clausewitz, Nonlinearity, and the Unpredictability of War.” International Security 17 (3): 74–75. doi:https://doi.org/10.2307/2539130. [Crossref], [Google Scholar]
Biddle, Stephen. 2004. Military Power: Explaining Victory and Defeat in Modern Battle. Princeton, NJ: Princeton University Press. [Crossref], [Google Scholar]
King, Anthony. 2019. Command: The Twenty-First-Century General. Cambridge: Cambridge University Press. [Crossref], [Google Scholar]
[6] Kahneman, Daniel, Paul Slovic, and Amos Tversky, (edited by). 1982. Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press. [Crossref], [Google Scholar]
Baron, Jonathan. 2008. Thinking and Deciding. 4th ed. Cambridge: Cambridge University Press. [Google Scholar]
Cialdini, Robert B. 2006. Influence: The Psychology of Persuasion. Rev. ed. New York: Harper Business. [Google Scholar]
[7] Jantsch, Erich. 1980. The Self-Organizing Universe, Scientific and Human Implications of the Emerging Paradigm of Evolution. Oxford: Pergamon Press. [Google Scholar]
Prigogine, Ilya, and Isabelle Stengers. 1984. Order out of Chaos. London: Penguin Random House. [Google Scholar]
Lissack, Michael. “Complexity: The Science, Its Vocabulary, and Its Relation to Organizations.” Emergence 1 (1): 110–126. [Google Scholar]
Perrow, Charles. 1999. Normal Accidents. Princeton: Princeton University Press. [Google Scholar]
Jervis, Robert. 1997. System Effects, Complexity in Political and Social Life. Princeton: Princeton University Press. [Google Scholar]
Czerwinski, Thomas J. 1999. Coping with the Bounds: Speculations on Nonlinearity in Military Affairs. Washington, DC: National Defense University Press. [Google Scholar]
[8] Raska, Michael. 2021. “The Sixth RMA Wave: Disruption in Military Affairs?” Journal of Strategic Studies 44 (4): 456–479. doi:https://doi.org/10.1080/01402390.2020.1848818. [Taylor & Francis Online], [Web of Science ®], [Google Scholar]
Talmadge, Caitlin. 2019. “Emerging Technology and Intra-War Escalation Risks: Evidence from the Cold War, Implications for Today.” Journal of Strategic Studies 42 (6): 864–887. doi:https://doi.org/10.1080/01402390.2019.1631811. [Taylor & Francis Online], [Web of Science ®], [Google Scholar]
Goldfarb, Avi, and Jon Lindsay. 2022. “Prediction and Judgment, Why Artificial Intelligence Increases the Importance of Humans in War.” International Security 46 (3): 7–50. doi:https://doi.org/10.1162/isec_a_00425. [Crossref], [Google Scholar]
[9] Agrawal, Ajay, Joshua Gans, and Avi Goldfarb. 2018. Prediction Machines: The Simple Economics of Artificial Intelligence. Cambridge, Mass.: Harvard Business Review Press. [Google Scholar]
Furman, Jason, and Robert Seamans. 2018. “AI and the Economy.” In Innovation Policy and the Economy, edited by Lerner, Josh and Scott Stern, 161–191. Vol. 19. Chicago: University of Chicago Press. [Google Scholar]
[10] Johnson, James. 2021. Artificial Intelligence & the Future of Warfare: USA, China, and Strategic Stability. Manchester: Manchester University Press. [Crossref], [Google Scholar]
[11] Horowitz, Michael C. 2010. The Diffusion of Military Power: Causes and Consequences for International Politics. Princeton, NJ: Princeton University Press. [Google Scholar]
Rosen, Stephen P. 2010. “The Impact of the Office of Net Assessment on the American Military in the Matter of the Revolution in Military Affairs.” Journal of Strategic Studies 33 (4): 469–482. doi:https://doi.org/10.1080/01402390.2010.489704. [Taylor & Francis Online], [Google Scholar]
[12] Hammond, Grant T. 2013. “Reflections on the Legacy of John Boyd.” Contemporary Security Policy 34 (3): 600–602. doi:https://doi.org/10.1080/13523260.2013.842297. [Taylor & Francis Online], [Google Scholar]
Osinga, Frans. 2013. “‘Getting’ A Discourse on Winning and Losing: A Primer on Boyd’s ‘Theory of Intellectual Evolution.’” Contemporary Security Policy 34 (3): 603–624. doi:https://doi.org/10.1080/13523260.2013.849154. [Taylor & Francis Online], [Google Scholar]
[13] Hasik, James. 2013. “Beyond the Briefing: Theoretical and Practical Problems in the Works and Legacy of John Boyd.” Contemporary Security Policy 34 (3, December): 583–599. doi:https://doi.org/10.1080/13523260.2013.839257. [Taylor & Francis Online], [Google Scholar]
Storr, Jim. 2001. “Neither Art nor Science – Towards a Discipline of Warfare.” RUSI Journal 146 (2, April): 39–45. doi:https://doi.org/10.1080/03071840108446627. [Taylor & Francis Online], [Google Scholar]
[14] Osinga, Frans. 2013. “‘Getting’ A Discourse on Winning and Losing: A Primer on Boyd’s ‘Theory of Intellectual Evolution.’” Contemporary Security Policy 34 (3): 603–624. doi:https://doi.org/10.1080/13523260.2013.849154. [Taylor & Francis Online], [Google Scholar]
[15] Kahn, Herman. 1965. On Escalation: Metaphors and Scenarios. New York: Praeger. [Google Scholar]
[16] Boyd, John, Patterns of Conflict (unpublished presentation, draft version, 1982). [Google Scholar]
[17] Osinga, Frans. 2013. “‘Getting’ A Discourse on Winning and Losing: A Primer on Boyd’s ‘Theory of Intellectual Evolution.’” Contemporary Security Policy 34 (3): 603–624. doi:https://doi.org/10.1080/13523260.2013.849154. [Taylor & Francis Online], [Google Scholar]
[18] Id.
[19] Boyd, John, The Essence of Winning and Losing (Unpublished presentation, 1995). [Google Scholar]
[20] Id.
[21] Conant, James Bryant. 1964. Two Modes of Thought. New York: Trident Press. [Google Scholar]
[22] Popper, Karl. 1968. The Logic of Scientific Discovery. New York. [Google Scholar]
[23] Byrne, David. 1998. Complexity Theory and the Social Sciences, an Introduction. London: Routledge. [Google Scholar]
Durham, Susanne. 2012. Chaos Theory For The Practical Military Mind. New York: Biblioscholar. [Google Scholar]
[24] Gardner, Howard. 1985. The Mind’s New Science, A History of the Cognitive Revolution. New York: Basic Books. [Google Scholar]
von Neumann, John. 1958. The Computer and the Brain. New Haven, CT: Yale University Press. [Google Scholar]
Wiener, Norbert. 1967. The Human Use of Human Beings: Cybernetics and Society. New York: Avon Books. [Google Scholar]
Ryle, Gilbert. 1966. The Concept of Mind. London: Hutchinson. [Google Scholar]
[25] Boyd, John, Organic Design for Command and Control (Unpublished presentation, 1987). [Google Scholar]
[26] Id.
[27] Boyd, John, Organic Design for Command and Control (Unpublished presentation, 1987). [Google Scholar]
[28] Kissinger, Henry A., Eric Schmidt, and Daniel Huttenlocher. 2021. The Age of AI and Our Human Future. London: John Murray. [Google Scholar]
Trinkunas, Harold, Herbert Lin, and Benjamin Loehrke. 2020. Three Tweets to Midnight: Effects of the Global Information Ecosystem on the Risk of Nuclear Conflict. Stanford, CA: Hoover Institution Press. [Google Scholar]
[29] Clerk Maxwell, James. 1969. “Science and Free Will.” In Lewis Campbell and William Garnett, The Life of James Clerk Maxwell. New York: Johnson Reprint Corporation. [Google Scholar]
[30] However, some scholars contend that learning about complexity and uncertainty from past events yields ambiguous lessons.
[31] Roche, James, and Barry Watts. 1991. “Choosing Analytic Measures.” Journal of Strategic Studies 13 (2): 165–209. doi:https://doi.org/10.1080/01402399108437447. [Taylor & Francis Online], [Google Scholar]
Overy, Richard. 1981. The Air War 1939-1945. New York: Potomac Books. [Google Scholar]
[32] Roche, James, and Barry Watts. 1991. “Choosing Analytic Measures.” Journal of Strategic Studies 13 (2): 165–209. doi:https://doi.org/10.1080/01402399108437447. [Taylor & Francis Online], [Google Scholar]
Overy, Richard. 1981. The Air War 1939-1945. New York: Potomac Books. [Google Scholar]
[33] Paret, Peter. 1985. Clausewitz, and the State: The Man, His Theories and His Times. Princeton: Princeton University Press. [Crossref], [Google Scholar]
In contrast with Clausewitz’s non-linear approach, other classical military theorists, such as Antoine-Henri de Jomini and Heinrich von Bülow, adopted implicitly linear approaches and selectively manipulated empirical evidence.
[34] Beyerchen, Alan. 1992-1993. “Clausewitz, Nonlinearity, and the Unpredictability of War.” International Security 17 (3): 74–75. doi:https://doi.org/10.2307/2539130. [Crossref], [Google Scholar]
[35] Jervis, Robert. 1997. System Effects, Complexity in Political and Social Life. Princeton: Princeton University Press. [Google Scholar]
[36] Dooley, Kevin. 1996. “A Nominal Definition of Complex Adaptive Systems.” Chaos Network 8 (1): 2–3. [Google Scholar]
Complex Adaptive System (CAS) is a concept that refers to a group of semi-autonomous agents that interact in interdependent ways to produce system-wide patterns, such that those patterns then influence the behavior of the agents. In human systems at all scales, patterns emerge from the agents’ interactions within the system.
[37] Von Clausewitz, Carl. 1976. On War, ed. Howard, Michael and Peter Paret. Princeton: Princeton University Press. [Crossref], [Google Scholar]
[38] Gray, Colin. 1999. Modern Strategy. London: Oxford University Press. [Google Scholar]
[39] Jervis, Robert. 1997. System Effects, Complexity in Political and Social Life. Princeton: Princeton University Press. [Google Scholar]
Perrow, Charles. 1999. Normal Accidents. Princeton: Princeton University Press. [Google Scholar]
[40] Id.
[41] Beyerchen, Alan. 1992-1993. “Clausewitz, Nonlinearity, and the Unpredictability of War.” International Security 17 (3): 74–75. doi:https://doi.org/10.2307/2539130. [Crossref], [Google Scholar]
[42] Campbell, David. 1987. “Nonlinear Science: From Paradigms to Practicalities.” Los Alamos Science 15: 218–262. [Google Scholar]
[43] Perrow, Charles. 1999. Normal Accidents. Princeton: Princeton University Press. [Google Scholar]
[44] Id.
[45] Id.
[46] Ehrenfeld, David. 1991. “The Management of Diversity: A Conservation Paradox.” In Ecology, Economics, Ethics: The Broken Circle, edited by Bormann, F. Herbert and Stephen Kellert, 26–39. New Haven: Yale University Press. [Google Scholar]
[47] Bernstein, Steven, Richard Ned Lebow, Janice Gross Stein, and Steven Weber. 2000. “God Gave Physics the Easy Problems: Adapting Social Science to an Unpredictable World.” European Journal of International Relations 6 (1): 43–76. doi:https://doi.org/10.1177/1354066100006001003. [Crossref], [Web of Science ®], [Google Scholar]
[48] Jervis, Robert. 1997. System Effects, Complexity in Political and Social Life. Princeton: Princeton University Press. [Google Scholar]
[49] Id.
[50] Von Clausewitz, Carl. 1976. On War, ed. Howard, Michael and Peter Paret. Princeton: Princeton University Press. [Crossref], [Google Scholar]
[51] Gleick, James. 1987. Chaos: The Making of a New Science. New York: Viking. [Google Scholar]
[52] Clerk Maxwell, James. 1969. “Science and Free Will.” In Lewis Campbell and William Garnett, The Life of James Clerk Maxwell. New York: Johnson Reprint Corporation. [Google Scholar]
[53] Horowitz, Michael C. 2019. “When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability.” Journal of Strategic Studies 42 (6): 764–788. doi:https://doi.org/10.1080/01402390.2019.1621174. [Taylor & Francis Online], [Google Scholar]
Payne, Kenneth. 2021. I, Warbot: The Dawn of Artificially Intelligent Conflict. New York: Oxford University Press. [Crossref], [Google Scholar]
Johnson, James, and Eleanor Krabill. 2020. “AI, Cyberspace, and Nuclear Weapons.” War on the Rocks. January. [Google Scholar]
[54] Several military organizations are testing alternative AI-ML approaches to compensate for the lack of labeled data (i.e. real-world information from the battlefield), which is needed to train existing supervised ML systems. These new approaches combine supervised ML with unsupervised deep-learning approaches, which work with a limited amount of annotated data. “Unsupervised machine learning in the military domain,” NATO Science & Technology Organization, 27 May 2021, https://www.sto.nato.int/Lists/STONewsArchive/displaynewsitem.aspx?ID=642.
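A minimal pseudo-labeling sketch in Python illustrates the kind of hybrid supervised/unsupervised approach this note describes (offered as an assumption for illustration only; the cited programs do not publish their methods): a model trained on a small labeled set assigns provisional labels to abundant unlabeled data, and only high-confidence predictions are retained for retraining.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# A small labeled set (scarce, expensive annotations) and a large unlabeled pool.
X_labeled = rng.normal(size=(50, 4))
y_labeled = (X_labeled.sum(axis=1) > 0).astype(int)
X_unlabeled = rng.normal(size=(500, 4))

# Step 1: train on the labeled data alone.
model = LogisticRegression().fit(X_labeled, y_labeled)

# Step 2: pseudo-label the unlabeled pool, keeping only confident predictions.
probs = model.predict_proba(X_unlabeled).max(axis=1)
confident = probs > 0.9                      # arbitrary confidence threshold
X_aug = np.vstack([X_labeled, X_unlabeled[confident]])
y_aug = np.concatenate([y_labeled, model.predict(X_unlabeled[confident])])

# Step 3: retrain on the augmented (labeled + pseudo-labeled) set.
model = LogisticRegression().fit(X_aug, y_aug)
print(f"kept {confident.sum()} of {len(X_unlabeled)} pseudo-labeled samples")
```

The design choice the note points to is the same trade-off this sketch makes explicit: provisional labels expand the training set, but the quality of the resulting model still rests on the small annotated core.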
[55] Shaw, Robert. 1981. “Strange Attractors, Chaotic Behavior, and Information Flow.” Zeitschrift der Naturforschung 36 (1): 80–112. doi:https://doi.org/10.1515/zna-1981-0115. [Crossref], [Google Scholar]
According to information theory, the more possibilities and information a system has, the greater the amount of “friction” and “noise” it embodies.
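The underlying relationship can be stated compactly with Shannon entropy (a standard textbook formulation included here only to illustrate the note’s point, not Shaw’s own derivation). For a system with n possible states occurring with probabilities p(x_i):

H(X) = -\sum_{i=1}^{n} p(x_i)\,\log_2 p(x_i)

Entropy is maximized at H = \log_2 n when all states are equally likely, so the more possible states a system admits (and the more evenly spread their probabilities), the more “noise” an observer must resolve before acting.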
[56] Johnson, James, and Eleanor Krabill. 2020. “AI, Cyberspace, and Nuclear Weapons.” War on the Rocks. January. [Google Scholar]
[57] Schelling, Thomas C. 1960. The Strategy of Conflict, 199–201. Cambridge, MA: Harvard University Press. [Google Scholar]
[58] Davis, Paul K., and Paul Bracken. 2022. “Artificial Intelligence for Wargaming and Modeling.” The Journal of Defense Modeling and Simulation. [Google Scholar]
[59] Jervis, Robert. 1997. System Effects, Complexity in Political and Social Life. Princeton: Princeton University Press. [Google Scholar]
[60] “Metacognition” – thinking about one’s own thinking – is a familiar concept to master chess players, who shift where they concentrate their thought before executing moves when faced with complex problems and trade-offs.
[61] Silver, Nate. 2015. The Signal and the Noise: Why so Many Predictions Fail: But Some Don’t, 272–273. New York, NY: Penguin Books. [Google Scholar]
[62] Gladwell, Malcolm. 2005. Blink: The Power of Thinking without Thinking. New York, NY: Little Brown and Company. [Google Scholar]
Simon, Herbert A. 1987. “Making Management Decisions: The Role of Intuition and Emotions.” The Academy of Management Executive 1 (1, February): 57–64. [Google Scholar]
[63] Horowitz, Michael C. 2010. The Diffusion of Military Power: Causes and Consequences for International Politics. Princeton, NJ: Princeton University Press. [Google Scholar]
Wolters, Timothy S. 2013. Information at Sea: Shipboard Command and Control in the US Navy, from Mobile Bay to Okinawa. Baltimore, Md.: Johns Hopkins University Press. [Google Scholar]
Kollars, Nina A. 2015. “War’s Horizon: Soldier-Led Adaptation in Iraq and Vietnam.” Journal of Strategic Studies 38 (4): 529–553. doi:https://doi.org/10.1080/01402390.2014.971947. [Taylor & Francis Online], [Google Scholar]
[64] Wolters, Timothy S. 2013. Information at Sea: Shipboard Command and Control in the US Navy, from Mobile Bay to Okinawa. Baltimore, Md.: Johns Hopkins University Press. [Google Scholar]
Kollars, Nina A. 2015. “War’s Horizon: Soldier-Led Adaptation in Iraq and Vietnam.” Journal of Strategic Studies 38 (4): 529–553. doi:https://doi.org/10.1080/01402390.2014.971947. [Taylor & Francis Online], [Google Scholar]
[65] AlphaStar Team. 2019. “AlphaStar: Mastering the Real-Time Strategy Game StarCraft II.” DeepMind Blog, 24 January. [Google Scholar]
[66] Ferreira, Raul S. 2020. “Machine Learning in a Nonlinear World: A Linear Explanation through the Domain of the Autonomous Vehicles.” European Training Network for Safer Autonomous Systems, 9 January. [Google Scholar]
[67] Kramer, Eric-Hans. 2015. “Mission Command in the Information Age: A Normal Accidents Perspective on Networked Military Operations.” Journal of Strategic Studies 38 (4): 445–466. doi:https://doi.org/10.1080/01402390.2013.844127. [Taylor & Francis Online], [Google Scholar]
[68] Cimbala, Stephen J. 2002. The Dead Volcano: The Background and Effects of Nuclear War Complacency. New York, NY: Praeger. [Google Scholar]
[69] Boyd, John, Organic Design for Command and Control (Unpublished presentation, 1987). [Google Scholar]
[70] Skitka, Linda J., Kathleen L. Mosier, and Mark Burdick. 1999. “Does Automation Bias Decision-Making?” International Journal of Human-Computer Studies 51 (5): 991–1006. doi:https://doi.org/10.1006/ijhc.1999.0252. [Crossref], [Web of Science ®], [Google Scholar]
[71] Cummings, Mary L. 2004. “Automation Bias in Intelligent Time-Critical Decision Support Systems.” AIAA 1st Intelligent Systems Technical Conference, 557–562. [Google Scholar]
[72] Graham, Catherine. “Undergrads Partner with NATO to Reduce Combat Casualties.” [The Hub]. 20 August 2021. [Google Scholar]
[73] “Developing Algorithms that Make Decisions Aligned with Human Experts.” DARPA.3 March 2022 [Google Scholar]
[74] Verma, Pranshu. “The Military Wants AI to Replace Human Decision-Making in Battle.” The Washington Post, 29 March 2022. [Google Scholar]
[75] Simonite, Tom. “A Health Care Algorithm Offered Less Care to Black Patients.” Wired, 24 October 2019. [Google Scholar]
[76] Goldfarb, Avi, and Jon Lindsay. 2022. “Prediction and Judgment, Why Artificial Intelligence Increases the Importance of Humans in War.” International Security 46 (3): 7–50. doi:https://doi.org/10.1162/isec_a_00425. [Crossref], [Google Scholar]
[77] Applin, Sally. 2018. “They Sow, They Reap: How Humans are Becoming Algorithm Chow.” IEEE Consumer Electronics Magazine 7 (2): 101–102. doi:https://doi.org/10.1109/MCE.2017.2776468. [Crossref], [Google Scholar]
Roff, Heather M. 2019. “Artificial Intelligence: Power to the People.” Ethics & International Affairs 33 (2): 124–140. [Google Scholar]
Schwarz, Elke. 2018. Death Machines: The Ethics of Violent Technologies. Manchester: Manchester University Press. [Crossref], [Google Scholar]
[78] Biggio, Battista, and Fabio Roli. 2018. “Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning.” Pattern Recognition. [Google Scholar]
Goodfellow, Ian, Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” arXiv preprint arXiv:1412.6572, 20 December. [Google Scholar]
[79] Saalman, Lora. “Fear of False Negatives: AI and China’s Nuclear Posture.” Bulletin of the Atomic Scientists. 24 April 2018 [Google Scholar]
[80] Russell, James A. 2010. Innovation, Transformation, and War: Counterinsurgency Operations in Anbar and Ninewa Provinces, Iraq, 2005–2007. Stanford, Calif.: Stanford University Press. [Google Scholar]
[81] Kamo, M., and H. Yokomizo. 2015. “Explanation of non-additive Effects in Mixtures of a Similar Mode of Action Chemicals.” Toxicology 335: 20–26. doi:https://doi.org/10.1016/j.tox.2015.06.008. [Crossref], [Google Scholar]
[82] Boghosian, Bruce M., P. V. Coveney, and H. Wang. 2019. “A New Pathology in the Simulation of Chaotic Dynamical Systems on Digital Computers.” Advanced Theory and Simulations 2 (12): 1–8.
[83] Deeks, Ashley, Noam Lubell, and Daragh Murray. 2019. “Machine Learning, Artificial Intelligence, and the Use of Force by States.” Journal of National Security Law & Policy 10 (1): 1–25. [Google Scholar]
[84] Russell, Stuart. 2019. Human Compatible. New York: Viking Press. [Google Scholar]
Sauer, Frank. 2021. “How (Not) to Stop the Killer Robots: A Comparative Analysis of Humanitarian Disarmament Campaign Strategies.” Contemporary Security Policy 42 (1): 4–29. doi:https://doi.org/10.1080/13523260.2020.1771508. [Taylor & Francis Online], [Google Scholar]
Schmitt, Michael. 2013. “Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics.” Harvard National Security Journal 4: 1–37. [Google Scholar]
[85] Watson, David. 2019. “The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence.” Minds and Machines 29 (3): 417–440. doi:https://doi.org/10.1007/s11023-019-09506-6. [Crossref], [Web of Science ®], [Google Scholar]
[86] Betts, Richard K. 2007. Enemies of Intelligence: Knowledge and Power in American National Security. New York: Columbia University Press. [Google Scholar]
[87] Lieber, Keir. 2000. “Grasping the Technological Peace: The Offense-Defense Balance and International Security.” International Security 25 (1): 71–104. doi:https://doi.org/10.1162/016228800560390. [Crossref], [Web of Science ®], [Google Scholar]
[88] Acton, James M., et al., eds. 2017. Entanglement: Russian and Chinese Perspectives on Non-Nuclear Weapons and Nuclear Risks. Washington, DC: Carnegie Endowment for International Peace. [Google Scholar]
Morgan, Forrest E., et al. 2008. Dangerous Thresholds: Managing Escalation in the 21st Century. Santa Monica, CA: RAND Corporation. [Google Scholar]
Dobos, Ned. 2020. Ethics, Security, and the War-Machine: The True Cost of the Military. Oxford: Oxford University Press. [Crossref], [Google Scholar]
[89] Russell, Stuart. 2021. “Reith Lectures 2021: Living with Artificial Intelligence.” [Google Scholar]
[90] US Department of Defense, “Summary of the Joint All-Domain Command and Control (JADC2) Strategy,” March 2022, https://media.defense.gov/2022/Mar/17/2002958406/-1/-1/1/SUMMARY-OF-THE-JOINT-ALL-DOMAIN-COMMAND-AND-CONTROL-STRATEGY.PDF [Google Scholar]
[91] Johnson, James. 2018. “China’s Vision of the Future network-centric Battlefield: Cyber, Space and Electromagnetic Asymmetric Challenges to the United States.” Comparative Strategy 37 (5): 373–390. doi:https://doi.org/10.1080/01495933.2018.1526563. [Taylor & Francis Online], [Google Scholar]
Other countries, including the UK (e.g. the British Army’s Project Theia), France (e.g. the French Air Force’s Connect@aero program), and NATO (e.g. IST-ET-113 Exploratory Team), have also started developing and testing AI technology to support C2 and military decision-making; these efforts, however, remain limited in scale and scope. “Unsupervised machine learning in the military domain,” NATO Science & Technology Organization, 27 May 2021, https://www.sto.nato.int/Lists/STONewsArchive/displaynewsitem.aspx?ID=642; “Digging deeper into THEIA,” The British Army, 8 July 2021, https://www.army.mod.uk/news-and-events/news/2021/07/digging-deeper-into-theia/; and Philippe Gros, “The ‘tactical cloud,’ a key element of the future combat air system,” Fondation Pour La Recherche Stratégique, no. 19:19, 2 October 2019.
[92] Yan, Ke, Yang Kuo, and Shi Hongbo. 2022. “Human-on-the-Loop: The Development Trend of Intelligentized Command Systems.” PLA Daily. March 17. [Google Scholar]
[93] Brose, Christian. 2020. The Kill Chain: Defending America in the Future of High-Tech Warfare. New York, NY: Hachette. [Google Scholar]
[94] Krulak, Charles C. 1999. “The Strategic Corporal: Leadership in the Three Block War.” Marines Magazine, January. [Crossref], [Google Scholar]
[95] Kollars, Nina A. 2015. “War’s Horizon: Soldier-Led Adaptation in Iraq and Vietnam.” Journal of Strategic Studies 38 (4): 529–553. doi:https://doi.org/10.1080/01402390.2014.971947. [Taylor & Francis Online], [Google Scholar]
[96] The judgments (i.e. outputs) of AI-ML algorithms cannot be fully determined in advance because it would take too long to specify all possible contingencies; thus, human judgment is required to interpret the system’s predictions and inform decision-making.
[97] Boyd, John, The Strategic Game of ? and ? (Unpublished presentation, 1987). [Google Scholar]
[98] Johnson, James, and Eleanor Krabill. 2020. “AI, Cyberspace, and Nuclear Weapons.” War on the Rocks. January. [Google Scholar]
[99] Freedberg, Sydney, Jr. “Let Leaders Off the Electronic Leash: CSA Milley.” Breaking Defense, 5 May 2017, https://breakingdefense.com/2017/05/let-leaders-off-the-electronic-leash-csa-milley/ [Google Scholar]
[100] Boyd, John, Patterns of Conflict (unpublished presentation, draft version, 1982). [Google Scholar]
[101] Singer, Peter W. 2009. “Robots and the Rise of Tactical Generals.” Brookings, 9 March. [Google Scholar]
[102] US Office of Naval Research, Data Focused Naval Tactical Cloud (DF-NTC), ONR Information Package, 24 June 2014. [Google Scholar]
[103] Freedberg, Sydney, Jr. “Let Leaders Off the Electronic Leash: CSA Milley.” Breaking Defense, 5 May 2017, https://breakingdefense.com/2017/05/let-leaders-off-the-electronic-leash-csa-milley/ [Google Scholar]
[104] Gros, Philippe. 2 October 2019. “The “Tactical Cloud,” a Key Element of the Future Combat Air System.” Fondation Pour La Recherche Stratégique (19): 19. [Google Scholar]
[105] Hawley, John K. 2017. Patriot Wars: Automation and the Patriot Air and Missile Defense System. Washington, DC: CNAS, January. [Google Scholar]
For example, in 2003 a MIM-104 Patriot surface-to-air missile battery’s automated system misidentified a friendly aircraft as hostile, an error that human operators failed to correct, leading to the friendly-fire death of a US Navy F-18 pilot. Other potential causes of misguided or erroneous algorithmic outputs include open-source misinformation and disinformation, adversarial attacks, data poisoning, or simply the use of outdated, malfunctioning, or biased training datasets.
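To illustrate the adversarial-attack pathway listed above, the “fast gradient sign method” described by Goodfellow and colleagues (see note [78]) perturbs an input x in the direction that maximally increases a model’s loss J with respect to its parameters θ and the true label y (a standard formulation, not specific to the Patriot system or any fielded military classifier):

x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\left(\nabla_{x} J(\theta, x, y)\right)

Here ε is a perturbation small enough to be imperceptible to a human operator yet often sufficient to flip the classifier’s output, for example causing an automated target-recognition model to mislabel a friendly platform as hostile.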
[106] Fabre, Cecile. 2022. Spying through a Glass Darkly. London: Oxford University Press. [Crossref], [Google Scholar]
[107] Brewer, Marilynn B., and William D. Crano. 1994. Social Psychology. New York, NY: West. [Google Scholar]
[108] Perrow, Charles. 1999. Normal Accidents. Princeton: Princeton University Press. [Google Scholar]
[109] Lockie, Alex. “The Real Story of Stanislav Petrov, the Soviet Officer Who ‘Saved’ the World from Nuclear War.” Business Insider. 26 September 2018 [Google Scholar]
[110] Baum, Seth D., Robert de Neufville, and Anthony M. Barrett. “A Model for the Probability of Nuclear War,” Global Catastrophic Risk Institute, Global Catastrophic Risk Institute Working Paper 18–1 (March 2018), pp. 19–20. [Google Scholar]
[111] Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. [Google Scholar]