Knowledge in the Grey Zone: AI and Cybersecurity


Tim Stevens is a Senior Lecturer in Global Security & War Studies and Head of the King’s Cyber Security Research Group at King’s College London, UK.

Volume I, Issue 1, 2022
Defense & Security Forum
a Mauduit Study Forums’ Journal
Remy Mauduit, Editor-in-Chief

Stevens, Tim. 2020. Knowledge in the grey zone: AI and cybersecurity. Digital War 1. https://doi.org/10.1057/s42984-020-00007-w.

ARTICLE INFO

Keywords
algorithms
artificial intelligence
automation
cybersecurity
grey zone
knowledge

ABSTRACT
Cybersecurity protects citizens and society from harm perpetrated through computer networks. Its task is made even more complex by the diversity of actors—criminals, spies, militaries, hacktivists, firms—operating in global information networks, so that cybersecurity is intimately entangled with the so-called grey zone of conflict between war and peace in the early twenty-first century. To counter cyber threats from this environment, cybersecurity is turning to artificial intelligence and machine learning (AI) to detect and mitigate anomalous behaviors in cyberspace. This article argues that AI algorithms create new modes and sites for cybersecurity knowledge production through hybrid assemblages of humans and non-humans. It concludes by looking beyond ‘everyday’ cybersecurity to military and intelligence use of AI and asks what is at stake in the automation of cybersecurity.

Cybersecurity in the grey zone

Discussions of ‘digital war’ quickly resolve to an understanding that global digital connectivity has disturbed our orthodox understandings of war and peace. In the so-called grey zone [1] below the threshold of armed conflict, a diverse and fluid cast of actors attempts to derive political, strategic, and economic gain through the ever-proliferating assemblage of information and communications technologies. Even if it is a step too far to label many of these activities—political activism, crime, espionage, subversion—as ‘war’ or ‘warlike’, it is not unreasonable to characterize the wider environment of cybersecurity as one of persistent grey-zone conflict. [2] At the strategic level, for instance, suitably equipped states view ‘cyber’ as a coercive tool worth exploiting for national gain, be it in military operations, intelligence-gathering, or commercial cyber espionage. [3] As most of this occurs below the level of conventional conflict, it has been seen as a way of limiting conflict escalation, albeit always with the potential to generate just the opposite. [4]

The twists and turns of operations in this digital domain may be surprising and their effects—since causality can rarely be demonstrated—remarkable, but the vectors of their propagation are familiar, omnipresent, and quotidian, including the global internet and the myriad consumer electronics that share and shape our lives. So too the practices of cybersecurity, which, for all the justifiable interest in ‘cyberwarfare’ and ‘cyber espionage’, constitute the ‘everyday’ of digital conflict in the grey zone. In the daily exchanges of technical cybersecurity, there is a constant to-and-fro between network defenders and attackers. Cybersecurity practitioners wring the most out of existing technologies to wrest the upper hand from their adversaries, whether the humblest hacker or the best-resourced state entity. Inevitably, however, they also seek to employ innovative technologies as the environment develops. In tune with the zeitgeist, artificial intelligence and machine learning (AI) are proposed as an essential ‘solution’ to many cybersecurity threats.

The following discussion addresses the identification of cyber threats through AI and algorithmic means and shows a shift from signature- to anomaly-based approaches to threat detection and mitigation. It then moves to the specific application of AI to anomaly detection, particularly with respect to the construction of ‘normal’ organizational ‘patterns of life’. The epistemic implications of this technical orientation are then unpacked, specifically how AI creates new sites of knowledge production through new human-nonhuman subjectivities. This brief exploration concludes by looking beyond everyday cybersecurity to military and intelligence ‘cyber’ and asks what is at stake in the automation of cybersecurity.

From signatures to anomalies

Cybersecurity cannot proceed without data. Access to and protection of data are core practical considerations for cybersecurity professionals. As more computing devices become networked in the ‘Internet of Things’ and data volumes—at rest and in transit—continue to grow, the ‘attack surface’ of the global information environment expands continuously. Understood as the totality of weak points exploitable by nefarious actors, the attack surface is less a physical boundary to be defended than a logical membrane of potential vulnerability distributed in space and time. Preventing the exploitation of flaws in this integument, through which malicious code can be inserted or valuable data extracted, is a near-impossible task, not least as vulnerabilities are inherent in information systems. As Libicki notes, there is ‘no forced entry in cyberspace. Whoever gets in enters through pathways produced by the system itself’. [5] The sheer diversity and growth of this environment compound this ontological insecurity so that defense can never be perfect and attackers seem always to be one step ahead. Cybersecurity is about making visible those vulnerabilities and threats to counter them, an informational challenge that relies on timely and intelligible data for interpretation and effective use in prevention and remediation.

The collection and filtering of data about the status of information systems and threats to their intended functioning have long been largely automated. Humans do not have the cognitive or sensory capacity to cope with the enormous data volumes produced by software and hardware dedicated to alerting systems administrators to problems inside and outside their networks. Add to this a human capital shortfall in the cybersecurity industry and automation is a reasonable technical fix for some of these problems. [6] Cybersecurity vendors, for example, offer thousands of different products for automated malware detection. These software packages inspect network traffic and match data packets to known signatures of malicious software (malware) that installs itself on users’ machines and acts in deleterious ways. Now much more than just ‘anti-virus’ protection, these packages also detect and deny worms, Trojans, spyware, adware, ransomware, rootkits, and other software entities searching for and exploiting digital vulnerabilities. Automated in this fashion, malware can be repelled, quarantined, destroyed, or otherwise prevented from infecting a system and opening it up for exploitation. Effective as this has been in the past, new types of polymorphic and metamorphic malware, which change their ‘appearance’ every few seconds, can evade traditional techniques of ‘signature-based’ detection that rely on existing knowledge of malware species. You cannot defend against these rapidly mutating threats if they have no signature matches in databases that are inevitably always out of date.
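
By way of illustration, the following is a minimal sketch of signature-based detection, assuming a database of cryptographic digests of known malware. It uses the industry-standard EICAR test string rather than real malware, and it shows why polymorphism defeats the approach: a one-byte mutation changes the digest entirely.

```python
import hashlib

# The EICAR string is the industry-standard harmless test 'malware'.
EICAR = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

# Hypothetical signature database: SHA-256 digests of known malicious payloads.
KNOWN_SIGNATURES = {hashlib.sha256(EICAR).hexdigest()}

def is_known_malware(payload: bytes) -> bool:
    """Flag a payload only if its digest matches a stored signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_SIGNATURES

print(is_known_malware(EICAR))            # True: exact match against the database
print(is_known_malware(EICAR + b"\x00"))  # False: a one-byte mutation evades detection
```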

Signature-based detection is still effective against known malware, but technical cybersecurity is developing ‘anomaly-based’ capabilities that compare network traffic to continuously revised baselines of ‘normal’ behavior to detect intrusions, analyze malware, and detect spam and phishing attacks. Traffic classified as legitimate against a baseline is permitted across network boundaries, but unusual behaviors are automatically detected and flagged for further investigation, including by human analysts. Yet, as the number of alerts triggered increases, many requiring human attention, we are thrown back again to the limits of organizational and cognitive capacity. The persistent creation of false negatives (inappropriate behavior treated as good) and false positives (good behavior labeled as bad) exacerbates this situation and further drains resources. [7] One response has been to add a new layer of automation to the parsing and analysis of these ‘big data’ sets, using emerging technologies of AI to process and interpret anomalous behavior and feed it back into cybersecurity decision-making processes in ever-shortening cycles of action and reaction. Cybersecurity and AI have not always been obvious bedfellows. In the early days of each field, computer security researchers viewed with some distrust the relative unpredictability of AI system behaviors, their multiple outputs anathema to the order sought by software engineers. [8] Now, the possibility that AI can learn and adapt in dynamic ways makes it more attractive for countering similarly adaptive cybersecurity threats.
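
The anomaly-based logic can be sketched minimally under the simplifying assumption of a single traffic feature and a rolling statistical baseline. The window size and threshold below are illustrative, and the threshold is precisely where the false-positive/false-negative trade-off lives.

```python
# A sketch of anomaly-based detection on one feature (bytes per minute).
# The baseline of 'normal' is a rolling mean and standard deviation; traffic
# more than `threshold` standard deviations away is flagged for human review.
from statistics import mean, stdev

def flag_anomalies(traffic: list[float], window: int = 30, threshold: float = 3.0):
    flagged = []
    for i in range(window, len(traffic)):
        baseline = traffic[i - window:i]          # continuously revised baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(traffic[i] - mu) / sigma > threshold:
            flagged.append(i)                     # anomalous: escalate to an analyst
    return flagged

# A lower threshold catches more intrusions (fewer false negatives) but also
# mislabels more benign spikes (more false positives).
normal = [100.0 + (i % 7) for i in range(60)]
print(flag_anomalies(normal + [900.0]))  # the injected spike at index 60 is flagged
```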

The promise of AI

Using AI in cybersecurity is not entirely new. If we understand ‘AI’ as a placeholder for a set of algorithmic machine learning heuristics rather than a new form of ‘intelligence’, cybersecurity has been alive to its potential since at least the 1990s. Early applications include the filtering of spam email using Bayesian logic and the classification of spam by large neural networks. [9] AI has increased significantly in sophistication since then, and its development has partly been occasioned by adversaries’ use of those same technologies. Learning algorithms fuel the evolution of the aforementioned polymorphic and metamorphic malware, for instance. They have also been used to detect vulnerabilities in target systems and to assist in their exploitation and subversion: the cybersecurity industry is deeply worried by the future deployment of ‘adversarial’ or ‘offensive’ AI, a concern shared with governments and military and intelligence agencies. [10] In this respect, AI deployments are part of an emerging ‘arms race’ between attackers and defenders. [11] This message is not lost on enterprises: in 2019, 61% reported that they could not detect network breaches without AI technologies, and 69% believed AI would be necessary to respond to those breaches. [12]
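
The early Bayesian spam filtering mentioned above can be sketched as a naive Bayes classifier over word counts. The tiny corpus here is invented purely for illustration; real filters train on millions of labeled messages.

```python
# A minimal naive Bayes spam filter: compare class-conditional word
# likelihoods for 'spam' and 'ham' training corpora (equal priors assumed).
from collections import Counter
import math

spam = ["win cash now", "cheap pills now", "win win prize"]
ham = ["meeting at noon", "project update attached", "lunch at noon"]

def train(msgs):
    counts = Counter(w for m in msgs for w in m.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts, total):
    # Laplace smoothing avoids zero probabilities for unseen words.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in msg.split())

def is_spam(msg):
    return log_prob(msg, spam_counts, spam_total) > log_prob(msg, ham_counts, ham_total)

print(is_spam("win a cash prize now"))        # True
print(is_spam("noon meeting about project"))  # False
```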

One notable response to this market demand has been the development of cybersecurity AI platforms that claim to mimic natural ‘immune systems’, borrowing from a long analogical association between cybersecurity and notions of public health and epidemiology. [13] Companies promote anomaly-based AI as a way of learning what ‘normal’ means for an organization and then exploring unusual behaviors. The tolerance of the AI engine to anomalous behaviors can then be adjusted, based on the risk appetite of the organization and its resourcing levels. [14] The ‘immune system’ metaphor plays out in a platform’s putative capacity to detect, root out, and inoculate against further suspicious behaviors. In one firm’s account of its ‘Enterprise Immune System’ package, ‘by detecting subtle deviations from the organization’s “pattern of life”, it can distinguish friend from foe—and highlight true cyber threats or attacks that would otherwise go unnoticed’. [15] These vendors claim another benefit for potential clients, in that they free up analysts’ time to focus on more important tasks, instead of being mired in ‘low-priority or benign events from the start’. [15] This is a recognition of the frequency of false negatives and positives—although AI promises to reduce their incidence by learning what is ‘normal’ over time—but is also a legitimate comment on the exigencies of life in modern security operations centers, which are bombarded with security reports whose importance can be difficult to discern at first glance. Automated AI systems can handle the bulk of low-level threat detection and response, allowing analysts to tackle high-level behaviors and incidents instead. This includes developing profiles of the human actors behind sophisticated cyber campaigns, such as so-called advanced persistent threats (APTs), to tailor deterrence and response regimes. A further promising avenue of research is to capture analysts’ real-life behaviors and feed this data back into learning algorithms as a way of shaping AI responses to cybersecurity incidents. [16]
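
How a tolerance dial of this kind might look in practice can be sketched with scikit-learn’s IsolationForest, whose contamination parameter (the assumed share of anomalous traffic) stands in here for the vendor’s risk-appetite setting. The features and data are invented for illustration, not drawn from any vendor’s product.

```python
# Tuning an anomaly engine's tolerance to organizational risk appetite.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: [logins per hour, MB transferred per hour] for 'normal' activity.
normal_activity = rng.normal(loc=[5.0, 20.0], scale=[1.0, 4.0], size=(500, 2))
suspicious = np.array([[40.0, 900.0]])  # burst of logins plus bulk data transfer

# A risk-averse organization sets contamination high (more alerts, more
# analyst workload); a risk-tolerant one sets it low (fewer alerts).
for contamination in (0.01, 0.10):
    model = IsolationForest(contamination=contamination, random_state=0)
    model.fit(normal_activity)
    verdict = model.predict(suspicious)  # -1 = anomaly, +1 = normal
    print(contamination, verdict)
```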

AI systems learn about the threat environment within and without the notional perimeters of organizations. This includes employees and other people interacting with those networks: clients, customers and government agencies, citizens, and service users. Monitoring these individuals allows AI to establish an organization’s internal rhythms and patterns, a normal baseline against which anomalies can be registered, such as ‘insider threats’ posed by disgruntled employees, leakers, and whistle-blowers. [17] AI-powered behavioral analytics might detect, for instance, unusual employee activity at odd times of the night or unexpected spikes in transfers of sensitive commercial data or intellectual property. Combined with multiple other data streams, these analytics build a picture of anomalous behavior within an organization’s immediate purview, which can lead to further investigation and censure. Using AI to predict human rather than just machine behavior is an important contributor to ‘sociotechnical’ framings of cybersecurity that address how technology, users, and processes interact in complex organizational systems. [18] An uneasy tension exists in this field between designing computer networks that enhance the human security of end-users [19] and using technology to discern anomalous behaviors, a perspective that perceives all end-users as potential vectors of threat. [20]
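
A hypothetical sketch of such behavioral analytics follows: per-user baselines of working hours and transfer volumes, against which individual events are scored. The user names, thresholds, and events are all invented.

```python
# Per-user behavioral baselines for insider-threat detection.
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    hour: int        # hour of day, 0-23
    mb_moved: float  # volume of data transferred

# Hypothetical baseline learned from history: typical working hours and
# typical transfer volume per user.
BASELINES = {"alice": {"hours": range(8, 19), "max_mb": 50.0}}

def score(event: Event) -> list[str]:
    """Return human-readable reasons an event deviates from its baseline."""
    base = BASELINES.get(event.user)
    if base is None:
        return ["no baseline for user"]
    reasons = []
    if event.hour not in base["hours"]:
        reasons.append("activity outside normal working hours")
    if event.mb_moved > base["max_mb"]:
        reasons.append("transfer volume exceeds historical maximum")
    return reasons

print(score(Event("alice", hour=3, mb_moved=400.0)))
# ['activity outside normal working hours', 'transfer volume exceeds historical maximum']
```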

AI, anomalies, and episteme

The search for anomalies shifts the emphasis of cybersecurity from known threats to the prolepsis of as-yet-unknown threats and into an anticipatory posture that has received much attention in the critical security literature. [21] It also suggests a more inductive approach to the creation of useful cybersecurity knowledge from extensive data sets. Driven by AI, anomaly-based cybersecurity practices search for patterns, associations, and correlations that signature analysis, understood as a form of deductive reasoning, cannot identify on its own. In part, this channels the infamous claim made of big data analytics that they make the hypothetico-deductive method ‘obsolete’. [22] As critics of this position observe, all inquiry, big data analytics included, instantiates many theoretical assumptions and biases. [23] In AI-driven machine learning, human programmers often set parameters, data categories, and labels in advance. This ‘supervised learning’ differs from ‘unsupervised’ learning techniques, in which algorithms do the labeling and categorization themselves and might therefore be ‘unbounded from the bias of specific localities’. [24] The quality of training data is therefore key to determining output quality, and supervised machine learning can give rise to outcomes that reproduce programmers’ biases. [25] In cybersecurity, the assembling of ‘cyber threat intelligence’ (CTI) from multiple data streams is prone to various biases, as analysts are swayed by groupthink, external social media and reporting, seduction by the power of expertise, and their personal beliefs and prejudices; responsible CTI cannot, therefore, be the preserve of machines alone. [26] These everyday decisional acts make up ‘little security nothings’ that have even greater import in the aggregate than in their existence alone. [27]
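
The supervised/unsupervised distinction can be made concrete with a short sketch: the same two-feature traffic data is handed to a classifier trained on human-chosen labels and to a clustering algorithm that produces its own groupings. The data and features are illustrative assumptions.

```python
# Supervised vs unsupervised learning on the same synthetic traffic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
benign = rng.normal([10.0, 10.0], 1.0, size=(100, 2))
malicious = rng.normal([30.0, 30.0], 1.0, size=(100, 2))
X = np.vstack([benign, malicious])

# Supervised: analysts set the categories ('benign'=0, 'malicious'=1) in
# advance, so the model inherits whatever biases shaped those labels.
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)

# Unsupervised: k-means labels and categorizes by similarity alone; the
# resulting cluster IDs carry no human-assigned meaning.
clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)

print(clf.predict([[29.0, 31.0]]))  # a prediction in the analysts' vocabulary
print(set(clusters))                # {0, 1}: machine-made categories
```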

As Aradau and Blanke identify in their critique of ‘algorithmic rationalities’, the hunt for anomalies in big data security environments subtly alters the logic of security in three specific ways. [28] Referring to intelligence and counter-terrorism rather than cybersecurity, they note first how advanced analytics are, as outlined above, a response to the increased data volumes of interest to security practitioners. Anomaly-based analytics are geared, therefore, to the practical requirement for ‘actionable information’, rather than ‘good or truthful information’, which unravels claims of ‘algorithmic objectivity’ and shows ‘how uncertainty is radically embedded within algorithmic reasoning’. [29] This is compounded by the lack of standardized data collection and classification procedures, which complicates comparisons between data sets. Second, anomalies are constructed through their divergence from normality, which is defined as similarity between data points rather than by predetermined categories. ‘Otherness’ is expressed as ‘dots, spikes, or nodes’ in geometric or topological space that are anomalous in their dissimilarity to the ‘normal’ so constructed. Whilst this may seem to avoid the questions of bias previously noted, this unmooring from negative categorizations presents a political problem: how to ‘reconnect techniques of producing dots, spikes and nodes with vocabularies of inequality and discrimination’? [30] Third, stable categories—as in statistical analyses—give way to ‘continuous calculations of similarity and dissimilarity’, refusing direct political contestation, particularly as those calculations ‘remain invisible, often even to the data analysts themselves’. [31] Hidden within the archetypal ‘black box’ of technology, defined in terms of inputs and outputs only [32], the computation of AI technologies is frequently unknowable and therefore radically opaque. [33]
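
Aradau and Blanke’s ‘dots, spikes, or nodes’ can be illustrated with a sketch of distance-based anomaly scoring, in which otherness is purely geometric: each observation’s score is its mean distance to its k nearest neighbours, and no predefined category of ‘malicious’ appears anywhere in the computation. The data and parameters are invented.

```python
# Anomaly-as-dissimilarity: score each 'dot' in feature space by its mean
# distance to its k nearest neighbours.
import numpy as np

def knn_anomaly_scores(points: np.ndarray, k: int = 5) -> np.ndarray:
    # Pairwise Euclidean distances between all points.
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    # For each point, average distance to its k nearest neighbours
    # (column 0 is the point itself, at distance zero).
    nearest = np.sort(dists, axis=1)[:, 1:k + 1]
    return nearest.mean(axis=1)

rng = np.random.default_rng(2)
cloud = rng.normal(0.0, 1.0, size=(200, 2))  # the 'normal' mass of dots
outlier = np.array([[8.0, 8.0]])             # a spike of pure dissimilarity
scores = knn_anomaly_scores(np.vstack([cloud, outlier]))
print(scores.argmax())  # 200: the outlier has the largest mean k-NN distance
```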

The complexity, uncertainty, and lack of transparency of anomaly-based AI technologies, in cybersecurity and elsewhere, raise questions about AI trust, safety, and control. The postulated response is either to double down on engineering fixes to these problems or to figure out precisely where humans should stand in relation to the ‘loop’ of AI decision-making. In lethal autonomous weapons systems and the use of AI in war, the question has become: should humans be in, on, or out of this loop? [34] In cybersecurity, as in war, analysts and operators are enmeshed in hybrid human-machine assemblages that generate knowledge of particular utility in specific decision-making circumstances. The production of cyber threat intelligence is already described in terms of hybrid configurations of people, data, machines, and organizations, tasked with meeting specific operational and strategic demands. [35] Srnicek captures this dynamic when he analyses the ‘delegation of thought’ to machines in ‘cognitive assemblages’ of sociotechnical composition. [36] In these epistemic loci, heterogeneous human and nonhuman actors, including algorithms, contribute to the creation of expert knowledge, which is ‘collective and distributed rather than individual or solely social’. [37] They establish ‘collaborations of influence’ that have agency in how and what kind of knowledge is produced. [38] Are we ‘in the loop’ if our choices are between paths of action relayed by machines? [39] If unsupervised learning systems develop their own questions and goals, what becomes of the human at all? The traditional mantra that cybersecurity is a ‘team sport’ is better read as a ‘dance of agency’ once this relational process of delegation and complex agency in AI-rich environments takes hold. [40]

Beyond the everyday

The foregoing review has addressed the everyday cybersecurity necessary to maintain the ordinary functioning of the information systems on which daily life depends. AI has an important role in this diverse landscape, automating the mundane, learning from its environment, and promoting better protection for users, organizations, and societies. However, as alluded to above, the role of algorithms in creating and sustaining specific socioeconomic and political arrangements is never neutral. [41] AI-driven anomaly detection creates new subjectivities in the grey zone of modern informational conflict and risks sacrificing ‘truth’ for ‘utility’ and ‘convenience’ in the managerial practices of organizational cybersecurity. This radical contingency creates political problems, as AI substitutes for human cognition as the arbiter of network decision-making and regulation of data flows. Anomalies may escape easy mapping onto established groupings of people and behaviors, but there are multiple examples of the insidious effects of this computational logic. Tools of data manipulation similar to those in AI-fuelled cybersecurity are easily turned into economic and political projects of worrisome scale and impact. From the ‘behavioral futures markets’ of surveillance capitalism [42] to the Chinese social credit system [43], from the excesses of domestic surveillance [44] to online information operations in a post-truth era, algorithms make up new modalities and mediators of political influence and control. [45] Whilst we have few grounds for assuming that AI is inferior to human intelligence [46], questions of knowledge, power, and subjectivity are intimately bound up with these algorithmic rationalities. [47]

AI and cybersecurity are also far from neglected in the realms of secret intelligence and military operations. AI is a sine qua non for modern militaries [48] and a potent augmentation to strategic planning. [49] Without AI in cyberspace, the former head of US Cyber Command remarked in Senate testimony, ‘you are always behind the power curve’. [50] Militaries and intelligence agencies are investing heavily in AI-enabled cyber capabilities, including the US Department of Defense’s new Joint Artificial Intelligence Center and many other initiatives leveraging AI to generate operational and strategic effects in and through cyberspace. [51] The UK is developing ‘human-machine teaming’ in AI-rich environments, including cyber, to promote ‘the effective integration of humans, artificial intelligence (AI) and robotics into war-fighting systems… that exploit the capabilities of people and technologies to outperform our opponents’. [52] Known in the USA as ‘centaur warfighters’, these hybrids will ‘leverage the precision and reliability of automation without sacrificing the robustness and flexibility of human intelligence’. [53] This continues a long lineage of hybrid military subjectivities [54], oriented now to predictive machine cognition, providing decision advantage to the new military operations of ‘cognitive maneuver’ in global information space. [55] These hybrid entities are not just network defenders but agents of cyber warfare and cyber espionage. As the US adopts new doctrines of ‘persistent engagement’ and ‘defending forward’ in cyberspace, they will be on the front lines of digital warfare and intelligence, with as-yet-unknown autonomy in decision-making and target discrimination. [56] In this assemblage, ‘we may never know if a decision is a decision… or if it has been “controlled by previous knowledge” and “programmed”’. [57]

Militaries and intelligence agencies join firms, citizens, criminals, hacktivists, and others using AI-enabled cyber tools in the complex battlespaces of the grey zone. Technical cybersecurity exploits AI to overcome constraints on human cognition, to speed up decision-making, and to provide better protection to organizations and end-users of information systems. In doing so, it prioritizes operational efficiency over truth and creates new subjectivities and targets of intervention. Algorithms and automation also create new sites of knowledge production whilst simultaneously obfuscating the precise calculations that lead to particular outputs. This draws attention to problematic aspects of the delegation of thought to machines, because it portends a decrease in opportunities for meaningful oversight and regulation. The stakes could not be higher in military cyber targeting or digital intelligence—strategic cybersecurity—compared with which the use of AI in everyday cybersecurity is a straightforward proposition. The role of AI in cybersecurity, a core modality of offense-defense dynamics in the grey zone, remains open to contestation, modulating as it does the global flows of data in our nominal ‘information age’. In considering AI in cybersecurity and elsewhere, perhaps we should revisit Lewis Mumford’s provocation: ‘Unless you have the power to stop an automatic process, you had better not start it’. [58] Where is the agency in the new cybersecurity assemblage, and who or what makes the decisions that matter?


[1] Wirtz, James J. 2017. Life in the ‘gray zone’: Observations for contemporary strategists. Defense and Security Analysis 33 (2): 106–114.

[2] Corn, Gary P. 2019. Cyber national security: Navigating gray-zone challenges in and through cyberspace. In Complex Battlespaces: The Law of Armed Conflict and the Dynamics of Modern Warfare, ed. Christopher M. Ford and Winston S. Williams, 345–428. New York: Oxford University Press.

[3] Brantly, Aaron F. 2018. The Decision to Attack: Military and Intelligence Decision-Making. Athens: University of Georgia Press.
Valeriano, Brandon, Benjamin Jensen, and Ryan C. Maness. 2018. Cyber Strategy: The Evolving Character of Power and Coercion. New York: Oxford University Press.

[4] Brands, Hal. 2016. Paradoxes of the gray zone. Foreign Policy Research Institute, 5 February. https://www.fpri.org/article/2016/02/paradoxes-gray-zone/.
Schneider, Jacquelyn. 2019. The capability/vulnerability paradox and military revolutions: Implications for computing, cyber, and the onset of war. Journal of Strategic Studies 42 (6): 841–863.

[5] Libicki, Martin C. 2009. Cyberdeterrence and Cyberwar. Santa Monica: RAND Corporation.

[6] Shires, James. 2018. Enacting expertise: Ritual and risk in cybersecurity. Politics and Governance 6 (2): 31–40.

[7] Libicki, Martin C., Lillian Ablon, and Tim Webb. 2015. The Defender’s Dilemma: Charting a Course Toward Cybersecurity. Santa Monica: RAND Corporation.

[8] Landwehr, Carl E. 2007. Cybersecurity and artificial intelligence: From fixing the plumbing to smart water. IEEE Security and Privacy 6 (5): 3–4.

[9] Brunton, Finn. 2013. Spam: A Shadow History of the Internet. Cambridge, MA: MIT Press.

[10] Brundage, Miles, et al. 2018. The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation. Oxford: Future of Humanity Institute et al. https://maliciousaireport.com/.

[11] Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W.W. Norton.

[12] Capgemini. 2019. Reinventing Cybersecurity with Artificial Intelligence: A New Frontier in Digital Security. https://www.capgemini.com/research/reinventing-cybersecurity-with-artificial-intelligence/. Accessed 8 Jan 2020.

[13] Betz, David, and Tim Stevens. 2013. Analogical reasoning and cyber security. Security Dialogue 44 (2): 147–164.
Weber, Steven. 2017. Coercion in cybersecurity: What public health models reveal. Journal of Cybersecurity 3 (3): 173–183.

[14] Wilkinson, Dick. 2019. The future of AI in cybersecurity. CISO Mag, 29 October. https://www.cisomag.com/the-future-of-ai-in-cybersecurity/.

[15] Darktrace. n.d. Cyber AI platform. https://www.darktrace.com/en/technology/. Accessed 1 Jan 2020.

[16] Bresniker, Kirk, Ada Gavrilovska, James Holt, Dejan Milojicic, and Trung Tran. 2019. Grand challenge: Applying artificial intelligence and machine learning to cybersecurity. Computer 52 (12): 45–52.

[17] Wall, David S. 2013. Enemies within: Redefining the insider threat in organizational security policy. Security Journal 26 (2): 107–124.

[18] Coles-Kemp, Lizzie, and René Rydhof Hansen. 2017. Walking the line: The everyday security ties that bind. In Human Aspects of Information Security, Privacy and Trust, ed. Theo Tryfonas, 464–480. Cham: Springer.

[19] Id.

[20] Schneier, Bruce. 2016. Stop trying to fix the user. IEEE Security and Privacy 14 (5): 96.

[21] Amoore, Louise. 2013. The Politics of Possibility: Risk and Security Beyond Probability. Durham, NC: Duke University Press.

[22] Anderson, Chris. 2008. The end of theory: The data deluge makes the scientific method obsolete. Wired, 23 June. https://www.wired.com/2008/06/pb-theory/.

[23] Kitchin, Rob. 2014. Big data, new epistemologies and paradigm shifts. Big Data and Society. https://doi.org/10.1177/2053951714528481.

[24] Parisi, Luciana. 2019. Critical computation: Digital automata and general artificial thinking. Theory, Culture and Society 36 (2): 89–121.

[25] McDonald, Henry. 2019. AI expert calls for end to UK use of ‘racially biased’ algorithms. The Guardian, 12 December. https://www.theguardian.com/technology/2019/dec/12/ai-end-uk-use-racially-biased-algorithms-noel-sharkey.

[26] Collier, Jamie. 2019. A threat intelligence analyst’s guide to today’s sources of bias. Digital Shadows, 5 December. https://www.digitalshadows.com/blog-and-research/a-threat-intelligence-analysts-guide-to-todays-sources-of-bias/.

[27] Huysmans, Jef. 2011. What’s in an act? On security speech acts and little security nothings. Security Dialogue 42 (4–5): 371–383.

[28] Aradau, Claudia, and Tobias Blanke. 2018. Governing others: Anomaly and the algorithmic subject of security. European Journal of International Security 3 (1): 1–21.

[29] Id.

[30] Id., 20.

[31] Id., 21.

[32] Von Hilgers, Philipp. 2011. The history of the black box: The clash of a thing and its concept. Cultural Politics 7 (1): 41–58.

[33] Castelvecchi, Davide. 2016. Can we open the black box of AI? Nature 538: 20–23.

[34] Jensen, Benjamin M., Christopher Whyte, and Scott Cuomo. 2019. Algorithms at war: The promise, perils, and limits of artificial intelligence. International Studies Review. https://doi.org/10.1093/isr/viz025.
Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W.W. Norton.

[35] Jasper, Scott E. 2017. US cyber threat intelligence sharing frameworks. International Journal of Intelligence and Counterintelligence 30 (1): 53–65.

[36] Srnicek, Nick. 2014. Cognitive assemblages and the production of knowledge. In Reassembling International Theory: Assemblage Thinking and International Relations, ed. Michele Acuto and Simon Curtis, 40–47. Basingstoke: Palgrave Macmillan.

[37] Id.
Hayles, N. Katherine. 2016. Cognitive assemblages: Technical agency and human interactions. Critical Inquiry 43 (1): 32–55.

[38] Kaufmann, Mareile. 2019. Who connects the dots? Agents and agency in predictive policing. In Technologies and Agency in International Relations, ed. Marijn Hoijtink and Matthias Leese, 141–163. London: Routledge.

[39] Weber, Jutta, and Lucy Suchman. 2016. Human-machine autonomies. In Autonomous Weapons Systems: Law, Ethics, Policy, ed. Nehal Bhuta, Susanne Beck, Robin Geiß, Hin-Yan Liu, and Claus Kreß, 75–102. Cambridge: Cambridge University Press.

[40] Pickering, Andrew. 2010. Material culture and the dance of agency. In The Oxford Handbook of Material Culture Studies, ed. Dan Hicks and Mary C. Beaudry, 191–208. Oxford: Oxford University Press.

[41] Kitchin, Rob. 2017. Thinking critically about and researching algorithms. Information, Communication and Society 20 (1): 14–29.
Berry, David M. 2019. Against infrasomatization: Towards a critical theory of algorithms. In Data Politics: Worlds, Subjects, Rights, ed. Didier Bigo, Engen Isin, and Evelyn Ruppert, 43–63. London: Routledge.

[42] Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.

[43] Matsakis, Louise. 2019. How the West got China’s social credit system wrong. Wired, 29 July. https://www.wired.com/story/china-social-credit-score-system/.

[44] Bauman, Zygmunt, Didier Bigo, Paulo Esteves, Elspeth Guild, Vivienne Jabri, David Lyon, and R.B.J. Walker. 2014. After Snowden: Rethinking the impact of surveillance. International Political Sociology 8 (2): 121–144.

[45] Woolley, Samuel C., and Philip N. Howard (eds.). 2018. Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. New York: Oxford University Press.

[46] Collins, Jason. 2019. Principles for the application of human intelligence. Behavioral Scientist, 30 September. https://behavioralscientist.org/principles-for-the-application-of-human-intelligence/.

[47] Aradau, Claudia, and Tobias Blanke. 2018. Governing others: Anomaly and the algorithmic subject of security. European Journal of International Security 3 (1): 1–21.

[48] Lewis, Larry. 2019. Resolving the battle over artificial intelligence in war. RUSI Journal 164 (5–6): 62–71.

[49] Payne, Kenneth. 2018. Artificial intelligence: A revolution in strategic affairs? Survival 60 (5): 7–32.

[50] Congressional Research Service. 2019. Artificial Intelligence and National Security. Washington, DC. https://crsreports.congress.gov/product/pdf/R/R45178. Accessed 8 Jan 2020.

[51] Corrigan, Jack. 2019. Pentagon, NSA prepare to train AI-powered cyber defenses. Defense One, 4 September. https://www.defenseone.com/technology/2019/09/pentagon-nsa-laying-groundwork-ai-powered-cyber-defenses/159650/.

[52] Ministry of Defence. 2018. Human–Machine Teaming. Joint Concept Note 1/18. Shrivenham: Development, Concepts and Doctrine Centre.

[53] Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W.W. Norton.

[54] Coker, Christopher. 2013. Warrior Geeks: How 21st Century Technology is Changing the Way We Fight and Think About War. London: Hurst.

[55] Dear, Keith. 2019. Artificial intelligence and decision-making. RUSI Journal 164 (5–6): 18–25.

[56] Healey, Jason. 2019. The implications of persistent (and permanent) engagement in cyberspace. Journal of Cybersecurity. https://doi.org/10.1093/cybsec/tyz008.

[57] Amoore, Louise, and Marieke de Goede. 2008. Transactions after 9/11: the banal face of the preemptive strike. Transactions of the Institute of British Geographers 33 (2): 173–185.

[58] Mumford, Lewis. 1964. The automation of knowledge. AV Communication Review 12 (3): 261–276.
