Introduction
Hostile cyber operations have been growing exponentially in both number and sophistication, ranging from those conducted by non-state actors to state-backed cyber attacks. Two of the most high-profile operations in recent years were the WannaCry ransomware, which affected more than 200,000 computers in 150 countries, and NotPetya, considered the costliest cyber attack in history with an estimated loss of 10 billion dollars.Footnote 1 In both incidents, self-propagating malicious software (malware) spread automatically among information systems, encrypting data on vulnerable computers and demanding ransom payments. As such, both incidents raised scholarly and policy concerns about the unintended consequences of cyber attacks. Although WannaCry was not specifically targeted at the health sector, the National Health Service (NHS) in the UK was significantly hit by it, leading to widespread discussions on the importance and complexities of cybersecurity in healthcare.Footnote 2 NotPetya, on the other hand, was not primarily financially motivated; it was allegedly launched by the Russian government in order to disrupt financial, energy, and government institutions in Ukraine.Footnote 3 As argued by Ciaran Martin, the former head of the UK's National Cyber Security Centre (NCSC): ‘WannaCry and NotPetya were deliberate attacks, but their impact on the UK and allied countries was accidental. So the two biggest incidents that we faced early on [at the NCSC] were both basically accidents.’Footnote 4 These two examples thus showcase the significance of autonomous cyber attacks that, even if targeted, may spread in ways unpredictable to their initiators, causing haphazard disruption at scale and putting individuals and businesses at the forefront of geopolitical conflicts.Footnote 5
Importantly, these and other examples challenge the anthropocentric theoretical approaches to the study of cybersecurity, which tie the capacity to act to human subjectivity and overlook the role of the non-human in co-constructing its own (in)security. This anthropocentrism is reflected in the considerable body of literature that approaches the sociopolitical construction of cybersecurity as a function of human discourses and threat representations through linguistic and discursive analysis.Footnote 6 Although these studies generate a conceptually far more sophisticated approach than the overwhelmingly policy-oriented cybersecurity literature, they do not sufficiently capture the non-discursive materiality of cyber incidents that goes beyond human agency and rhetoric. If security is assumed to be discursively constructed, and if discourse is a function of the human actor, then the ability to act and influence security ultimately resides in humans. Such an approach is insufficient for studying the complex operations of cyber incidents in which the agency of malware challenges human intentionality and control, and ultimately produces unpredictable and unintended consequences, as in the cases of WannaCry and NotPetya. To fill this gap, more recent scholarly contributions have shifted from human discourses towards a study of materiality in analysing cybersecurity practices.Footnote 7 They investigate the complex configurations of cybersecurity assemblages through which cybersecurity is ‘made’,Footnote 8 as well as the performative influences of cyber incidents as actants or political agents per se.Footnote 9
This article contributes to the emerging literature that engages with the materialities of cybersecurity by analysing the agency of information in co-producing the logics and politics of cybersecurity. It focuses specifically on codes/software, as syntactic manifestations of information, and conceptualises them as informational agents with generative and agential properties that go beyond mere instrumentalisation in the construction of cybersecurity.Footnote 10 In so doing, the article explores how, even if initially given their agency by humans, codes/software can subsequently change such agency in execution and also lend agential roles back to both humans and material objects. Building upon recent scholarship on the philosophy of information and software studies, the article investigates the complexities of codes/software, their self-organising capacities, and their autonomous properties to develop an understanding of cybersecurity as emergent security. Emergence is a key concept in complexity theory and the study of self-organising systems. It illuminates the inherent unpredictability of complex informational systems and the elements of novelty associated with their operations. The article introduces ‘emergence’ as a non-linear logic that captures the agential capacities of information and the uncertainties they engender in cybersecurity. As will be shown, the logic of emergence and emergent security challenge the idea of human control in cybersecurity in two ways: by undermining the centrality of human intentionality as a basis for constructing enmity, and by acknowledging the role of codes/software in co-producing the subjects/objects of cybersecurity.
To substantiate these arguments, the article unfolds in three sections. The first section problematises the concept of agency in the theoretical cybersecurity literature, particularly those strands that apply discursive and linguistic approaches. It explains the article's contribution to the new materialist and posthumanist debates on the agency of non-human ‘things’ in critical security studies by arguing that information too – specifically in its syntactic manifestation of codes/software – possesses a distinctive agency. The second section examines the conceptualisation of agency in technical literature, such as software studies and computer science, to distinguish the agency of codes/software from that of ordinary matter or other non-human things. It goes on to analyse the agential capacities of codes/software, of which malware, the ultimate cyber weapon, is a prominent example, and demonstrates elements of autonomy and unpredictability in their operation. Based on an understanding of the peculiar agential capacities of codes/software, the third section conceptualises cybersecurity as emergent security, in which codes/software influence human actancy and agency. It particularly analyses this logic of emergence in light of the construction of enmity and the co-production of subjects and objects in cybersecurity discourses and practices, and explains what this non-human reading does to the politics of cybersecurity.
1. Discourse, materiality, and the non-human in cybersecurity
Technologies and sociotechnical structures are often instrumentalised, and their agency is reduced to the passive mediation of human subjectivity and the immaterial representation of human desires. They are frequently viewed in utopian terms when they obey human orders and perform the tasks they are designed for, and in dystopian terms when they do not.Footnote 11 In cybersecurity research, this instrumentalisation can be found primarily in studies that employ discursive and linguistic approaches to theorising cybersecurity. A prominent example in this regard is the literature that builds upon the Copenhagen School's securitisation theory in studying cybersecurity as a speech act or a discourse, in which a human securitising actor presents a threat as existential to a particular referent object and thus as requiring emergency measures to ensure that object's survival.Footnote 12 Although such studies originally focused on the US as a case study, more recent contributions call for applying securitisation theory to cybersecurity in ‘the non-West’,Footnote 13 such as Singapore, Japan, and Egypt.Footnote 14
Related to the cyber securitisation literature is a wide range of studies that use a discursive methodology to explore how cybersecurity discourses, utterances, and threat representations differ from those of other sectors. They note that cybersecurity discourses operate in the absence of a minimum level of agreement on the nature of threats, and sometimes with no empirical evidence of attacks to justify them. That is why such discourses rely mostly on symbolisation, drawing comparisons between cyber threats and conventional ones.Footnote 15 Added to this are the biologisation of technology and the use of ‘virus’ and ‘worm’ metaphors;Footnote 16 the spatial analogies of cyber ‘space’;Footnote 17 and the use of fear-based hypothetical cyber-doom scenarios.Footnote 18 One notable contribution in this regard is Lene Hansen and Helen Nissenbaum's theorisation of cybersecurity as a distinct security sector with its own unique ‘security grammars’.Footnote 19
All such contributions are important for establishing a dialogue between the relatively new field of cybersecurity and the long-established theories of security, particularly given that, to date, the majority of the cybersecurity literature remains policy-oriented in nature and tends to be conceptually under-theorised.Footnote 20 However, focusing only on speech acts overlooks questions of materiality and agency. As put by Daniel Miller, ‘things that people make, make people’.Footnote 21 Though technological artefacts are human-made, they are capable of evolving in ways not necessarily envisioned by their creators, and of influencing all aspects of human life, including security experiences and practices. The historical rise of information technologies and information sciences has, in fact, long challenged the centrality of human agency and enabled a discussion of the agential capacities of machines and other non-human ‘things’. While such debates on technology and agency are becoming more prevalent in other fields of security,Footnote 22 they are not adequately reflected in the study of cybersecurity; that is, the security of information technology per se and the construction of its logic(s).
There are several material realities regarding the nature of computer disruptions, their effects, and knowledge about them in technical communities that cannot be understood as part of discursive constructions alone.Footnote 23 These include, for example, the exponential rise in the number of cyber operations conducted through self-replicating malware that enjoys a considerable level of autonomy in execution, as in the WannaCry and NotPetya examples mentioned earlier. In fact, the very idea of computer viruses and worms – which constitute ‘the cyber weapon’ – is an exemplar of how information systems are capable of deviating from the human intentionality embedded in their design, since they were not initially designed to be used maliciously. Cyber incidents caused by malware are, in turn, major challenges to the ideas of control upon which computing technologies were based. As argued by Thomas Rid, this is the dystopia of the promises of ‘cybernated’ economies, cyborgs, and cyberspace as a new parallel frontier to reality. Now, control over machines can be taken from humans, systems can be attacked and controlled remotely, and various forms of damage can result: data loss, abuse, denial of service, or even physical damage to machines.Footnote 24
Accordingly, a recent strand of literature has shifted towards a critical engagement with the politics of cybersecurity to unpack its materialities beyond rhetoric and linguistic representations. These studies approach cybersecurity as an assemblage that comprises a complex configuration of cybersecurity actors and therefore transcends the state-centric focus of linguistic approaches.Footnote 25 This includes, for example, the various alliances that make complex malware successful, be they conceptual (for example, ideas about the role of the state or cybersecurity firms); material (for example, hardware, buildings, and centrifuges); social (for example, the links between hackers, programmers, and operators); or textual (for example, news reports and coverage).Footnote 26 Cybersecurity assemblages can also be seen in analyses of attribution as a ‘knowledge creation process’ performed by constantly shifting networks of actors that establish ‘truths’ about cyber incidents.Footnote 27 The role of practices in co-producing cybersecurity is also highlighted in some literature through, for example, the study of ‘transgressive practices’ by security communities that co-constitute states’ defensive strategies.Footnote 28
This article contributes to the study of the materialities of cybersecurity by advancing the discussion on non-human agency, specifically in relation to the newly emerging literature that approaches cyber incidents and malware as ‘political actors’,Footnote 29 capable of co-constructing the ‘space’ in cyberspace,Footnote 30 and therefore co-shaping ‘the conditions of possibility for cybersecurity politics’.Footnote 31 As such, the article argues that there is more to the limitations of discursive analyses of security than their disregard of contextual influences and non-state actors. Specifically, the article criticises the anthropocentric theorisation of agency in the cybersecurity literature; that is, tying the capacity to act to human subjectivity and disregarding the role of the non-human in co-constructing security. That is to say, the sociopolitical construction or the ‘making’ of cybersecurity cannot be sufficiently analysed if all the subjects of analysis are human and if non-human things are considered only as ‘facilitating conditions’ that lie outside the realm of agency.Footnote 32 Capturing the complexities and materialities of cybersecurity necessitates an analysis of information, particularly in its syntactic form of codes/software, not as a mere tool for human actors, but as an active actant in its own right.
New materialism and ‘information’ that matters
Language matters. Discourse matters. Culture matters. There is an important sense in which the only thing that does not seem to matter anymore is matter.Footnote 33
The idea that non-human entities like information and its syntactic manifestation in codes/software can be actants with agential influences on the construction of security coincides with what is often called ‘the material turn’, ‘the non-human turn’, ‘thing studies’, ‘posthumanism’, or ‘new materialism’ in the social sciences. This so-called turn has produced new philosophical paradigms, such as object-oriented ontology,Footnote 34 vital materialism,Footnote 35 agential realism,Footnote 36 and actor-network theory,Footnote 37 that challenge the binary division of the world into human subjects and non-human objects. All such contributions share a critical view of anthropocentrism, but differ in the extent to which they move beyond this dominance in articulating the relationship between humans and non-humans.Footnote 38 They theorise matter as an active, politically significant force that has meaning beyond social, political, or economic structures, and agency that transcends the politics of representation. Through this lens, the concept of agency in International Relations and Security Studies can be problematised.
In International Relations and Security Studies, the new materialist and posthumanist approaches represent a criticism of the inadequacy of existing ontological and epistemological perspectives for capturing non-human agency, be it that of machines, animals, bacteria, or the environment. For many years in the discipline, agency was tied to the human subject, and the capacity to act was linked to cognition, intentionality, desires, and decision-making – qualities regarded as exclusive to humans.Footnote 39 Similarly, despite attempts by critical security studies (CSS) to widen security to include actors other than the state, actancy remained limited to humans and human collectivities. If threats are conceptualised as manifestations of suffering, and if the ability to express such suffering is a function of humans, then security is tied to human subjectivity.Footnote 40 However, a strand of research focusing on materiality and non-human agency has gained momentum in recent years. Examples include analyses of the materiality of critical infrastructure,Footnote 41 borders,Footnote 42 emotions,Footnote 43 and weapons,Footnote 44 among others.
Indeed, information technologies and the evolution of cybernetics – introduced by Norbert Wiener in the 1940s as the science of control and communication in the animal and the machineFootnote 45 – had a major impact on the evolution of thinking about human and non-human agency. Approached genealogically, posthumanism can be traced to the Macy Conferences on Cybernetics that took place between 1946 and 1953.Footnote 46 In these conferences, the human subject was decentred in relation to other objects, and in particular to information.Footnote 47 Cybernetics brought forward an analysis of information as a free-flowing entity among biological and non-biological systems, which opened the way towards blurring the lines between humans and machines. Both humans and machines were seen as autonomous and goal-directed entities; an idea that challenges the humanist subject. As one of its pioneers, Ross Ashby, argued, cybernetics is not concerned with ‘what is this thing?’ but rather asks ‘what does it do?’.Footnote 48
Moreover, with increased digitisation and advancements in artificial intelligence (AI) and robotics, the level of control humans maintain over machines has come to be widely challenged. Such developments form the basis on which some futuristic transhumanist approaches in the social sciences conceptualise the ‘posthuman’ and problematise ‘human’ as a category. From their perspective, humans are undergoing a process of evolutionary transformation towards becoming posthuman; that is, being replaced, outpaced, and outsmarted by the technological non-human. They contend that technologies are growing autonomously beyond human comprehension or control.Footnote 49 For example, several studies on cyborgs, brain-computer interfaces, and biomedical engineering argue that the human body has been transformed as a result of ubiquitous technological developments, whether through upgrade, enhancement, extension, or invasion.Footnote 50
However, acknowledging the autonomy of technology and machines beyond human subjectivity does not mean that they have taken over agency. Unlike the techno-reductionist views of transhumanism, the concept of agency can be problematised without resorting to biological, evolutionary, and hypothetical scenarios that assume humans are transforming into something else or being replaced entirely by the non-human. Rather, technology can be approached as one factor among many in breaking the binary division between humans and non-humans. This leads to what is often called a ‘flat ontology’, in which the traditional separation between the human and the technological non-human is blurred, thus necessitating a study of materiality.Footnote 51
In technology, software, and media studies, the term ‘materiality’ is used in abundance, though rarely defined. In some instances, physicality is viewed as a defining characteristic of matter, and therefore some studies refrain from using the term ‘materiality’ when discussing the properties of software or data, for example, and use words like ‘stuff’ instead.Footnote 52 On the other hand, some STS literature uses materiality to denote the social conditions that surround the development of technology and scientific discoveries; what could be described as ‘the materiality of practice’. Similarly, in media studies, the materiality of context is discussed in terms of the political economy or geographical considerations of media development, in addition to questions of ownership, control, reach, etc.Footnote 53 Nevertheless, acknowledging the vitality of non-human objects has led to an increasing focus on materiality as agency in the study of media infrastructures and digital technologies.Footnote 54 An example of this approach can be seen in studies that attend to the cultural role of information and ‘digital goods’ and their symbolic weight in material cultures, as well as in the study of how information and digital networks are shaping space.Footnote 55
This article studies the materiality of cybersecurity by analysing the agency of information as an entity that matters. Although there is no single definition of information, it can generally be divided into three categories that speak directly to the field of cybersecurity: syntactic information, in the form of signs, signals, or bits; semantic information, or the meanings conveyed through those bits and signals; and pragmatic information, when the meanings and ideas conveyed are new to someone.Footnote 56 While all three categories are central to cybersecurity, this article focuses specifically on syntactic information in the form of codes/software because they are considered the ‘centre of gravity in cybersecurity’; a fundamental quality that distinguishes cyber threats from conventional ones. All cyber threats must pass through the syntactic layer to qualify as such; that is, they must originate from code alterations or the use of malicious code.Footnote 57
In the next sections, to counter the anthropocentrism in the cybersecurity literature, the article gives more weight in the analysis to codes/software, albeit without suggesting that their agency supersedes or replaces human agency. In her introduction to vital materialism, for example, Jane Bennett says:
I will emphasize, even overemphasize, the agentic contributions of nonhuman forces (operating in nature, in the human body, and in human artifacts) in an attempt to counter the narcissistic reflex of human language and thought. We need to cultivate a bit of anthropomorphism – the idea that human agency has some echoes in nonhuman nature – to counter the narcissism of humans in charge of the world.Footnote 58
Similarly, the article theorises syntactic information (that is, codes/software) as generative and productive of the meaning of cybersecurity, alongside humans and in interaction with them, even though it focuses more on codes/software as such. This non-human reading of cybersecurity is accomplished by establishing a dialogue between the newly emerging fields of the philosophy of information and software studies on one side, and critical security studies (CSS) and cybersecurity studies on the other. This dialogue is essential for any attempt to theorise cybersecurity in CSS, given the close links that the philosophy of information and software studies have with the sciences and technologies constitutive of the ‘cyber’. As argued by Rocco Bellanova, Katja Lindskov Jacobsen, and Linda Monsees, security studies need to ‘take the trouble’ of transcending disciplinary boundaries in order to understand the role of technologies through new frameworks of analysis.Footnote 59 Transcending disciplinary boundaries is particularly important for theorising non-human agency in cybersecurity given that, as argued by Tim Stevens, new materialisms have not sufficiently incorporated ‘information’ as an entity in their ‘conceptual schema’ and have therefore not engaged with the important debates in the philosophy of information on the ontology of information in relation to the ‘matter’ that new materialism theorises.Footnote 60
Additionally, the article takes the argument on the centrality of non-human agency in CSS a step further by delving deeper into the agency of information as a peculiar kind of agency. In emphasising the agency of matter, many strands of new materialism and posthumanism include all non-human things in their entirety without distinguishing between the agential capacities they possess. In addition, some approaches adopt a relational ontology according to which an object is only real if it has an effect on other objects. Bruno Latour's actor-network theory, for example, assumes that there is no force embedded in objects as such beyond their relations with other objects, from which they acquire agency.Footnote 61 Nevertheless, it could be argued that ‘all things equally exist, yet they do not exist equally’.Footnote 62 Non-human things are not all of one type and do not exercise the same form of agency.Footnote 63 Further, as noted by Graham Harman, flat ontology should not be an end in itself: it is not enough to reject the position of humans at the centre of ontology. Rather, the analysis should extend to investigating the distinctive features and powers possessed by different non-human entities beyond their relations with other objects.Footnote 64 On that basis, the article argues that if all matter in all security sectors has agency, cybersecurity is distinguished by the peculiar agency of codes/software as informational agents. That is to say, all matter matters, but codes/software matter differently – as will be shown next.
2. An informational account of agency
It is from this rich and complex ferment of information that the concept of agency emerges.Footnote 65
The concept of agency has always been central to the theorisation of information, even if not named as such. In one of its commonly used definitions, information is conceptualised as ‘the difference that makes a difference’ or the ‘distinction that makes a difference’, in line with the Latin origin of the word, informare, meaning to ‘shape’ or ‘form’. This definition indicates that information always has a purpose and seeks to achieve a particular change or transformation.Footnote 66 Just as energy heats up matter, information can be added to matter to change it or give it form and structure.Footnote 67 Therefore, some theorists argue that what distinguishes our planet is the concentration of information intrinsic to its existence: even if other parts of the universe have more matter or energy than Earth, none has more information.Footnote 68 Consequently, information has a strong relationship with causation. Some studies contend that all causal links are inherently informational, because the idea of causation itself concerns the transfer of a quantity of information between two or more states of a particular system.Footnote 69
This capacity of information to do things – be it ordering, change, or causation – is the basis of many informational approaches to cosmology and evolution. According to such approaches, evolution is a complex process of information exchange,Footnote 70 in which information specifies what things should do.Footnote 71 They argue that if the question of life is in essence a question of physics, then it is ultimately about the information that physical systems possess and the transitions in the informational structure of matter.Footnote 72 This idea also connects to Wiener's argument: ‘Information is information, not matter or energy. No materialism which does not admit this can survive at the present day.’Footnote 73 Although it is not clear what Wiener meant by ‘information is information’, it can be inferred that he regarded information as ‘autonomous’; something that has a distinct structure.Footnote 74
This belief in the ontological primacy of information has echoes in some empirical studies that analyse the transformation of military conflicts in the information age. For instance, John Arquilla and David Ronfeldt – who wrote the influential paper ‘Cyberwar is Coming!’Footnote 75 – view information as ‘an essential part of all matter’, as fundamental to the world as matter and energy. Consequently, in their view, information ‘should be treated as a basic, underlying and overarching dynamic of all theory and practice about warfare in the information-age’.Footnote 76 Similarly, Myriam Dunn Cavelty and Elgin M. Brunner argue that information is the major source of power, both in its material form of computers and infrastructure and in the ‘immaterial realm’ of codes. As they put it: ‘Information becomes a weapon, a myth, a metaphor, a force multiplier, an edge and a trope – and the single most significant military factor.’Footnote 77 Likewise, in his discussion of ‘network-centric warfare’, Michael Dillon contends that ‘information is the prime mover in military as in every other aspect of human affairs, the basic constituent of all matter’.Footnote 78
The argument that information has agency – and a peculiar agency when compared with matter or energy – can be made clearer by looking at how the ICT, digital information, and software engineering literatures define the concept of agency. As mentioned earlier, many posthumanist literatures derived their main ideas and assumptions about the agency of non-human things from cybernetics. Cybernetics introduced a behaviouristic conceptualisation of agency by focusing on the external, goal-oriented, and purposive behaviour of entities rather than their internal properties. Alan Turing's famous question ‘Can a machine think?’, posed in his 1950 paper ‘Computing Machinery and Intelligence’, is still under discussion to this day.Footnote 79 Whether a computer or a piece of software has consciousness, can actually think, or even has emotional intelligence remains an open question that reflects the increasingly blurred lines between human and non-human agency in informational settings.Footnote 80
If information in general is the difference that makes a difference, codes/software, as the syntactic manifestation of information, can be seen as an ‘organised array of differences’.Footnote 81 Although not all codes/software can be described as ‘intelligent’ agents in the same manner as AI, they nevertheless remain purposeful. That is why technical conceptualisations of agency and agents, particularly in the computer science literature, often go beyond the ability simply to do or act, towards human-like characteristics that ordinary matter hardly possesses. Among the most important of these attributes is autonomy. Traditionally, autonomous agency was considered a characteristic of living beings, through which they maintain their survival. Yet the evolution of information systems has shown that autonomy cannot be exclusive to living beings or humans. In some computer science literature, autonomous agency is defined as an ‘autocatalytic system’ that can detect, measure, and constrain energy. This demands nuanced intelligent choices, or the ability to choose among various courses of behaviour in a way that is sensitive to the surrounding environment.Footnote 82 Autonomous agency thus requires an element of rationality: the existence of desires on the part of the agent and an ability to act in its best interest.
Another agential property is reactivity, or the ability of a system to react to its environment, interact with other human and non-human agents, and adapt its behaviour in response. This is linked to the agent's proactivity: being able to take initiative rather than merely reacting to changes in the external environment.Footnote 83 Proactive behaviour is primarily goal-oriented and requires a minimum degree of intelligence that allows the informational agent to understand its internal and external environment and to adapt its behaviour based on such knowledge. It should have an inferential capability through which it uses existing knowledge to work on abstract tasks.Footnote 84 In addition, it should be mobile, able to navigate different systems and networks flexibly, while possessing human-like traits such as reliability and trustworthiness. These capabilities, nonetheless, are not enjoyed by all informational agents equally: the more complex an agent is, the more of these properties it possesses, and vice versa.Footnote 85
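The properties discussed above – goal-directedness, reactivity to an environment, and choice among courses of behaviour – can be rendered as a minimal sketch of the agent loop familiar from the computer science literature on software agents. The sketch below is purely illustrative; the names (SimpleAgent, perceive, act) are hypothetical and not drawn from any particular library or from the works cited here:

```python
# Minimal sketch of a goal-directed, reactive software agent.
# All names are illustrative; this is not a real agent framework.

class SimpleAgent:
    def __init__(self, goal_temp):
        # Proactivity: the agent carries an internal goal state.
        self.goal_temp = goal_temp

    def perceive(self, environment):
        # Reactivity: the agent reads the state of its environment.
        return environment["temperature"]

    def act(self, environment):
        # The agent 'chooses' among courses of behaviour in a way
        # that is sensitive to the surrounding environment.
        temp = self.perceive(environment)
        if temp < self.goal_temp:
            return "heat"
        if temp > self.goal_temp:
            return "cool"
        return "idle"

agent = SimpleAgent(goal_temp=21)
print(agent.act({"temperature": 18}))  # → heat
```

Even in so simple a sketch, the behaviouristic conception of agency is visible: what matters is not what the agent is, but what it does in response to its environment.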
The criteria against which the agential capacities of informational agents are measured in these disciplines are one important manifestation of how their agency differs fundamentally from that of other things. For many computer science scholars, a powerful conceptualisation of agency is one in which the properties of informational agents are ‘conceptualised or implemented using concepts that are more usually applied to humans’.Footnote 86 As summarised by one study: ‘Agents are unlike other artefacts of society in that they have some level of intelligence, some form of self-initiated, self-determined goals.’Footnote 87 This is one reason why such literature uses the concept of agents, rather than objects, in discussing information systems: objects are seen as entities that do not have choices of action and cannot make decisions, while informational agents do and can.
The agential capacities of codes/software
…software is somewhat excessive and vexed. It overflows its own context and creates new contexts. In many instances software is so complicated, so distributed and convoluted in architecture that it defeats comparison with any other technical object.Footnote 88
Studying the actions of codes/software and their operation is ultimately a study of agency. As Adrian Mackenzie argues, ‘code is agency-saturated’.Footnote 89 Even if it is primarily a textual entity, code is more than a ‘medium of description’; it is a ‘medium of execution’. This executability and inherent causal power of codes/software is a main property that distinguishes their agency from that of other artefacts.Footnote 90 For example, although ‘art-like objects’ usually have a human recipient, this is not necessarily the case for codes/software. Sometimes the recipients of codes/software are other machines or software, which can in turn generate code.Footnote 91 In cybersecurity, malicious software (malware) is targeted at particular vulnerabilities (exploitable coding errors) in the adversary's system, not at humans. Additionally, though they inhabit micro-spaces, codes/software are agents for the ‘automatic production of space’.Footnote 92 Malware, as argued by Balzacq and Dunn Cavelty, is capable of co-constructing spatiality by circulating within multiple spaces that cross sovereign boundaries.Footnote 93
Even if written by humans, once embedded in a digital machine, codes/software start operating automatically, telling that machine what to do or not to do. Here, the machine, not the human, can be considered the ‘final arbiter’ in operating code.Footnote 94 In many tasks, from simply logging on to the Internet onwards, codes/software act autonomously and react to inputs and outputs automatically, often with no direct human intervention.Footnote 95 Machines now automatically exchange data, use electronic sensors, update themselves, produce predictions and warnings, control traffic lights, authorise payment cards, open and close doors, and so on.Footnote 96 As a result, codes/software have become more malleable, flexible, adaptable, and interactive with the outside world than other technologies and ‘material artefacts’.Footnote 97
The relative autonomy of codes/software can make them unpredictable and allow them to escape the span of human control. This unpredictability begins with the way they are produced. Codes/software are rarely developed by a single person; they are usually engineered within large projects in which many programmers with varying levels of skill and knowledge participate. The result is a very complex piece of codes/software that no single programmer can claim to fully understand.Footnote 98 What is more, in most cases codes/software are engineered through a process of trial and error: they are left to run and have a life of their own while being tested and improved along the way. For this reason, codes/software are mainly engineered rather than designed, since they do not always follow what programmers dictate. Programmers thus have an almost ‘ignorant expertise’ in dealing with the codes/software they helped produce.Footnote 99
Hence, although codes/software can be considered a human bid to control the digital, they maintain sovereignty over execution through self-enforceability. Their operation is never linear; they usually undergo deviations and self-modification in execution. This is what one author described as ‘code drift’: the many unplanned consequences, fluctuations, and transformations that occur in the operation of codes/software.Footnote 100 Added to this, programming is done in standardised, formalised, software-enabled languages that facilitate the process of writing code. This involves many abstractions that hide details deemed unnecessary for the programming process. Although these abstractions make the job of programmers much easier, they also reduce their knowledge of, and power over, the codes they write. As one study argues, automatic programming ‘is both an acquisition of greater control and freedom, and a fundamental loss of them’.Footnote 101 This is magnified for ordinary users, who normally have no comprehension of internal codes and algorithmic processes beyond the graphical interfaces they interact with. Such interfaces give the human user an illusion of control and an imaginary of a ‘sovereign executive’, when in fact they perpetuate users’ ignorance.Footnote 102 As put by Clare Stevens, ‘Malware and coding are materials that exceed human capacities to sense or understand them, so that they do not present themselves to us in unmediated fashions.’Footnote 103
Acknowledging that agency in informational systems is distributed among humans and non-humans raises an important question: if both humans and codes/software are agential, where does the line of responsibility lie?Footnote 104 This question primarily challenges the liberal-modernist understanding of humans as the sole agents who establish causalities through intentional decision-making.Footnote 105 This paradox is gaining momentum in cybersecurity practices too, with the increasing use of AI and machine learning. It is estimated that the AI market in cybersecurity will grow from $1 billion in 2016 to $34.8 billion in 2025.Footnote 106 AI adds an operational advantage to cybersecurity strategies, since it can overcome the limited cognitive abilities of humans to handle huge amounts of data. Not only is AI growing in use in defence practices; experts also predict that new types of cyber incidents are likely to appear in the future, given AI's capability of transcending what humans may consider impractical, such as labour-intensive spear-phishing operations.Footnote 107 However, the complexity, uncertainty, and lack of transparency associated with anomaly-based AI technologies in cybersecurity raise questions about agency and decision-making between the human and the algorithm.Footnote 108 Because AI systems are inherently dynamic, understanding their operation and explaining their outcomes is no easy task. That is why detection becomes difficult when AI systems are attacked: reverse-engineering their operation to determine whether a given outcome is the result of an attack is quite challenging.Footnote 109 As Tim Stevens argues, the use of AI in cybersecurity poses a key question: ‘Where is agency in the new cybersecurity assemblage and who or what makes the decisions that matter?’Footnote 110
It is important to note here that the article is not suggesting that it is impossible for humans to unpack the complexity of codes/software, or that ordinary users cannot develop an understanding of their behaviour. Nor does it deny that codes/software are in fact written by humans and therefore cannot be studied in isolation from what humans wanted them to do. The aim of this section is rather to show how the operation of informational systems and codes/software can challenge the idea of an in-control human in ways that other types of non-human things cannot. As such, the agency of codes/software assumed here is not one of intentionality, consciousness, or free will, but an agency of influence. Even if the aforementioned code drift is the result of a human coding error, its consequences influence humans in ways they did not necessarily envision or anticipate. The relative autonomy, reactivity, and proactivity of codes/software outlined above are what make their agency and influence on human and non-human things peculiar.
This agency of codes/software is magnified when they are used maliciously. Malware are a special kind of codes/software. The most peculiar property of viruses and worms is not their maliciousness (they are not malicious, per se) but rather their ability to copy themselves automatically, as ‘self-reproducing automata’.Footnote 111 Whereas viruses require human action to activate them, such as clicking a link or opening a file, worms have the capacity to self-propagate across devices without any such human intervention. Malware are also capable of performing multiple self-preservation techniques to avoid detection and elimination. One is stealthing, through which they hide their presence and make themselves difficult to detect, for example by slowing down their operation or presenting a fake clean image of an infected file to an anti-virus program. Polymorphism is another self-preservation technique, by which malware dynamically change their base code every time they run while retaining the same functionality. A step further is metamorphism, in which malware rewrite their code as they propagate across different systems.Footnote 112 In short, malware are inherently active; they are constantly doing something or spreading somewhere, ‘almost like living’.Footnote 113 The relative unpredictability of malware can defy human control, even though they operate through rationally predefined codes and algorithms.
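The logic of polymorphism can be conveyed in a deliberately harmless form. The toy sketch below is purely illustrative (the payload is an ordinary text string and the names are invented; no real malware technique is reproduced): each ‘generation’ stores the same behaviour under a different byte signature, which is why naive signature-matching fails against polymorphic code.

```python
import random

PAYLOAD = b"same functionality every time"  # illustrative stand-in for behaviour

def mutate(key: int) -> bytes:
    """Re-encode the fixed payload under a given XOR key: the stored
    bytes (and so any naive byte-signature) differ with every key."""
    return bytes(b ^ key for b in PAYLOAD)

def run(key: int, encoded: bytes) -> bytes:
    """'Execute' a generation by decoding it back: behaviour is constant."""
    return bytes(b ^ key for b in encoded)

keys = random.sample(range(256), 3)          # three distinct one-byte keys
generations = [(k, mutate(k)) for k in keys]

print(len({enc for _, enc in generations}))  # 3: every signature differs
print(all(run(k, enc) == PAYLOAD for k, enc in generations))  # True
```

Distinct keys guarantee distinct encodings of the same payload, so the three stored forms never match one another even though each decodes to identical behaviour.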
Self-perpetuating malware and self-modifying codes can produce political and technical consequences that transcend the control of the initiator: they can spread to untargeted systems, be discovered through a coding error, or provoke an overreaction from governments or media that was never intended. As argued by Balzacq and Dunn Cavelty, malware can be approached as mediators with a ‘transformative agency’ detached from the initiator's intent. If objects also enact spaces, malware is co-constitutive of the ‘space’ in cyberspace, and cyber incidents should therefore be analysed within ‘the spaces they build themselves’ by spreading between devices in ways entirely unplanned by their initiators.Footnote 114 This argument has far-reaching implications for security and the assumptions of human control embedded in its logics, as will be explained next.
3. Emergent security: The logic of emergence and human control in cybersecurity
To capture the agential capacities of malware as informational agents in the construction of cybersecurity, the article proposes the logic of emergence. Emergence is a key concept in complexity theory, with links to cybernetics, computer science, and chaos theory. Its central assumption is that a complex system will necessarily produce new, unexpected properties and will end up behaving in unpredictable ways.Footnote 115 As a result of interactions among their diverse parts, the properties of such systems change dynamically and non-linearly, producing emergent rather than resultant behaviour. Emergence and non-linearity are characteristics of self-organising and complex adaptive systems, in which outputs cannot be predicted simply from inputs or from analysing the system's individual parts. The interactions that take place autonomously in these systems lead to emergence.Footnote 116 This notion of emergence counters the reductive assumptions of ‘ontological individualism’ and ideas about humans as the sole agents in the world.Footnote 117 It is a statement against an in-control human with a full capacity to understand and predict surrounding environments.
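A classic toy illustration of this point, offered here only as an analogy rather than as a model of any cybersecurity system, is Conway's Game of Life: each cell follows a trivially simple local rule, yet the grid as a whole produces patterns, such as the travelling ‘glider’, whose behaviour is specified nowhere in that rule and can only be observed as it emerges.

```python
from collections import Counter

def step(cells):
    """One synchronous update of Conway's Game of Life over a set of
    live (x, y) cells: birth on 3 neighbours, survival on 2 or 3."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neighbour_counts.items()
            if n == 3 or (n == 2 and c in cells)}

# A 'glider': five live cells whose global behaviour (steady diagonal
# motion) is nowhere written into the purely local update rule.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

shifted = {(x + 1, y + 1) for (x, y) in glider}
print(cells == shifted)  # True: the same shape, displaced diagonally
```

The motion of the glider is an emergent property in the strict sense used above: it cannot be read off the rule governing any individual cell, only off the interactions of the whole.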
Emergence has a number of characteristics that can be employed in theorising cybersecurity as emergent security. Firstly, emergence is characterised by novelty: new features can appear as a result of dynamic changes and cannot be simply predicted from the existing properties of a system.Footnote 118 Secondly, emergence is contextual and relational: the emergent properties of every system are unique to its particular context and to its interactions with multiple agents.Footnote 119 Thirdly, emergent systems are decentralised, and their parts are not necessarily working towards a particular, unified goal; rather, they adapt and interact with dynamic changes in their environments, producing emergent results for the system as a whole.Footnote 120
Information is closely connected to the idea of self-organisation, as shown earlier. Information systems are inherently ‘self-organising agent-based systems’ that act as autonomous agents: they collect information and act upon it to pursue a certain set of goals, producing a wide range of future possibilities that cannot be easily predicted.Footnote 121 For that reason, the operation of information technologies and information-processing systems can only be described probabilistically, since their future behaviour cannot be accurately predicted. Complex, dynamic, and decentralised information systems with emergent behaviour produce complex, dynamic, and decentralised security with emergent properties. The elements of autonomy and unpredictability in the operation of codes/software described earlier generate a logic of emergence in cybersecurity. To be clear, this does not entirely invalidate human control. Rather, it suggests that the construction of security in cybersecurity is not always subject to the sole agency of humans and their intentionality. The elements of novelty, unpredictability, contextuality, and decentralisation associated with emergence can be found in the co-production of enmity and of the subjects and objects of cybersecurity, as the next two subsections explain.
The subjects and objects of cyber incidents
Cybersecurity is distinguished by its multi-stakeholder nature; it is co-constituted by every single user of digital technologies, from individual citizens to corporations and governments. However, identifying the actors of interest in a given incident, and those responsible for taking the necessary measures to counter an ongoing cyber incident or attack, is not always predefined and can have an emergent nature. Similarly, the choice of security objects in a single incident may not be entirely controlled by the attacker. The subjects and objects of cybersecurity, together with the resulting consequences of a cyber incident, are co-produced by the agency of malware in addition to that of humans.
Firstly, if all software contains bugs (coding errors), malware is distinct in that its ability to self-replicate intensifies its potentially buggy nature. Bugs are more likely to appear in malware because it does not go through the same testing processes as normal software. Further, since malware does not operate in controlled environments, bugs that surface during propagation are difficult to correct, increasing the chances of unintended consequences. Once malware is deployed, it becomes very difficult for the attacker to maintain control over its propagation or to accurately predict its behaviour; it can always affect unintended systems, causing varying degrees of damage. Not knowing in advance exactly which systems the malware will propagate to limits, in most cases, the attacker's ability to test its compatibility with those systems.Footnote 122
Secondly, even if propagation is meant to be limited, in practice that may not be possible, particularly because attacks can hardly be stopped once started. To reach its target faster, the attack needs to spread widely and propagate quickly among non-target systems. A specific algorithm is usually used for target selection, either by simply choosing random IP addresses to infect,Footnote 123 or by targeting neighbouring devices on the same local network as the victim. Once on the target's system, such algorithms can also choose further targets from email address books or DNS servers, among other sources.Footnote 124 This relative independence of malware from their human initiators is one reason why some scholars criticise the use of cyber attacks by states as a purportedly more ethical choice than military attacks, arguing that the unintended and uncontrollable implications of cyber attacks for civilian targets render the argument about their ethical use obsolete.Footnote 125 For the same reasons, some argue that collateral damage in cyber attacks can be even higher than in military attacks.Footnote 126
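The random-scanning variant of target selection described above can likewise be rendered as a harmless simulation. The address range, the set of ‘unpatched’ hosts, and the iteration count below are all invented for the sketch, and nothing touches a real network; the point is only that no human chooses which host is probed next.

```python
import ipaddress
import random

random.seed(7)  # fixed seed so the simulated run is reproducible

# A simulated block of 256 hosts, a few of which are 'unpatched'.
# All addresses here are invented for this illustration.
vulnerable = {"10.0.0.5", "10.0.0.23", "10.0.0.117"}

def random_address() -> str:
    """The blind scanning step: a pseudo-random address is probed,
    with no human (or attacker) selecting it."""
    return str(ipaddress.IPv4Address("10.0.0.0") + random.randrange(256))

infected = set()
for _ in range(5000):        # the scan loop runs with no one in the loop
    host = random_address()
    if host in vulnerable:   # 'propagation': the code, not its author,
        infected.add(host)   # determines which systems end up infected

print(sorted(infected))
```

Even in this toy setting, the membership of `infected` is fixed only by the interplay between the scanning code and the population it happens to meet, mirroring the argument that targeting is co-determined by the malware during propagation.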
Numerous examples demonstrate the inaccuracy of cyber targeting and its unintended consequences. As mentioned earlier, the NotPetya ransomware of 2017 is thought to have been targeted at companies in Ukraine. However, the ransomware's target verification mechanisms did not work properly, and it ended up infecting a large number of targets far beyond Ukraine, including several parts of Western Europe. Another example is an attack that exploited a vulnerability in software called CCleaner: due to a coding error, the attack ended up infecting targets in Slovakia instead of its initial target, South Korea. But perhaps the most notable example in this regard is the Stuxnet worm, now widely believed to have been designed by the US and Israeli governments to target Iranian nuclear centrifuges in 2010. The worm was initially introduced into the targeted system using a USB stick before it started propagating. Stuxnet spread to multiple unintended targets outside Iran, including Germany, China, and even the US itself. This happened despite the worm's high level of sophistication; it is believed to have been designed over many years and to have included limitation methods intended to curb its proliferation. But these anti-propagation measures and complex design did not stop it from producing unintended consequences: though it had a specific target, it spread to more than 100,000 computers in various locations in its original propagation.Footnote 127 A spokesman for Chevron, an American multinational energy corporation hit by Stuxnet, reportedly said upon discovering the malware in the company's systems: ‘I don't think the U.S. government even realized how far it had spread … I think the downside of what they did is going to be far worse than what they actually accomplished.’Footnote 128
Hence, although the humans behind cyber incidents can choose which software/hardware vulnerability to exploit, and in turn which private actor would need to issue patches to stop an attack, much is left to the agency of malware, and many elements of the cybersecurity environment thus become emergent. As Andrew C. Dwyer argues, cyber attacks should not be studied as linear relationships between hackers' intents and resultant impacts, with malware as mere tools.Footnote 129 Even where targeting mechanisms exist, the malware itself co-determines which end-user systems get infected during its propagation. By propagating across machines, malware create a network of cybersecurity actors who are then required to take steps to stop the attack, such as updating their systems to apply the necessary patches. In doing so, malware contribute to an emergent, contextual actancy in every single incident.
For instance, in 2017 the WannaCry ransomware exploited a vulnerability in Microsoft's Windows operating system that allowed remote execution of code that encrypted files on infected systems. The choice of infected targets depended entirely on the agency of the malware during self-propagation, as it scanned for unpatched systems and deployed itself to them. It reportedly infected more than 230,000 systems in 150 countries, among them the National Health Service (NHS) in the UK.Footnote 130 By infecting its systems, the malware put the NHS under the spotlight as a major cybersecurity actor; much of the blame was directed at the organisation for not updating its systems with the patch Microsoft had issued before the attack.Footnote 131 This has also raised scholarly and policy interest in the importance of cybersecurity for healthcare. Here, the malware not only co-produced actancy but was also agential in prioritising the health sector as a referent object at the centre of cybersecurity policies.
This applies not only to big entities but also to individual users, who become influential actors when a particular attack infects their machines, as well as to other objects. An important example here is the Mirai malware, designed to target IoT devices and make them part of a botnet (a network of compromised devices) to launch a distributed denial of service (DDoS) attack in 2016.Footnote 132 Millions of users were unable to connect to various websites as a result of this attack. The Mirai botnet, as argued by Tobias Liebetrau and Kristoffer Kjærgaard Christensen, constituted a ‘dance of agency’ in which malware constantly moved in ways that were not entirely predictable, transforming mundane IoT entities in 164 countries into bots with damaging effects.Footnote 133
These agential capacities of syntactic information thus undermine the idea of an in-control human securitising actor who manages cybersecurity environments. They demonstrate the power of codes/software in co-producing actancy and agency in cybersecurity, which in turn becomes emergent security. Furthermore, such agency creates a liability and responsibility dilemma in cybersecurity that resembles the prominent risk theorist Ulrich Beck's argument about second modernity, whose ‘highly differentiated division of labour’ results in a ‘general complicity’ and a lack of responsibility in the production of risk. As Beck put it, ‘Everyone is cause and effect, and thus non-cause.’Footnote 134 But this dilemma in cybersecurity is not simply a result of modernity; rather, as argued thus far, it is primarily co-produced by the agency of codes/software.
One implication of conceptualising cybersecurity as emergent security is that it problematises governments' ‘active cyber defence’ or ‘defend forward’ strategies. Such operations may include non-disruptive practices, like hacking adversaries' or allies' information systems and maintaining a presence there for intelligence gathering, or disruptive operations, like ‘hacking back’ to recover stolen data.Footnote 135 Acknowledging the agency of codes/software, and the unpredictability and uncertainty of their operation, challenges the idea of human control implicit in such understandings of ‘cyber defence’. As shown above, malware used to target a certain system can spread to untargeted ones, even within the geographical location of the initiator, with several unintended consequences. That is to say, although some states may conduct cyber intrusions with defensive motives in the background, acknowledging the agency of codes/software illuminates the risks of condoning such operations by labelling them ‘defensive’.
Enmity and the attribution dilemma
Establishing an enemy in cybersecurity is a complicated process that reflects not just the agency of humans but also that of codes/software. The elements of novelty, non-linearity, contextuality, and decentralisation manifest themselves in the construction of enmity in multiple ways. Firstly, the agency of malware conditions the centrality of human intent in constructing cyber threats, because hostile intents and aggressors' capabilities are not the only factors deciding the occurrence and success of a cyber attack. For a cyber attack to take place, a vulnerability first has to be identified in the targeted system; and security vulnerabilities are essentially contextual: they vary across systems. Moreover, the implications of a cyber incident depend largely on the target's level of dependency on information systems: the less cyber-dependent the target, the less effective an attack against it, making the impact of such an attack relational too. That is why it is argued that in cybersecurity, ‘offensive capacity correlates with defensive vulnerability.’Footnote 136 Put differently, human intentionality is not enough to launch a cyber attack.
Secondly, cybersecurity is characterised by a high level of asymmetry between actors and their capabilities, which often renders any attribution-specific defence strategy insufficient. As one study argues, ‘Whereas defenders in the physical domain can reasonably assume that petty criminals do not have nuclear weapons and that foreign military powers will not rob the local McDonald's, this same categorical logic does not hold true in cyberspace.’Footnote 137 That is, attack sophistication is not necessarily evidence of state sponsorship. Added to that, determining the cyber capabilities of a given actor is often a matter of speculation more than knowledge. Unlike military arms, the non-physicality of offensive cyber tools makes them almost unobservable, unquantifiable, and, in most cases, unrecognisable before an attack actually takes place.Footnote 138 This, in turn, puts more emphasis on codes/software than on human aggressors in immediate cyber defence. It is coding vulnerabilities, and the exploits used to target them, that lie at the core of such defence, even when enmity is more discursively prevalent.Footnote 139
Thirdly, the agential capacities of codes/software challenge attack attribution even further, making it a process driven by profound uncertainties. For instance, malware may take control of users' computers without their knowledge, creating a network of devices that work together to orchestrate an attack across geographical boundaries. The malware moves between devices across borders, scanning for the targeted vulnerability without consulting the attacker on the devices it affects. This makes it difficult to know whether a certain device is acting as a bot and to determine who is controlling it, particularly given the irrelevance of geographical proximity as an element of attribution.Footnote 140 It also means that any system can be hijacked by a third party to stage attacks.
Accordingly, attribution is not necessarily part of an immediate response to counter cyber attacks. Although publicly available reports on attack attribution by the private sector exceed those of governments,Footnote 141 it is governments and some think tanks that focus more on threat attribution.Footnote 142 More specifically, intelligence communities are generally more concerned with attributing cyber threats to a particular enemy than are private operators and defenders of information systems. This can be seen, for example, in the US government's emphasis on nation-states as threat sources, namely Russia, China, Iran, and North Korea.Footnote 143 At the level of practice, however, and in immediate responses to a hostile cyber operation, the logic of enmity is not so central. The emergent properties of codes/software condition enmity, particularly in the everyday cybersecurity that does not necessarily get publicised.Footnote 144 Thus the enemy is not just a human attacker or a particular actor; the enemy also becomes the vulnerability and the malware: codes/software. This can be seen in the technologies and threat mitigation policies developed for cyber defence, which focus primarily on the tools adversaries use in hostile cyber operations rather than on determining who the adversary actually is. As a representative of a network security company argues: ‘intelligence and law enforcement entities often prioritize attack attribution, while almost no emphasis is placed on attribution by those defending systems.’Footnote 145
Acknowledging that the immediate adversary in cybersecurity is the vulnerability and the malware, that is, codes/software as informational agents, calls for prioritising the production of secure-by-design code over instrumentalising code to hack into other countries' systems as part of defensive strategies. In addition to the active cyber defence strategies mentioned above, states are increasingly involved in black markets for vulnerabilities and zero-day exploits to build their cyber arsenals. Such practices inflate the market price of vulnerabilities and exploits that may end up in the wrong hands, undermining the long-term security of individual users and their overall trust in technology.Footnote 146 Software manufacturers, for their part, rush production processes to get their products to market fast enough to compete for profits, intending to ‘fix vulnerabilities later’. They also tend to prioritise functionality over security in software production, leading to a general culture of acceptance of software insecurity.Footnote 147 Although perfect cybersecurity does not exist and there will always be another bug in every software, shifting the conceptualisation of the adversary in cybersecurity from the human to vulnerabilities and malware (that is, codes/software) challenges such practices by state and private actors and underlines the dangers of weaponising code in cyber operations that implicitly assume a high level of human controllability.
Conclusion
The article has analysed the agential capacities of codes/software as a form of syntactic information, and the implications such agency has for the construction of cybersecurity. Instead of instrumentalising information technologies or analysing them as a mere capability that influences power relations among actors in international politics, the article focused on the agency of information in and of itself. It interrogated the ontology of codes/software as informational agents, their intrinsic agential properties, and how they influence the agency of other human and non-human agents in cybersecurity. Such agency, the article argued, cannot be sufficiently studied by focusing on human discourses and linguistic utterances alone, as much of the theoretical cybersecurity literature does.
To capture the agency of codes/software as informational agents capable of co-constructing the logics and politics of cybersecurity, the article put the emphasis on codes/software per se, albeit without suggesting that their agency supersedes or replaces that of humans. It explored how the self-organising, dynamic, and complex nature of codes/software, as well as their emergent behaviour, often leads to complex, dynamic, and emergent security. Conceptualising cybersecurity as emergent security illuminates the limitations of human control and intentionality in managing cybersecurity environments, and the ways in which codes/software challenge traditional assumptions of enmity and co-produce the subjects/objects of cybersecurity. As such, the shift towards a logic of emergence is an attempt to distance cybersecurity from the anthropocentric confines of traditional security theories and to present a non-binary framework in which the agency of human and non-human actors alike can be studied.
Through this non-human reading, the article contributes to the study of the materialities of cybersecurity beyond contexts and non-state actors, using an interdisciplinary approach that builds upon the philosophy of information and software studies, fields directly linked to the evolution of the technologies that cybersecurity aims to protect. This is done to challenge perceptions of human control in constructing the security of information systems that have evolved along paths humans could not fully envision, that operate in ways they cannot fully predict, and that produce threats they are unable to completely manage. Further, by emphasising codes/software as peculiar non-human entities that differ from the matter new materialism theorised, the article demonstrated the need to investigate the specificity of different types of ‘things’ and the different agential capacities they possess, instead of treating them as one homogeneous category.
Acknowledgements
This article is indebted to conversations with and guidance from Stefan Elbe and Stefanie Ortmann and their generous feedback on my PhD thesis, which this article is part of. I extend my thanks to Tim Stevens and his feedback on some of the arguments made in this article as initially written in my thesis. I am also grateful to the three anonymous reviewers and the editors of RIS for their very helpful and constructive comments. This article is based on PhD research funded by the University of Sussex's Chancellor's International Research Scholarship.