Introduction
In recent years, highly publicised cyber-incidents with names like Stuxnet, Flame, or Duqu have solidified the impression among political actors that cyber-incidents are becoming more frequent, more organised and sophisticated, more costly, and altogether more dangerous, which has turned cyber-security into one of the top priorities on security political agendas worldwide.Footnote 1 There is a strong degree of consistency in what drives this threat perception: The vulnerabilities of a ‘sprawling, open country knitted together by transportation, power and communication systems designed for efficiency not security’Footnote 2 and the disembodied adversaries that are seen as likely to take advantage of these vulnerabilities through the anonymity provided by information networks.
This view has two consequences. First, it often leads to cyber-security being presented as operating on one spatial plane, that of a network. The term network conjures up ideas of structured interconnections with regard to the texture of cyberspace.Footnote 3 From this follows a specific understanding of what is at stake in cyber-security: On the one hand, the protection of the mobility of data, and on the other, the stability of the networks of relations that compose and sustain cyberspace. Ultimately, the network is controllable – and such control is coveted. Second, the network is seen merely as a medium through which so-called ‘malware’ (malicious software), the cause of cyber-incidents, transits.
Some scholars argue that there is not one, unique cyberspace, but many spaces. For instance, Nick Bingham dismisses the illusion that ‘cyber-space as a singular exists at all’.Footnote 4 Stephen Graham suggests that ‘cyberspace … needs to be considered as fragmented, divided and contested multiplicity of heterogeneous infrastructures of actor-networks’.Footnote 5 And yet, all these scholars still tend to assume that the different spaces thus enacted remain ‘networks’. The main task of cyber-security experts, in this view, is to account for how and under what conditions ‘cyber-threats’ in the form of malware sail through different networks, and to develop strategies to counter them effectively.
In this article, we argue that this view is too simplistic. We show that cyber-incidents have multifaceted spatial effects, which condition both different understandings of cyber-security and the kind of operations it commands or accommodates. To develop our argument about how cyber-incidents and international politics are related, we draw on Actor-Network Theory (ANT) and its analytical toolbox. ANT is a heterogeneous conglomerate of ideas, with origins in Science and Technology Studies. It follows a relationalist tradition and focuses on ‘dynamic relations between scientific and political sites’;Footnote 6 it rejects the dualism between the social (human) and the material (nonhuman) in the study of the social; and it has close ties to the post-structuralism of Foucault and Deleuze, but tends to be more empirical.Footnote 7 ANT is currently gaining prominence in International Relations and security due to the new types of issues and research questions brought on by the ‘material turn’Footnote 8 – which signifies an interest in the importance of artefacts, natural forces, and material regimes in social practices and systems of powerFootnote 9 – as well as the ‘practice turn’Footnote 10 – which takes organised forms of doing and saying (‘practices’) as the smallest unit of analysis rather than actors or structures.
We base our analysis on work by ANT-scholars John Law, Annemarie Mol, and Vicki Singleton in particular. These researchers suggest that objects come in various configurations, each associated with spatial processes that prompt or support a recombination of relationships.Footnote 11 Specifically, objects can emerge as ‘volumes’ thriving in a regional Euclidian space, as ‘networks of relations’ configuring a network space, and as ‘flows’ that continuously adapt their shape in order to generate a fluid space. The article suggests that the study of cyber-security should emphasise the objects that circulate within different (cyber)spaces, thereby co-creating them. In cyber-security, these objects are malware. The different spaces created by malware have implications for the way we conceptualise cyber-security, the processes that bring actors together, and the type of interventions that are made possible.
The article combines conceptual groundwork with an examination of cyber-incidents or malware. It has five sections. Section I contextualises the discussion of cyber-security. We situate cyber-security’s concerns within the wider ambit of security studies in order to highlight both that its material background is under-theorised and that the effects of this on our understanding of cyber-security are not sufficiently considered. Section II conceptualises cyber-incidents through ANT-concepts. By unpacking the concept of malware, we claim that ANT assumptions on spatiality enable us to characterise cyber-incidents as active agents of change. Section III then examines these questions with respect to the nature and functions of different spaces generally and the character of mediators specifically. Section IV focuses on the enactment of cyber-security through a discussion of malware’s performances. In Section V we show the analytical purchase of this typology with the help of a brief case study. We illustrate how a single piece of malware (Stuxnet) worked through these different spatialities and how this resonates at the level of politics. In the conclusion, we reflect on the consequences of such a reconceptualisation for cyber-security and for security studies more generally.
Situating cyber-security research in security studies
The majority of books, articles, and reports on cyber-security to date remain policy-oriented and problem-solving. The two main questions tackled are ‘who (or what) is the biggest danger for an increasingly networked nation-society-military-business environment’ and ‘how to best counter the threat’.Footnote 12 Theoretically guided or empirically oriented academic research is still relatively rare.Footnote 13 In particular, despite the significance of cyber-incidents in the larger policy discourse, we have yet to understand their effects on politics, especially their role in shaping threat perceptions and ultimately policy responses.
Still, it is possible to distinguish between two bodies of literature of potential relevance: The first is produced by the ‘Munk School’, which has consistently focused on issues like (electronic) surveillance and censorship and is thus mainly concerned with the creation of more insecurity by (state) actors through cyber-means.Footnote 14 It has, however, not theorised the link between cyber-incidents and politics (even though it examines such incidents empirically). The second, situated in the larger vicinity of critical security studies, is a body of literature by scholars who have used frameworks derived from (or inspired by) Securitisation Theory to examine how different political actors have sought to establish the link between the cyber-dimension and national security.Footnote 15 In a similar vein, some recent articles have focused on metaphors in the cyber-security discourse to explain political responses.Footnote 16 These texts support observations made elsewhere that the process of securitisation in a given socio-political community is not restricted to one setting and one type of audience only, but often involves several, overlapping and multiple ones,Footnote 17 or that there are different political functions of and strategies behind security utterances.Footnote 18
Overall, critical security studies’ engagements with cyber-security remain analytically thin: First, cyber-security is a type of security that unfolds in and through cyberspace, so that the making and practice of cyber-security is at all times constrained and enabled by this environment. This factor is ignored by most of the literature. Second, scholars often focus solely on speech acts by political elites and therefore do not see how these discursive practices are facilitated or thwarted by preceding and preparatory practices of actors that are not as easily visible, including actors outside of government.Footnote 19 Third, due to the emphasis on official statements by ‘the heads of states, governments, senior civil servants, high ranked military, heads of international institutions’,Footnote 20 existing scholarship captures only a limited, high-urgency expression of cyber-security (usually cyber-war).
Cyber-security is both less and more, however. It is less because it is not only, and not very often, about situations of greatest urgency. It is a multifaceted set of technologies, processes, and everyday practices. And it is more, because multiple actors use different threat representations employing differing political, private, societal, and corporate notions of security to mobilise (or demobilise) different audiences. Cyber-security is co-produced by every private computer user, by computer security specialists in the server rooms of this world, by programmers, by Chief Information Officers (CIOs) or Chief Executive Officers (CEOs) deciding on cyber-security investments, by security consultants, by cyber-forensics experts, by regulatory bodies and standardisation organisations, and only lastly by politicians and other government officials who interpret digital events and (re)act to them in the form of verbalised expectations and fears or, ultimately, policies.
In the entirety of the literature, cyber-incidents are recognised as important – but they are not conceptualised as active drivers of cyber-politics.Footnote 21 Traditional security studies see cyber-threats as objectively given: matters of cyber-war, for example, can be identified as such, given specific ways of defining it.Footnote 22 Why cyber-incidents are linked to different threat categories, from which specific responses are deduced, has not been of interest. Constructivist or critical research in turn has looked at how security-meanings are constructed through the connection of the cyber-prefix to well-known threat categories, and how concepts such as ‘cyber-war’, ‘cyber-terror’, or ‘cyber-crime’ generate political effects. However, that literature fails to address how specific interpretations of cyber-incidents happen first and foremost around the material ‘realities’ of computer disruptions in technical communities and how these interpretations then serve as a basis for political action.
This article aims to close part of this gap by focusing on how cyber-incidents shape, maybe even transform cyber-security politics by stabilising or challenging different kinds of (political) imaginations and interventions. We claim that in order to understand the role (and agency) of cyber-incidents, we must understand what they do – how they perform – in their environment, before they are interpreted by actors in political processes. Therefore, we conceptualise cyber-incidents as deliberate disruptions of routine and everyday cyber-security practices, designed to protect networks, computers, programs and data from attack, damage, or unauthorised access. This definition follows the standard understanding of cyber-security in the technical realmFootnote 23 and highlights the importance of taking the technical-material (referent) object in this danger discourse (computers and computer networks) and the actors directly involved in securing them seriously.
Conceptualising cyber-incidents through ANT
IR and security literature is not yet overly familiar with ANT’s vocabulary, though there are recent attempts to evaluate the most effective ways in which ANT could extend IR’s analytical depth.Footnote 24 Clearly, the growing importance of materiality, process philosophy, and a focus on practicesFootnote 25 puts ANT in an interesting position within IR, as it combines those concerns with a resistance to the anthropocentrism that used to characterise theories of IR.
The goal of this section is to offer a brief overview of some of ANT’s concepts that help us understand under-researched aspects of cyber-security. In many ways, ANT is difficult to define and delineate precisely, for two main reasons: The first is that ANT comes under different names and in different shades: ‘sociology of translation’, ‘actant-network’, or ‘actant rhizome ontology’ are some of the labels commonly used to designate ANT. The second reason is that ANT is the product of a sometimes-bewildering range of theoretical lineages. Therefore, capturing ANT’s twists and turns in any depth is impossible (and probably unnecessary) within the context of this article. We therefore only emphasise those concepts and principles that seem most promising for an analysis of cyber-security – without any claim to comprehensiveness. In order to identify them, we first seek to conceptualise and ultimately define cyber-incidents. We then introduce the main concepts from ANT-research that help us to further understand the most important elements of such a conceptualisation.
The features of cyber-incidents
We conceptualise cyber-incidents as deliberate disruptions of normalised cyber-security practices by malware, leading to different effects on (political) imaginations and interventions.Footnote 26 In what follows, the different elements are unpacked: deliberate disruptions, normalised cyber-security practices, malware, and effects.
In cyberspace, operations are conducted with the help of software. In general terms, software consists of ‘code’ – a distinct number of lines of computer language, a ‘program’. Programs are written to carry out physical computations – for this to work, a physical implementation of the language is required. Thus, every piece of software must spell out how the constructs of the language (abstract mathematical notions in the form of syntax/semantics) are to be physically instantiated (implementation). As a consequence, programs always have an effect, and code cannot just exist as an amalgamation of symbols; it always (also) exists in its execution or in its becoming – in other words, in its performance.Footnote 27
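To make this point concrete, consider a minimal, hypothetical illustration (the snippet and the variable name are ours, not drawn from any actual software): the same string of symbols is inert when merely stored as data, and only produces an effect once it is executed, that is, performed.

```python
# A minimal, hypothetical illustration: the same symbols are inert as stored text,
# but produce an effect once they are executed (performed).
source = "counter = counter + 1"   # code as an amalgamation of symbols: no effect yet

counter = 0
print(counter)   # 0 - the code has only been stored, nothing has happened

exec(source)     # physical instantiation: the stored symbols are now executed
print(counter)   # 1 - only in its performance does the code change the state of the machine
```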
In fact, it is only through its performance that software becomes perceptible and politically relevant, because without it, ‘code is imperceptible in the phenomenological sense of evading the human sensorium’.Footnote 28 Furthermore, this performance is not purely technical. Software also has a distinct relationality to ‘the outsides in which it is embedded’,Footnote 29 be it (for example) to interfaces of mobile technology, games, or browsers, but also to highly abstract concepts such as ‘the economy’, ‘national security’, etc., because its purpose is to do something. Moreover, code directly effects, and literally sets in motion, processes in the technical, but also the social and political, realm – and thereby can have ‘immediate and political consequences on the actual and virtual spaces’.Footnote 30
Importantly, the ‘goodness’ or ‘badness’ of software cannot be determined before said performance and its interpretation, because it always incorporates a range of possible becomings in its code.Footnote 31 However, cyber-incidents do not happen in a meaning-free space – many stabilised social practices and meanings exist that shape and restrict the way they can be interpreted. Because computer operations that happen without the knowledge or consent of the owner of the computer have been socially constructed as illegal and negative over the years,Footnote 32 software used for these operations is called malware, a portmanteau word for malicious software, which presupposes malicious intent. In general, malware is deliberately designed to interfere with computer operations, that is, to record, corrupt, or delete data; or to simply spread itself to other computers and throughout cyberspace.Footnote 33 Importantly, however, malware could also be seen as a means to a ‘good’ end, at least in the eyes of the person using it to achieve certain goals:Footnote 34 for example, stealing data that is kept secret by the government because it contains incriminating information, and then demanding justice. Because the intent of the person causing the cyber-incident is almost impossible to know with enough certainty when the incident occurs (and in many cases is never revealed), intent plays a marginal role in the classification of cyber-incidents or is simply inferred, by using the cui bono logic (to whose benefit).
Almost all immediate responses to malware are reactive: the malware’s performance always comes first. The reason for this is that malware ‘exploits’ open (often unknown) vulnerabilities in the information infrastructure. Due to a variety of (historical) technological, economic, and social reasons, exploitable vulnerabilities abound in cyberspace.Footnote 35 Those actors interested in programming and using malware will pick those vulnerabilities that serve their specific purpose or intent best, and will write a program to that end – whereas software companies interested in selling commercial products have almost no incentive (and sometimes not even the ability) to patch them in advance.Footnote 36
The moment in which malware becomes perceptible, sometimes even literally visible through performance, is when the previously routine processes and practices of cyber-security become destabilised or even break down. We understand cyber-security as a multifaceted set of practices designed to protect networks, computers, programs, and data from attack, damage, or unauthorised access – in short, as standardised practices by many different actors to make cyberspace (more) secure. As we will show in more detail later, the reactions of different actors to malware and the breakdown of routine that follows are multifaceted, just as malware is multifaceted – they depend on the actual or perceived impact of the computation and the interpretations that follow, which in turn are based on different practices of cyber-security.
Introducing ANT concepts
There are (at least) three points from the ANT literature that highlight important aspects of the link between cyber-incidents and their effects: the agency of nonhuman entities and their role in upholding or changing practices (the concept of actants); what happens in moments of breakdown of normalised processes (the concept of depunctualisation); and the relationship between objects and space (the performance of spaces). The last is the most important aspect of our theory.
Actants: ANT rejects the dualism that tends to separate the social (human) from the material (nonhuman): human and nonhuman entities can equally initiate action. Under the principle of ‘generalised ontological symmetry’, different kinds of entities (humans and nonhumans) are involved in relational productive activities.Footnote 37 On the one hand, this means that humans and nonhumans share the same capacity for agency. On the other hand, it means that both human and nonhuman meanings are ‘effectively generated by a network of heterogeneous, interacting materials’.Footnote 38
In ANT-terminology, entities that can change actions or practices are called ‘actants’. ANT distinguishes different types of actants, differentiated by how they constitute specific networks by virtue of their relations with other actants.Footnote 39 The most interesting actants for ANT-scholars are mediators, because they ‘render the movement of the social visible to the reader’Footnote 40 and they always affect whatever flows through them. Understanding malware as a mediator focuses our attention on how it circulates and affects circulation in cyberspace. It also allows us to give malware transformative ‘agency’ of its own, detached from the ‘intent’ of the person who wrote the code.
De/punctualisation: Using ANT concepts, we can argue that social order in cyberspace is produced by the heterogeneous relations within and through relevant networks. The ultimate aim of cyber-security as a practice is to stabilise these networks, which are there to execute a specific performance, namely the uninterrupted provision of specific data flows for the efficient functioning of the economy, society, the state, etc. The success of any actor-network is usually related to the degree to which it does not appear to be a network that demands effort to keep it together, but rather a coherent, independent entity (punctualisation).Footnote 41 This desired state, however, is repeatedly challenged by cyber-incidents. In ANT-terms, these moments of disruption are called depunctualisationFootnote 42 because they make network performances ‘break down’, at which point the different parts of a network become visible to the observer.Footnote 43
These moments of disruption brought on by cyber-incidents as mediators are not only of key importance for the study of cyber-security, but they also invite engagement with methodological issues associated with using ANT in new, unfamiliar settings. In particular, the focus on (lab) practices that foundational ANT research was concerned with comes with the demand to ‘follow the actors’ and observe, ‘as far as possible, what they do as much as what they say’.Footnote 44 Hence, variations of ethnography have often been the method of choice for ANT.Footnote 45 However, ethnographic methods are bound to encounter major obstacles when it comes to security topics – for instance, what to do when there is no access to actors or practices, given the level of secrecy that is typical of national security issues? ANT’s particular emphasis on nonhuman objects, coupled with moments of depunctualisation, opens up opportunities for the study of things not easily observable otherwise.Footnote 46
Spatial Performance: For Mol, Law, and Singleton, ‘spaces are made with objects’,Footnote 47 or in other words, actants perform space(s). The task of ANT is to account for the circulation of objects and the structures of relations they activate. In other words, all social relations are complex assemblages of socio-technical entities, and any phenomenon derives its form and content from the web of relations in which it partakes. In this light, entities are assembled and sustained through practical activities, which form and happen in networks.
By situating the concept of the practically formed network at the centre of its investigation, ANT parts ways with classical geography in three significant respects. First, it challenges imaginations based on the classical Euclidian metrics of position, proximity, and distance as explanatory foundations: places are the effects of distinctive relations.Footnote 48 Second, whereas classical geography tends to conceive spaces as pure containers for objects, ANT reconciles spaces and objects, arguing that spaces are performed by and through objects. Therefore, the separation between objects and spaces is artificial. Likewise, ANT does not treat the difference between ‘digital’ and ‘physical’ spaces as a matter of essence, but as the expression of their specific framing properties. Third, Law and others suggest that objects themselves come in various configurations, each associated with spatial processes that prompt or support a recombination of relationships. Specifically, objects can emerge as ‘volumes’ thriving in a regional Euclidian space, as ‘networks of relations’ configuring a network space, and as ‘flows’ that continuously adapt their shape in order to generate a fluid space.Footnote 49
The emergence of objects relative to the enactment of space is crucial to understanding topology’s analytical importance for cyber-security. If actants perform ‘several kinds of spaces in which different “operations” take place’,Footnote 50 then cyber-incidents should be read and ultimately tackled in the spaces they build themselves, not in the one that is supposed to predate their enactment. Furthermore, if different objects can enact various spaces, then we can assume that no such thing as ‘the’ cyberspace exists. At the same time, cyberspace is not exclusively derivative of the sum of threats involved; nor is it a simple function of the actants that are put into relations. Rather, an approximate (but imperfect) image is to say that cyberspace exists by virtue of what circulates within its meshes. Malware therefore co-constitutes notions of cyberspace and cyber-security.
With these ANT-concepts in hand, we can now reformulate our initial definition of cyber-incidents (deliberate disruptions of normalised cyber-security practices by malware, leading to different effects on political imaginations and interventions) like this: Cyber-incidents are depunctualisations of cyber-security networks by mediators in the form of malware, with effects in regional, networked, and fluid spaces. What these three topologies look like and how they interact is specified in the next section.
Topologies and security
Regions, networks, and fluids come with distinct ways of envisioning order and disorder, based on the relationships between different objects that form the respective space. Therefore, they also come with differing notions of what a threat is and how to secure against it. At the same time, the three spatialities are interconnected in fundamental ways: While any space attempts to situate itself as the ‘other’ of alternative spaces, it is in fact profoundly linked to the existence of these spaces.
Regional topography has shaped the IR imagination for a long time, in particular through the invention and institutionalisation of borders and sovereignty.Footnote 51 Regions are the most familiar and straightforward spaces that IR and security scholars encounter. Regions connect and unite what is close and draw boundaries around elements that belong together. In a regional space, divisions between inside and outside are strict, places are exclusive, and overlaps between locations are not tolerated. Regions cluster objects together. Their primary aims, if not their results, are to suppress or minimise the differences among objects that reside inside and, correlatively, to play out the differences with what lies elsewhere. Those differences are meant to be solid.
At first sight, networks undermine most of regions’ basic assumptions, in part because networks establish relationships between elements that, in regions’ terms, are distant on the map. Put more generally, the localisation of objects does not determine their proximity and, as such, boundaries are not decisive in drawing out objects’ identity. ‘Networked threats’ are a recurrent topic in the security discourse, wherein networked forms of organisation are seen as a ‘direct challenge to hierarchical forms of organization’,Footnote 52 since they tend to ‘represent a threat to the spatialized forms of intelligibility and control’.Footnote 53 In ANT-terms, a network space is generated by a network-object. Because this claim can easily be misinterpreted, it is worth repeating that for ANT, many objects, from texts to vessels to software, are ‘networks’. In order to preserve their integrity, they depend on a stable structure of relations between their internal components and the external configuration of interactions they fold in. To move as a vessel from Amsterdam to Lagos (Euclidian space), the ‘relative syntactical positions of the vessel’ (network-space) have to be held together, otherwise the network collapses.Footnote 54
On closer inspection, then, networks replicate regions’ concerns with the ability of the object to preserve its integrity when it displaces itself from one location to another;Footnote 55 indeed, there is a relational isomorphism between regions and networks.Footnote 56 Conceived in this way, networks sustain what David Harvey calls ‘cogredient’, that is, ‘the way in which multiple processes flow together to constitute a single constant, coherent, though multi-faceted time-space system’.Footnote 57 The security of a network is very much about ‘keeping everything in its place’.Footnote 58 In this context, objects are perceived to be threatening if they disrupt either the ‘cogredient’ or the functional integrity of the network. If that happens, the network-object loses its coherence, and the syntactical relations that held it rigid are henceforth subject to constant changes; in fact, everything becomes variable. This is the realm of fluid spaces and fluid objects.
There are no clear boundaries in fluid space, and the objects that are generated by it and generate it are not well defined and not clearly visible. It is also a ‘world of mixtures’,Footnote 59 in which previously separated categories, like cause and effect, or good and bad, are intermingled. Unlike networks, objects within a fluid space do not depend on one another. Networks tend to crumble if any of their constitutive parts is detached from the relational architecture that sustains them. One important aspect of this is what a network does when it encounters such a challenge: how it tries to maintain the identity of its elements, or how other networks, with different characteristics, take over.Footnote 60 By contrast, fluids are more resilient to the changing character of their objects, since order is not at all important in this space.
There are important differences between networks and regions to be sure, but security in both spaces is defined as stability and immutable continuity. In fact, networks and regions preserve their continuity by identifying crucial centres or points of vulnerability that must be defended against intrusions. Security is fundamentally about protecting the ‘obligatory points of passage’.Footnote 61 Fluids on the other hand have a very singular complexion, which has implications for the way security is understood therein. In fluid spaces, as Mol and Law put it:
there is no single standpoint to be defended in order to preserve continuity … For since continuity has nothing to do with the integrity of territory in a fluid space, there are no fixed frontiers to be patrolled. Neither is there need for police action to safeguard the stability of elements and their linkages – for there is no network structure to be protected.Footnote 62
As they infiltrate other spaces, fluids absorb networks and regions, though usually in part and rarely in total. Sometimes, networks and regions melt into fluid spaces. Most relevant for security studies:
networks tend to panic when they fail to secure network homeomorphism – at which point what I am claiming to be the … necessary fluidity of objects to networks becomes both visible and Other, represented as a failure and therefore a threat.Footnote 63
Spaces and the occurrence of cyber-disruption
In this section, we show how cyber-incidents and cyber-security practices are interlinked – more specifically, how malware manifests itself in and actively performs three different topologies: regions, networks, and fluid space. The actors involved in these practices are mainly private sector actors with a computer science background. The separation into these different spaces is somewhat artificial, because the respective performances are closely interlinked and often happen almost simultaneously, but it serves an analytical purpose.
Regions – the manifestation of malware in physical space
The performance of regions in cyber-security is linked mainly to the manifestation of malware in physical space, or rather, inside computers or other hardware. Information infrastructures – computers, servers, mobile phones, tablets, etc. – are situated in clear and identifiable geographic locations, inside bordered sovereign territory. Indeed, even though parts of cyberspace might be hard to ‘grasp’ because we see them as ‘virtual’, cyberspace is still fundamentally grounded in physical reality, in ‘the framework of a “real” geography’.Footnote 64 Physical network infrastructures, which ensure the flow of data from one physical node to another, such as fibre optic cables, are inscribed in Euclidian space. Malware travels through these cables before it becomes visible, first through its (technical) effect on a machine with a specific vulnerability that it can exploit, and second through various cyber-security techniques aimed at preventing, detecting, and removing malware.
If a new type of malware depunctualises standard practices, computer specialists working for anti-virus companies identify the program code and then update their software, which runs on millions of machines worldwide, with that information. If the person responsible for the security of the machine updates their version of the anti-virus software as regularly as is expected of them, the local computer is now equipped to detect the malware (either while in transit or already ‘on’ the computer), based on the now known patterns of data within its executable code. The malware, which has been assigned a specific signature in this process, can now be isolated and removed – it has been given a new type of visibility, which allows it to be traced, identified, deleted, counted, classified, etc.Footnote 65 Some malware that has not yet been given a signature (or constantly changes its signature in an attempt to dodge discovery) but is similar to already known malware can be identified with heuristic approaches, which do not look for a specific pattern of data, but ‘bad behaviour’ of software.Footnote 66 This type of defensive software knows from previous digital threats how malicious software acts and will intercept code according to this knowledge.Footnote 67
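The two detection logics described above can be rendered as a minimal sketch. The signature, the behaviour scores, and the threshold below are invented for illustration and do not reflect the internals of any real anti-virus product:

```python
from typing import Optional

# A simplified, hypothetical sketch of the two detection logics: real anti-virus
# engines are far more sophisticated; this only illustrates the principle.
KNOWN_SIGNATURES = {                       # patterns assigned to already-analysed malware
    "e99a18c428cb38d5f260853678922e03": "Worm.Example.A",
}
SUSPICIOUS_BEHAVIOURS = {                  # heuristic rules score 'bad behaviour' instead of patterns
    "modifies_boot_sector": 5,
    "disables_antivirus": 4,
    "opens_remote_connection": 2,
}
HEURISTIC_THRESHOLD = 5                    # invented cut-off for flagging a file


def signature_scan(file_hash: str) -> Optional[str]:
    """Return the malware name if the file matches a known signature."""
    return KNOWN_SIGNATURES.get(file_hash)


def heuristic_scan(observed_behaviours: list) -> bool:
    """Flag a file whose combined behaviour score exceeds the threshold."""
    score = sum(SUSPICIOUS_BEHAVIOURS.get(b, 0) for b in observed_behaviours)
    return score >= HEURISTIC_THRESHOLD


# A file with a known signature is identified directly:
print(signature_scan("e99a18c428cb38d5f260853678922e03"))                  # Worm.Example.A
# A still-unnamed file is intercepted because of how it behaves:
print(heuristic_scan(["disables_antivirus", "opens_remote_connection"]))   # True
```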
Cyber-security reports, most prominently produced by the anti-virus industry or specialised consultants, visualise malware based on the collection of data about infected machines, which is then aggregated in terms of infection rates per country.Footnote 68 The practice of aggregating malware infection this way performs a version of the social in which space is exclusive: there are neat divisions with no overlap, based on a comfortable geography of well-known political entities. This allows the ‘good’ to be separated from the ‘bad’ and identifies the areas most in need of intervention. Whereas every locale in which there are computers/computer networks is a (potential) space for cyber-security, the focus on infection rates per country easily translates into regions of in-security: For example, one company lists Taiwan, China, and South Korea as the countries with the highest percentage of malware-infected computers in the world. The second area of cyber-in-security is South America: Argentina, Peru, Brazil, Chile, Colombia, and Venezuela all have ‘above average’ infection rates.Footnote 69 On the other hand, this type of visualisation also helps to single out those countries with the lowest infection rates as ‘good cyber-citizens’, for example, according to Microsoft’s Security Intelligence Report, Austria, Finland, Germany, and Japan.Footnote 70
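This aggregation practice can likewise be sketched in a few lines; the detection records, country codes, and denominators below are invented and merely stand in for the much larger telemetry sets that vendors aggregate:

```python
from collections import Counter

# Hypothetical sketch: telemetry from individual machines is aggregated into
# country-level infection rates - the figures that perform 'regions' of (in)security.
detections = [                                 # invented records: (country, malware family)
    ("TW", "Worm.Example.A"), ("TW", "Trojan.Example.B"), ("KR", "Worm.Example.A"),
    ("AT", "Trojan.Example.B"), ("BR", "Worm.Example.A"), ("BR", "Worm.Example.A"),
]
machines_scanned = {"TW": 200, "KR": 150, "AT": 300, "BR": 250}   # invented denominators

infected = Counter(country for country, _ in detections)

# Detections per 1,000 scanned machines, ranked from 'worst' to 'best':
rates = {c: 1000 * infected[c] / n for c, n in machines_scanned.items()}
for country, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{country}: {rate:.1f} detections per 1,000 machines")
```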
Beyond regions based on malware infection, ‘bad’ cyber-behaviour, linked to hot spots of malware production, is also singled out on the basis of reports by private computer security companies and more in-depth research and analysis by the law enforcement community.Footnote 71 Key regions of insecurity were Romania as a cyber-crime havenFootnote 72 and China as the origin of cyber-espionage and cyber-war activities,Footnote 73 at least before the Snowden revelations in June 2013. Such regions of insecurity then become a specific focus of the international community and the target of various types of political intervention.
Networks – stabilising cyber-security practices
Network spaces are performed by similar cyber-security practices – this creates places that are ‘close’ to one another, with similar sets of elements and relations.Footnote 74 Three different, overlapping network spaces emerge: first, a network of infected computers; second, a network of malware removal; and third, networks of behavioural cyber-security norms.
The first is a network of (infected) bodies, computers, all with the same vulnerability and/or type of infection. For example, the computer worm Code Red, released in 2001, exploited an operating system vulnerability found in machines running Windows 2000 and Windows NT. Each and every such machine was thus part of the particular network space Code Red performed. Once machines were infected, the worm made them execute distributed denial of service (DDoS) attacks against certain websites, including the website of the White House:Footnote 75 all of these machines were places of sameness at that moment in time. In the current cyber-security debate, one such network of sameness is the ‘botnet’, a robot network of a large number of malware-infected machines that perform specific (remote-controlled) tasks together, like spamming or DDoS attacks, (most often) without the knowledge or consent of the owner of the machine.Footnote 76
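Abstractly, the network space a worm performs is simply the set of machines that share the configuration it exploits – a ‘network of sameness’. A minimal sketch with an invented inventory illustrates this:

```python
# Hypothetical sketch: the network space a worm performs is the set of machines
# that share the vulnerable configuration it exploits ('places of sameness').
machines = [
    {"host": "srv-01", "os": "Windows 2000", "patched": False},
    {"host": "srv-02", "os": "Windows NT",   "patched": False},
    {"host": "srv-03", "os": "Windows 2000", "patched": True},
    {"host": "ws-10",  "os": "Linux",        "patched": False},
]

VULNERABLE_OS = {"Windows 2000", "Windows NT"}   # the configurations the worm targets

# Every unpatched machine running a targeted OS belongs to the same network space:
worm_network = [m["host"] for m in machines
                if m["os"] in VULNERABLE_OS and not m["patched"]]
print(worm_network)   # ['srv-01', 'srv-02']
```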
The second network performed by malware is a network of malware removal: Anti-virus software, once updated, performs the same tasks on millions of machines worldwide. Importantly, these practices are also the basis for in-security measurement, which creates the data needed for the performance of regions as described earlier. For example, any computer with a Microsoft operating system or a specific piece of anti-virus software is turned into a similar place; such computers are close because they are all part of the same measurement network. Even though a variety of methodologies are used to measure in-security and to detect and fight malware, all these practices serve to identify and characterise malware and its effects – and the methodologies are very similar.
As an extension of this similar-type-practice network, all well-known and standardised types of information assurance practices, enacted on a constant basis by computer network personnel even without a depunctualisation effect, perform networks as well. Among them are, prominently, activities related to the Information Security standards released by the International Organization for Standardization (ISO),Footnote 77 or risk assessment methodologies that are used and implemented by computer security specialists, like those propagated by ENISA in Europe.Footnote 78 On top of that, the ‘makers’ of cyber-incidents also perform networks, by forming an organised cyber-crime market operating across the globe, with orderly strategic and operational vision, logistics, and deployment.Footnote 79 The same applies to hacker communities or hacker collectives that display similar practices or adhere to the same ‘hacker ethics’.
Importantly, in the cyber-security networked space, the creation of stability is one of the main goals; and the stability of network practices is a precondition of performing cyber-security regions. There are many attempts to turn security practices into ‘best practices’, which are then stabilised and standardised. Big players like the US or the European Union focus much of their attention on harmonisation: of how cyber-security aspects are defined, how they are talked about, how knowledge about them is shared, and, most importantly, how they are measured.Footnote 80 Beyond that, there are several attempts to stabilise the rules of engagement for conflicts in cyberspace.Footnote 81
Fluid space – malware as embodied uncertainty
As mentioned, the ontology of malware ‘is fixed only in and through the unfolding of their affective relations’Footnote 82 because it always incorporates several possible becomings in its code. This means that malware also enacts fluid space as soon as it is discovered, as a logical consequence of the uncertainty it brings when it starts to deliver effects. In the interval between depunctualisation and categorisation, otherwise stable cyber-security practices become (temporarily) fluid, thereby making previous order un-orderly, challenging previous knowledge, and often exposing previous solutions and assumptions as inadequate.
The prime action is to re-establish order through knowledge-creation. If malware is discovered, anti-virus specialists not only identify its ‘signature’ to update anti-virus software accordingly, they also identify the vulnerability that was used. This information is usually passed on to whoever is responsible for the product with that particular vulnerability (usually software producers), so that they can ‘patch’ (close) it, to prevent future misuse of the same vulnerability. At the same time, the specialists try to uncover the malware’s intended purpose and the damage it caused (worldwide) as fast as possible, for example by reverse-engineering it. However, even though malware is always programmed and released intentionally, the shape and type of its manifestations in different topological spaces is not always fully intended or controllable by the creator. The Internet’s first prominent malware (the Morris worm, released in 1988) is typical of the unintended consequences of self-replicating software in networks, both technically and politically.Footnote 83 Malware that is supposed to stay hidden (because its prime purpose is to ‘steal’ data for as long as possible) is usually detected due to faults in its code. Spill-over effects resulting in digital ‘mass-hysteria’ or strong (over)reactions in government circles are also a very common feature of cyber-security because of the fluidity viruses enact.Footnote 84
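The knowledge-creating steps described above – extracting a signature, identifying the exploited vulnerability, notifying the responsible vendor, reverse-engineering the purpose – can be summarised as a schematic workflow. The function names and placeholders below are ours; in practice these are elaborate and partly manual processes:

```python
# A schematic, hypothetical sketch of the re-ordering work that follows discovery.
# Function names and placeholders are ours; each step stands in for elaborate,
# partly manual practices.

def analyse_sample(sample: bytes) -> dict:
    """Derive the knowledge needed to re-establish order after depunctualisation."""
    return {
        "signature": hex(abs(hash(sample))),          # stand-in for a real detection signature
        "exploited_vulnerability": "CVE-XXXX-YYYY (placeholder)",
        "suspected_purpose": "unknown until reverse-engineered",
    }


def respond(report: dict) -> list:
    """List the typical reactive steps; in reality they run partly in parallel."""
    return [
        f"update anti-virus signatures with {report['signature']}",
        f"notify the vendor responsible for {report['exploited_vulnerability']}",
        "reverse-engineer the code to establish purpose and damage caused",
    ]


for step in respond(analyse_sample(b"captured malware sample")):
    print(step)
```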
More importantly, because malware is ultimately released intentionally by humans, the search for culprits is always part of this knowledge-creation. However, it is often impossible to know with certainty who is responsible for a cyber-attack – at least right after it is discovered – and what the exact intention behind an attack was. Only through careful computer forensics can parts of the puzzle be uncovered. This ‘attribution problem’ is an unavoidable consequence of the technological attributes of the space that malware moves through:Footnote 85 it provides a great deal of anonymity for the technologically apt. Politically speaking, exploits that seemingly benefit states might well be the work of third-party actors operating under a variety of motivations. At the same time, the challenge of clearly identifying perpetrators gives state actors convenient ‘plausible deniability and the ability to officially distance themselves from attacks’.Footnote 86 Malware, then, disintegrates knowledge about ‘the other’, making fluid the boundaries between the threatening and the threatened.
Moreover, the network of measurement, which is necessary to perform regions, routinely breaks down: Attempts to collect and aggregate data beyond individual networks fail due to insurmountable difficulties in establishing what to measure and how to measure it and what to do about incidents that are discovered very late, or not at all.Footnote 87 The implicit knowledge about what might be there, lurking undetected/undetectable within machines and networks has a fear-inducing effect – mostly because it is invisible up until the moment where it reveals its damaging potential, and because there are no defences against it, in other words, established network-practices are powerless against them. This is the moment in which networks start ‘to panic’ and in which their breakdown is not only a failure, but also an outright threat.Footnote 88
Enacting cyber-security: Buffering against Stuxnet
In the previous section, we have shown how cyber-incidents perform different spaces through related cyber-security practices. The same cyber-incident performs regions, networks, and fluid spaces quasi ‘simultaneously’, often fluctuating between them. Cyber-security as a socio-political practice is negotiated within and among the three spaces, each activating different types of operations. In this section, we want to show how these performances are linked to political imaginations and interventions with relevance for security. To illustrate the analytical purchase of our approach more concretely, we conduct a brief case study in this section, using Stuxnet as an example.
But why Stuxnet, and what is it? Stuxnet is a computer worm that was discovered around June 2010 and has subsequently been called ‘[O]ne of the great technical blockbusters in malware history’.Footnote 89 For a piece of malware, it is a complex program. Symantec calculated that the malware is about 500kb in size, fifty times as large as the ‘average’ malware,Footnote 90 but still small enough not to attract attention from performance analysts. It is likely that writing it took a substantial amount of time, advanced-level programming skills, and insider knowledge of industrial processes. Stuxnet was therefore the most expensive malware found up to that time. In addition, it behaved differently from malware released with criminal intent: it did not steal information and it did not herd infected computers into so-called botnets from which to launch further attacks. Rather, it looked for a very specific target: Siemens’ Supervisory Control And Data Acquisition (SCADA) systems that are used to control and monitor industrial processes.Footnote 91 Due to these characteristics, Stuxnet quickly became the harbinger of a new chapter in the cyber-security community: the era of highly sophisticated and targeted attacks. Because of its importance in the discourse, it is a prime example for showing how cyber-security practices provide the basis for political imaginations and interventions.
Stuxnet has been analysed and interpreted in technical publicationsFootnote 92 and it has generated a handful of publications on the new manifestation of cyber-war.Footnote 93 The information that is most relevant for this case study, however, is to be found in the technical reports of anti-virus companies and security researchers, in newspaper articles that focus on the timeline of preparation and discovery, and on technology blogs. Cyber-security practices related to cyber-incidents are mainly about knowledge-creation. Thus, it is our goal to reconstruct this process in relation to the three spaces as clearly as possible, despite the patchy information available. In order to show how the same cyber-incident performs regions, networks, and fluid space ‘simultaneously’, we indicate the relevant space in brackets in the text rather than splitting the analysis up into the three spaces.
The spatiality of Stuxnet
An account of a malware’s performance must start with the moment of depunctualisation – before it is visible, it does not set in motion any effects beyond the purely technical. In Stuxnet’s case, depunctualisation happened in several stages: In June 2010, the Belarusian security company VirusBlokAda discovered Stuxnet on a client’s machine in Iran, because the machine had been caught in a reboot loop (fluid space).Footnote 94 VirusBlokAda passed the information on to Microsoft, because they realised quickly that the malware used a Windows vulnerability to enter a computer on a network, from which it spread to the entire network via other vulnerabilities. Microsoft issued a security advisory about a month later (networked space). Because this is a routine practice, not many people paid attention. In July 2010, however, an Iranian engineer’s computer was accidentally infected – as was later revealed, due to a programming error in Stuxnet (the worm was not supposed to act that way, most likely because the creators had changed the code in order to get better results).Footnote 95 When this computer was subsequently connected to the Internet, the malware spread globally (fluid space). Several security firms now noticed the worm and wrote signatures for their anti-virus software (networked space).
In July 2010, the news about Stuxnet went public through an announcement by the well-known security blogger Brian Krebs,Footnote 96 generating intense interest among tech-oriented news media and, through them, creating considerable alarm in wider policy circles (fluid space). Frank Boldewin, another security researcher, noted in an online security forum that Stuxnet targets specific Siemens control systems.Footnote 97 Five days later, Symantec began monitoring the Command and Control (C&C) traffic of the malware,Footnote 98 in order to observe rates of infection and identify the locations of infected computers (networked space). As of September 2010, the data collected by Symantec showed that there were approximately 100,000 infected hosts worldwide. Data was shown by country, and it revealed that approximately 60 per cent of infected hosts were in Iran (regional space). The concentration of infections in Iran was taken as an indication that this was the initial target of the infections. In addition, 67 per cent of all infected machines in Iran had a particular Siemens software installed (networked space).Footnote 99
Soon after, Ralph Langner, a German security researcher, stated that Stuxnet was a precision weapon targeting a specific facility and that substantial insider knowledge was required for the creation of this worm.Footnote 100 The new Iranian plant at Bushehr was first named as the likely target (regional space). Bruce Schneier, another noted security expert, broadened the analytical horizon by adding alternative origination theories, such as a research experiment that went out of control, a criminal worm designed to demonstrate a capability, or a message of intimidation to an unknown recipient.Footnote 101
Later in the month, Iranian officials admitted that computers belonging to Bushehr personnel were infected by Stuxnet. In November that year, Iran also temporarily halted the uranium enrichment process at Natanz for unclear reasons – it was later revealed that the number of enrichment centrifuges at Natanz had been dropping constantly since around 2009. At a press conference, Iranian president Ahmadinejad said the malware did target nuclear sites and succeeded in harming a limited number of centrifuges.Footnote 102 A report from December 2010 revealed that the creators of Stuxnet knew the rotating frequency of the uranium enrichment centrifuges prior to the release of the worm. It seemed likely that Stuxnet physically damaged those particular centrifuges by speeding them up to a frequency that damaged the rotor.Footnote 103
As soon as several unusual aspects about the malware became public knowledge, attempts to ‘attribute’ the malware began to dominate the discussion (fluid space). In September 2010, the word ‘Myrtus’ was found in Stuxnet’s code, which could be read as a reference to the Hebrew word for Esther.Footnote 104 This was immediately taken as proof that the worm was of Israeli origin (regional space). Others noted that the word could have been inserted as deliberate misinformation, to implicate Israel,Footnote 105 or interpreted the ‘Biblical reference’ in Stuxnet’s code as part of a conspiracy theory.Footnote 106 However, in November 2010, the security researcher Ralph Langner publicly claimed that the culprit was most likely Israel, the USA, Germany, or RussiaFootnote 107 – using the familiar ‘cui bono’ logic (to whose benefit) as a basis for this statement (networked space). Alternative interpretations existed at the time, but they did not manage to convince a larger audienceFootnote 108 – not long after, it became accepted knowledge that Stuxnet was launched by the US and Israel (regional space),Footnote 109 who had insider knowledge about the SCADA systems from Siemens.Footnote 110
Debates about this attribution continued among security experts, until a detailed report in the New York Times in June 2012 took an authoritative stance on the attribution question. In this article, David E. Sanger explained how Stuxnet was programmed and released as a collaborative effort between American and Israeli intelligence services (regional space). Sanger does not name his sources but suggests they are high-level American administration officials.Footnote 111 According to Sanger, the effort to develop cyber-sabotage capacities goes as far back as 2006, when President George Bush felt he needed additional options for dealing with Iran. Bush subsequently authorised the effort to send a ‘beacon program’ to Iranian control centres that would map out the network at Natanz (it is suspected that the ‘Duqu’ worm was used for this purpose).Footnote 112 Apparently, it took months until the beacon had collected enough evidence and transmitted the data back to the NSA. After making sense of the information, America started to develop the worm. According to Sanger, the collaboration with Israel grew out of two motives: First, Israel had been pressing the US to authorise a military strike, which the United States had refused. Stuxnet was seen as an acceptable sabotage substitute to calm the Israeli authorities down. Second, the expertise of Israeli intelligence was needed to develop the worm. Knowing that the Iranians used old and unreliable P-1 centrifuges acquired from Pakistani nuclear black-market chief A. Q. Khan, American developers secretly constructed replicas of the Natanz network in a laboratory in Tennessee and tested the worm until it worked well. In 2013, Edward Snowden confirmed in an interview published in Der Spiegel that the NSA and Israel co-wrote Stuxnet.Footnote 113
From technical knowledge to political effects
Tying one malware to broad political effects is neither advisable nor possible – clearly, there is no single cause-and-effect relationship to be found here. In addition, stable knowledge about the malware is sparse – and authorship of the malware has never been confirmed by any political actor. And yet, the creation and dissemination of the specific technical knowledge presented earlier (generated by standard cyber-security practices and through the logic of the three spaces) is the foundation for attempting to make attributions – and for taking political action. It is also the basis for a widely accepted version of ‘the truth’ to emerge, which has changed the cyber-security discourse fundamentally.
The analysis of different claims about Stuxnet shows that fluid space has several effects. First, it triggers specialised actors (computer security experts) to re-establish routine and normalcy by dissecting the discovered malware, updating anti-virus software, and notifying the responsible actors about the vulnerability. This stabilises the networked space. At the same time, fluidity continued around the aspect of ‘attribution’ – in the Stuxnet case for quite a long time. Here, radical uncertainty about the actual deeds and abilities of the enemy ‘Other’ emerges as a political threat. In the case of Stuxnet, regional space was created several times through aggregated data. While it was less of a problem to identify the target of the malware (though the exact effects remained unclear), establishing the origin with enough certainty was an elusive quest. And yet, a specific ‘truth claim’ did emerge in an attempt to stabilise this space by using regional space.Footnote 114
As a consequence of malware’s multifaceted performance, there are strong reactions geared towards fighting fluidity as the main threat on a more general level. There is a rise of cyber-geopolitics, which builds on the regional performances of cyber-incidents to think about control in terms of regions (and territory). Since disruptions to the stability of cyber-security (in the form of malware) materialise first and foremost as disruptions in machines that are situated on specific territories – and these disruptions are caused by objects with a geographical origin (though that origin is sometimes very hard to identify) – the imaginations of modern geography, with its Euclidian presuppositions and spatial anxieties, can easily be translated into the language of cyberspace. This way, malware is made geopolitically relevant and more easily linked to enemy ‘Others’. Malware, then, can be conceptualised as a weapon, aimed at a specific target.
Indeed, in military circles, cyberspace is depicted as a battlefield (or rather, battlespace) on which a covert war can be fought or, depending on the person’s belief, is already being fought.Footnote 115 Most references to a cyber(ed) battlefield are literal references to an actual (geographical) battlefield. Military terms like cyber-weapons, cyber-capabilities, cyber-offence, cyber-defence, and cyber-deterrence suggest that cyberspace can and should be handled as an operational domain of warfare like land, sea, air, and outer space, working under the same premises; and cyberspace has been officially recognised as a new domain in US military doctrine.Footnote 116
Ultimately, this type of politics is about the establishment of territoriality and borders in the virtual realm, about a nationally owned and nationally definable space, based on physical infrastructures. The keeper of the peace in bordered cyberspace is naturally the military, coupled with the intelligence services (more specifically, cyber command units). This counter-modern image invokes the enclosed, safe cocoon of a delimited and thus defendable and securable place, newly reordered by the state as the sole real guarantor of security. The hope is that the emergence of a Westphalian cyber-order will bring back the certainty of the Cold War, so that ‘deterrence, wars, conflict, international norms, and security will make sense again as practical and historical guides to state actions and deliberations’.Footnote 117 Fluid space, therefore, is seized upon to forcefully re-establish regional space, with its desired stability, certainty, and security. Thus, even though stability is conceptualised as the only acceptable state of social space and fluidity is seen as the embodiment of a severe threat that disrupts the homogeneity of networks, and thus of security, fluidity also fundamentally makes possible the re-establishment of regions and networks.
Conclusion
The main aim of this article was to propose a new theorisation of cyber-security through ANT concepts. One of the central insights of ANT is that material artefacts such as computers and software are active, not passive, entities. In this sense, cyber-security is both a process and an outcome of the topologies through and by which the threats to digital security are enacted. The argument was put forward in three steps: First, the article situated cyber-security research in the wider field of security studies. It showed that there is an insufficient grasp of how cyberspace is enacted in heterogeneous ways, and of the impacts this has on cyber-security, as practice and in practice. Second, the article argued that cyber-threats (in the form of malware or cyber-incidents) should be investigated and addressed within the space(s) they enact, which can take three overlapping forms: networks, regions, and fluids. Third, in order to account for the consequences of this multi-spatiality, we examined the ‘setting-into-work’ of the politics of malware. The article has demonstrated that malware challenges the consistency of networks and the sovereign boundaries set by regions, and at the same time re-enacts them. Further, even when malware is seized upon to perform regions and networks, the ultimate goal of such social practices is the creation of stability (and predictability) of interactions. These mediators can thus be used to create certainty about the good and the bad, and to bring back order to a realm where there seems to be constant change. At the same time, cyber-incidents constantly threaten this order through the fluidity that they embody and the radical uncertainty that they bring. They emerge as reciprocally disturbing and constituting, causing multiple political effects.
The impact of the theoretical arguments discussed in this article extends beyond the field of cyber-security. For instance, ANT recovers the role played by things in the production of a specific (social) order. In this perspective, humans and nonhumans are implicated in the generation of practical knowledge, and this can have important ramifications in fields as diverse as ‘nuclear’ weapons, ‘environmental’ security, or critical ‘infrastructures’. This article has argued that the contribution of humans and nonhumans to knowledge processes supports methodological symmetry, that is, methodological assumptions that apply to humans can be brought to bear on nonhumans. In part, this is what makes both humans and nonhumans actants, in the specific sense that they are ‘entities embedded in practice configurations whose interactions generate … knowledge’.Footnote 118 Taken seriously, this would mean that world politics is organised around three lines of force – material, social, and semiotic – which constitute ‘technologies of knowledge’. For cyber-security, a focus on technical materialities and a practice-oriented view on the performance of malware promise to provide much-needed explanations of how the politics of cyber-security works, with a specific focus on ‘attribution’. Nonetheless, additional empirical research is likely to shed further light on the role of cyber-incidents in shaping threat perceptions and, ultimately, policy responses. Moreover, it will enable researchers to focus more specifically on questions of power and knowledge in the field, insisting not only on already available knowledge, but also on knowledge-in-the-making. Finally, this change of emphasis could be relevant for security studies more generally. Specifically, an ANT-driven analysis, which brings human and nonhuman agencies into alliance, develops the importance of objects and materiality, and strengthens the focus on everyday practices, can help researchers understand the links between (security) incidents of any kind and politics in different ways. In particular, the multiplicity of spaces performed by objects always leads to heterogeneous, even contradictory, policy responses. An ANT approach therefore makes it possible to treat this multiplicity as the norm, rather than having to choose a focus on one particular type of political intervention. In other words, the focus on objects and on moments of depunctualisation of ‘the normal’ opens up possibilities for the study of security issues not easily accessible through textual enquiries.
Biographical information
Thierry Balzacq is the Scientific Director of the Institute for Strategic Research (IRSEM), the French Ministry of Defense’s research centre. He is also Tocqueville Professor of International Relations at the University of Namur in Belgium and Adjunct Professor at Sciences Po Paris. He holds a PhD from the University of Cambridge. He is a former Honorary Professorial Fellow at the University of Edinburgh, where he was also Fellow for ‘outstanding research’ at the Institute for Advanced Studies in the Humanities. In 2015, he was awarded a Tier 1 Canada Research Chair in Diplomacy and International Security – ‘Tier 1 Chairs are for outstanding researchers acknowledged by their peers as world leaders in their fields.’ His most recent articles have appeared in International Relations and International Studies Review. He is the co-editor (with Myriam Dunn Cavelty) of The Routledge Handbook of Security Studies, 2nd edition. His current research is on French Grand Strategy, rationality and IR, and the aestheticisation of violence.
Myriam Dunn Cavelty is Senior Lecturer and Deputy for Teaching and Research at the Centre for Security Studies, ETH Zurich, Switzerland. Her research focuses on the politics of risk and uncertainty in security politics and on changing conceptions of (inter)national security due to cyber issues. She is the author of Cyber-Security and Threat Politics: US Efforts to Secure the Information Age (Routledge, 2008) and co-editor of, among other volumes, The Routledge Handbook of Security Studies, 1st and 2nd editions (Routledge, 2012 and 2016); Securing the Homeland: Critical Infrastructure, Risk, and (In)Security (Routledge, 2008); and Power and Security in the Information Age: Investigating the Role of the State in Cyberspace (Ashgate, 2007). Her works have appeared in various outlets, including Security Dialogue, International Political Sociology, and International Studies Review. In addition to her teaching, research, and publishing activities, she advises governments, international institutions, and companies in the areas of cyber security, critical infrastructure protection, risk analysis, and strategic foresight.
Acknowledgements
Earlier versions of this article were presented at the Millennium Annual Conference (London, 2012), the International Relations Seminar at the University of Edinburgh (Edinburgh, 2013), the APSA Annual Meeting (Washington DC, 2014), and the Theory Seminar at the Norwegian Institute of International Affairs (Oslo, 2014). We would like to express our gratitude to the organisers, participants, and discussants of these events – in particular Geoffrey Herrera, Xavier Guillaume, Aida A. Hozic, and Karsten Friis – for their insightful comments. We also want to thank the anonymous reviewers and Anne Harrington.