
A theory of actor-network for cyber-security

Published online by Cambridge University Press:  12 May 2016

Thierry Balzacq*
Affiliation:
Scientific Director of the Institute for Strategic Research (IRSEM) Tocqueville Professor of International Relations, University of Namur, Belgium
Myriam Dunn Cavelty
Affiliation:
Senior Lecturer, ETH Zurich
*Correspondence to: Professor Thierry Balzacq, 1 Place Joffre, 75700 Paris SP07. Author’s email: thierry.balzacq@defense.gouv.fr

Abstract

This article argues that some core tenets of Actor-Network Theory (ANT) can serve as heuristics for a better understanding of what the stakes of cyber-security are, how it operates, and how it fails. Despite the centrality of cyber-incidents in the cyber-security discourse, researchers have yet to understand their link to, and effects on, politics. We close this gap by combining ANT insights with an empirical examination of a prominent cyber-incident (Stuxnet). We demonstrate that the disruptive practices of cyber-security caused by malicious software (malware) lie in its ability to actively perform three kinds of space (regions, networks, and fluids), each activating different types of political intervention. The article posits that the fluidity of malware challenges the consistency of networks and the sovereign boundaries set by regions and, paradoxically, leads to a forceful re-enactment of them. In this respect, the conceptualisation of fluidity as an overarching threat accounts for multiple policy responses and practices in cyber-security, as well as attempts to (re-)establish territoriality and borders in the virtual realm. While this article concentrates on cyber-security, its underlying ambition is to indicate concretely how scholars can profitably engage with ANT’s concepts and methodologies.

Type
Research Article
Copyright
© British International Studies Association 2016 

Introduction

In recent years, highly publicised cyber-incidents with names like Stuxnet, Flame, or Duqu have solidified the impression among political actors that cyber-incidents are becoming more frequent, more organised and sophisticated, more costly, and altogether more dangerous. This has turned cyber-security into one of the top priorities on security agendas worldwide.Footnote 1 There is a strong degree of consistency in what drives this threat perception: the vulnerabilities of a ‘sprawling, open country knitted together by transportation, power and communication systems designed for efficiency not security’Footnote 2 and the disembodied adversaries that are seen as likely to take advantage of these vulnerabilities through the anonymity provided by information networks.

This view has two consequences. First, it often leads to cyber-security being presented as operating on one spatial plane, that of a network. The term network conjures up ideas of structured interconnections with regard to the texture of cyberspace.Footnote 3 From this follows a specific understanding of what is at stake in cyber-security: on the one hand, the protection of the mobility of data, and on the other, the stability of the networks of relations that compose and sustain cyberspace. Ultimately, the network is controllable – and such control is coveted. Second, the network is seen merely as a medium through which so-called ‘malware’ (malicious software), which causes cyber-incidents, transits.

Some scholars argue that there is not one, unique cyberspace, but many spaces. For instance, Nick Bingham challenges the illusion that ‘cyber-space as a singular exists at all’.Footnote 4 Stephen Graham suggests that ‘cyberspace … needs to be considered as fragmented, divided and contested multiplicity of heterogeneous infrastructures of actor-networks’.Footnote 5 And yet, all these scholars still tend to assume that the different spaces thus enacted remain ‘networks’. The main task of cyber-security experts in this view is to account for how and under what conditions ‘cyber-threats’ in the form of malware sail through different networks, and to develop strategies to counter them effectively.

In this article, we argue that this view is too simplistic. We show that cyber-incidents have multifaceted spatial effects, which condition both different understandings of cyber-security and the kind of operations it commands or accommodates. To develop our argument about how cyber-incidents and international politics are related, we draw on Actor-Network Theory (ANT) and its analytical toolbox. ANT is a heterogeneous conglomerate of ideas, with origins in Science and Technology Studies. It follows a relationalist tradition and focuses on ‘dynamic relations between scientific and political sites’;Footnote 6 it rejects the dualism between the social (human) and the material (nonhuman) in the study of the social; and it has close ties to the post-structuralism of Foucault and Deleuze, but tends to be more empirical.Footnote 7 ANT is currently gaining prominence in International Relations and security due to the new types of issues and research questions brought on by the ‘material turn’Footnote 8 – which signifies an interest in the importance of artefacts, natural forces, and material regimes in social practices and systems of powerFootnote 9 – as well as the ‘practice turn’Footnote 10 – which takes organised forms of doing and saying (‘practices’) as the smallest unit of analysis rather than actors or structures.

We base our analysis on work by ANT-scholars John Law, Annemarie Mol, and Vicki Singleton in particular. These researchers suggest that objects come in various configurations, each associated with spatial processes that prompt or support a recombination of relationships.Footnote 11 Specifically, objects can emerge as ‘volumes’ thriving in a regional Euclidian space, as ‘networks of relations’ configuring a network space, and as ‘flows’ that continuously adapt their shape in order to generate a fluid space. The article suggests that the study of cyber-security should emphasise the objects that circulate within different (cyber)spaces, thereby co-creating them. In cyber-security, these objects are malware. The different spaces created by malware have implications for the way we conceptualise cyber-security, the processes that bring actors together, and the type of interventions that are made possible.

The article combines conceptual groundwork with an examination of cyber-incidents or malware. It has five sections. Section I contextualises the discussion of cyber-security. We situate cyber-security’s concerns in the wider ambit of ANT in order to highlight both how its material background is under-theorised and how the effects of this on our understanding of cyber-security are insufficiently considered. Section II conceptualises cyber-incidents through ANT-concepts. By unpacking the concept of malware, we claim that ANT assumptions on spatiality enable us to characterise cyber-incidents as active agents of change. Section III then examines these questions with respect to the nature and functions of different spaces generally and the character of mediators specifically. Section IV focuses on the enactment of cyber-security through a discussion of malware’s performances. In Section V we show the analytical purchase of this typology with the help of a brief case study. We illustrate how an individual malware (Stuxnet) worked through these different spatialities and how this resonates at the level of politics. In the conclusion, we reflect on the consequences of such a reconceptualisation for cyber-security and security studies more generally.

Situating cyber-security research in security studies

The majority of books, articles, and reports on cyber-security to date remain policy-oriented and problem-solving. The two main questions tackled are ‘who (or what) is the biggest danger for an increasingly networked nation-society-military-business environment’ and ‘how to best counter the threat’.Footnote 12 Theoretically guided or empirically oriented academic research is still relatively rare.Footnote 13 Specifically, despite the significance of cyber-incidents in the larger policy discourse, we have yet to understand their effects on politics, in particular their role in shaping threat perceptions and ultimately policy responses.

Still, it is possible to distinguish between two bodies of literature of potential relevance: the first is produced by the ‘Munk School’, which has consistently focused on issues like (electronic) surveillance and censorship and is thus mainly concerned with the creation of more insecurity by (state) actors through cyber-means.Footnote 14 It has, however, not theorised the link between cyber-incidents (even though it looks at them empirically) and politics. The second, situated in the larger vicinity of critical security studies, is a body of literature by scholars who have used frameworks derived from (or inspired by) Securitisation Theory to examine how different actors in politics have tried to establish the link between the cyber-dimension and national security.Footnote 15 In a similar vein, some recent articles have focused on metaphors in the cyber-security discourse to explain political responses.Footnote 16 These texts support observations made elsewhere that the process of securitisation in a given socio-political community is not restricted to one setting and one type of audience only, but often involves several overlapping ones,Footnote 17 or that there are different political functions of and strategies behind security utterances.Footnote 18

Overall, critical security studies’ engagements with cyber-security remain analytically thin. First, cyber-security is a type of security that unfolds in and through cyberspace, so that the making and practice of cyber-security is at all times constrained and enabled by this environment. This factor is ignored by most of the literature. Second, scholars often focus solely on speech acts by political elites and therefore do not see how these discursive practices are facilitated or thwarted by preceding and preparatory practices of actors that are not as easily visible, including actors outside of government.Footnote 19 Third, due to the emphasis on official statements by ‘the heads of states, governments, senior civil servants, high ranked military, heads of international institutions’,Footnote 20 existing scholarship grasps only a limited expression of high-urgency cyber-security (usually cyber-war).

Cyber-security is both less and more, however. It is less because it is not only, and not very often, about situations of greatest urgency. It is a multifaceted set of technologies, processes, and everyday practices. And it is more, because multiple actors use different threat representations employing differing political, private, societal, and corporate notions of security to mobilise (or demobilise) different audiences. Cyber-security is co-produced by every private computer user, by computer security specialists in the server rooms of this world, by programmers, by Chief Information Officers (CIOs) or Chief Executive Officers (CEOs) deciding on cyber-security investments, by security consultants, by cyber-forensics experts, by regulatory bodies and standardisation organisations, and only lastly by politicians and other government officials who interpret digital events and (re)act to them in the form of verbalised expectations and fears or, ultimately, policies.

In the entirety of the literature, cyber-incidents are recognised as important – but they are not conceptualised as active drivers of cyber-politics.Footnote 21 Traditional security studies see cyber-threats as objectively given: matters of cyber-war, for example, can be identified as such, given specific ways of defining it.Footnote 22 Why cyber-incidents are linked to different threat categories, from which specific responses are deduced, has not been of interest. Constructivist or critical research in turn has looked at how security-meanings are constructed through the connection of the cyber-prefix to well-known threat categories, and how concepts such as ‘cyber-war’, ‘cyber-terror’, or ‘cyber-crime’ generate political effects. However, that literature fails to address how specific interpretations of cyber-incidents happen first and foremost around the material ‘realities’ of computer disruptions in technical communities, and how these interpretations then serve as a basis for political action.

This article aims to close part of this gap by focusing on how cyber-incidents shape, and perhaps even transform, cyber-security politics by stabilising or challenging different kinds of (political) imaginations and interventions. We claim that in order to understand the role (and agency) of cyber-incidents, we must understand what they do – how they perform – in their environment, before they are interpreted by actors in political processes. Therefore, we conceptualise cyber-incidents as deliberate disruptions of routine and everyday cyber-security practices, designed to protect networks, computers, programs, and data from attack, damage, or unauthorised access. This definition follows the standard understanding of cyber-security in the technical realmFootnote 23 and highlights the importance of taking seriously the technical-material (referent) object in this danger discourse (computers and computer networks) and the actors directly involved in securing them.

Conceptualising cyber-incidents through ANT

IR and security literature is not yet overly familiar with ANT’s vocabulary, though there are recent attempts to evaluate the most effective ways in which ANT could extend IR’s analytical depth.Footnote 24 Clearly, the growing importance of materiality, process philosophy, and a focus on practicesFootnote 25 puts ANT in an interesting position within IR, as it combines those concerns with a resistance to the anthropocentrism that used to characterise theories of IR.

The goal of this section is to offer a brief overview of some of ANT’s concepts that help us understand under-researched aspects of cyber-security. In many ways, ANT is difficult to define and delineate precisely, for two main reasons. The first is that ANT comes under different names and shades: ‘sociology of translation’, ‘actant-network’, or ‘actant rhizome ontology’ are some of the labels commonly used to designate ANT. The second reason is that ANT is the product of a sometimes-bewildering range of theoretical lineages. Therefore, capturing ANT’s twists and turns in any depth is impossible (and probably unnecessary) within the context of this article. We therefore only emphasise those concepts and principles that seem most promising for an analysis of cyber-security – without any claim to comprehensiveness. In order to identify them, we first conceptualise and ultimately define cyber-incidents. We then introduce the main concepts from ANT-research that help us to further understand the most important elements of such a conceptualisation.

The features of cyber-incidents

We conceptualise cyber-incidents as deliberate disruptions of normalised cyber-security practices by malware, leading to different effects on (political) imaginations and interventions.Footnote 26 In what follows, the different elements are unpacked: deliberate disruptions, normalised cyber-security practices, malware, and effects.

In cyberspace, operations are conducted with the help of software. In general terms, software consists of ‘code’ – a distinct number of lines of computer language, a ‘program’. Programs are written to carry out physical computations – for this to work, a physical implementation of the language is required. Thus, every piece of software must spell out how the constructs of the language (abstract mathematical notions in the form of syntax/semantics) are to be physically instantiated (implementation). As a consequence, programs always have an effect; code cannot just exist as an amalgamation of symbols, but always (also) exists in its execution or in its becoming, in other words, in its performance.Footnote 27
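The point can be made concrete with a deliberately trivial sketch (our own illustration, not drawn from the article's sources; the function name and scenario are hypothetical): the same few lines are inert as symbols on disk, and only produce an effect – benign or harmful depending on context and interpretation – when they are executed.

```python
import os
import tempfile

def overwrite(path, payload):
    # The same syntax could belong to a backup utility or to a 'wiper':
    # its 'goodness' or 'badness' emerges only in its performance and
    # its interpretation, not in the text of the code itself.
    with open(path, "wb") as f:
        f.write(payload)

# Create a file holding some data.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"original data")
tmp.close()

# Up to this point the function above has had no effect: as mere
# symbols it is imperceptible. Only the call below instantiates it.
overwrite(tmp.name, b"")  # execution: the file's contents are erased

result = open(tmp.name, "rb").read()
os.unlink(tmp.name)  # clean up the temporary file
```

The sketch is only meant to illustrate the conceptual claim that code 'exists in its execution'; it stands in for no particular malware discussed in the article.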

In fact, it is only through its performance that software becomes perceptible and politically relevant, because without it, ‘code is imperceptible in the phenomenological sense of evading the human sensorium’.Footnote 28 Furthermore, this performance is not purely technical. Software also has a distinct relationality to ‘the outsides in which it is embedded’,Footnote 29 be it (for example) to interfaces of mobile technology, games, or browsers, but also highly abstract concepts such as ‘the economy’, ‘national security’, etc., because its purpose is to do something. Moreover, code directly effects, and literally sets in motion, processes in the technical, but also the social, and political realm – and thereby can have ‘immediate and political consequences on the actual and virtual spaces’.Footnote 30

Importantly, the ‘goodness’ or ‘badness’ of software cannot be determined before said performance and its interpretation, because it always incorporates a range of possible becomings in its code.Footnote 31 However, cyber-incidents do not happen in a meaning-free space – many stabilised social practices and meanings exist that shape and restrict the way they can be interpreted. Because computer operations that happen without the knowledge or consent of the owner of the computer have been socially constructed as illegal and negative over the years,Footnote 32 software used for these operations is called malware, a portmanteau for malicious software, which presupposes malicious intent. In general, malware is deliberately designed to interfere with computer operations, that is, to record, corrupt, or delete data, or simply to spread itself to other computers and throughout cyberspace.Footnote 33 Importantly, however, malware could also be seen as a means to a ‘good’ end, at least in the eyes of the person using it to achieve certain goals:Footnote 34 stealing data that is kept secret by the government because it contains incriminating information, for instance, and then demanding justice. Because the intent of the person causing the cyber-incident is almost impossible to know with enough certainty when the incident occurs (and in many cases is never revealed), intent plays a marginal role in the classification of cyber-incidents or is simply inferred, using the cui bono logic (to whose benefit).

Almost all immediate responses to malware are reactive: the malware’s performance always comes first. The reason for this is that malware ‘exploits’ open (often unknown) vulnerabilities in the information infrastructure. Due to a variety of (historical) technological, economic, and social reasons, exploitable vulnerabilities abound in cyberspace.Footnote 35 Those actors interested in programming and using malware will pick those vulnerabilities that serve their specific purpose or intent best, and will write a program to that end – whereas software companies interested in selling commercial products have almost no incentive (and sometimes not even the ability) to patch them in advance.Footnote 36

The moment in which malware becomes perceptible, sometimes even literally visible through its performance, is when the previously routine processes and practices of cyber-security become destabilised or even break down. We understand cyber-security as a multifaceted set of practices designed to protect networks, computers, programs, and data from attack, damage, or unauthorised access – in short, standardised practices by many different actors to make cyberspace (more) secure. As we will show in more detail later, the reactions of different actors to malware and the breakdown of routine that follows are multifaceted, just as malware is multifaceted – they depend on the actual and perceived impact of the computation and the interpretations that follow, which in turn are based on different practices of cyber-security.

Introducing ANT concepts

There are (at least) three points from the ANT literature that highlight important aspects of the link between cyber-incidents and their effects: agency of nonhuman entities and their role in upholding or changing practices (concept of actants); what happens in the moments of break-down of normalised processes (concept of depunctualisation); and the relationship between objects and space (performance of spaces). The last is the most important aspect of our theory.

Actants: ANT rejects the dualism that tends to separate the social (human) from the material (nonhuman): human and nonhuman entities can equally initiate action. Under the principle of ‘generalised ontological symmetry’ different kinds of entities (humans and nonhumans) are involved in relational productive activities.Footnote 37 On the one hand, it means that humans and nonhumans share the same capacity for agency. On the other hand, it means that both human and nonhuman meanings are ‘effectively generated by a network of heterogeneous, interacting materials’.Footnote 38

In ANT-terminology, entities that can change actions or practices are called ‘actants’. ANT distinguishes between different types of actants, differentiated by how they constitute specific networks by virtue of their relations with other actants.Footnote 39 The most interesting actants for ANT-scholars are mediators, because they ‘render the movement of the social visible to the reader’Footnote 40 and they always affect whatever flows through them. Understanding malware as a mediator focuses our attention on how it circulates and affects circulation in cyberspace. It also allows us to grant malware transformative ‘agency’ of its own, detached from the ‘intent’ of the person who wrote the code.

De/punctualisation: Using ANT concepts, we can argue that social order in cyberspace is produced by the heterogeneous relations within and through relevant networks. The ultimate aim of cyber-security as a practice is to stabilise these networks, which are there to execute a specific performance, namely the uninterrupted provision of specific data flows for the efficient functioning of the economy, society, the state, etc. The success of any actor-network is usually related to the degree to which it does not appear to be a network that demands effort for keeping it together, but rather a coherent, independent entity (punctualisation).Footnote 41 This desired state, however, is repeatedly challenged by cyber-incidents. In ANT-terms, these moments of disruption are called depunctualisationFootnote 42 because they make network performances ‘break down’, at which point the different parts of a network become visible to the observer.Footnote 43

These moments of disruption brought on by cyber-incidents as mediators are not only of key importance for the study of cyber-security; they also invite engagement with the methodological issues associated with using ANT in new, unfamiliar settings. In particular, the focus on (lab) practices that foundational ANT research was concerned with comes with the demand to ‘follow the actors’ and observe, ‘as far as possible, what they do as much as what they say’.Footnote 44 Hence, variations of ethnography have often been the method of choice for ANT.Footnote 45 However, ethnographic methods are bound to encounter major obstacles when it comes to security topics – for instance, what to do if there is no access to actors or practices, given the level of secrecy that is typical of national security issues? ANT’s particular emphasis on nonhuman objects, coupled with moments of depunctualisation, opens up opportunities for the study of things not easily observable otherwise.Footnote 46

Spatial Performance: For Mol, Law, and Singleton, ‘spaces are made with objects’,Footnote 47 or in other words, actants perform space(s). The task of ANT is to account for the circulation of objects and the structures of relations they activate. In other words, all social relations are complex assemblages of socio-technical entities, and any phenomenon derives its form and content from the web of relations in which it partakes. In this light, entities are assembled and sustained through practical activities, which form and happen in networks.

By situating the concept of the practically formed network at the centre of its investigation, ANT parts ways with classical geography in three significant respects. First, it challenges imaginations or associations of the classical Euclidian metrics mapped onto position, proximity, and distance as explanatory foundations: places are the effects of distinctive relations.Footnote 48 Second, whereas classical geography tends to conceive of spaces as pure containers for objects, ANT reconciles spaces and objects, arguing that spaces are performed by and through objects. Therefore, the separation between objects and spaces is artificial. Likewise, ANT does not treat the difference between ‘digital’ and ‘physical’ spaces as a matter of essence, but as the expression of their specific framing properties. Third, Law and others suggest that objects themselves come in various configurations, each associated with spatial processes that prompt or support a recombination of relationships. Specifically, objects can emerge as ‘volumes’ thriving in a regional Euclidian space, as ‘networks of relations’ configuring a network space, and as ‘flows’ that continuously adapt their shape in order to generate a fluid space.Footnote 49

The emergence of objects relative to the enactment of space is crucial to understanding topology’s analytical importance for cyber-security. If actants perform ‘several kinds of spaces in which different “operations” take place’,Footnote 50 then cyber-incidents should be read and ultimately tackled in the spaces they build themselves, not in the one that is supposed to predate their enactment. Furthermore, if different objects can enact various spaces, then we can assume that no such thing as ‘the’ cyberspace exists. At the same time, cyberspace is not exclusively derivative of the sum of threats involved; nor is it a simple function of the actants that are put into relations. Rather, an approximate (but imperfect) image is to say that cyberspace exists by virtue of what circulates within its meshes. Malware therefore co-constitutes notions of cyberspace and cyber-security.

With these ANT-concepts in hand, we can now reformulate our initial definition of cyber-incidents (deliberate disruptions of normalised cyber-security practices by malware, leading to different effects on political imaginations and interventions) like this: Cyber-incidents are depunctualisations of cyber-security networks by mediators in the form of malware, with effects in regional, networked, and fluid spaces. What these three topologies look like and how they interact is specified in the next section.

Topologies and security

Regions, networks, and fluids come with distinct ways of envisioning order and disorder, based on the relationships between different objects that form the respective space. Therefore, they also come with differing notions of what a threat is and how to secure against it. At the same time, the three spatialities are interconnected in fundamental ways: While any space attempts to situate itself as the ‘other’ of alternative spaces, it is in fact profoundly linked to the existence of these spaces.

Regional topography has shaped the IR imagination for a long time, in particular through the invention and institutionalisation of borders and sovereignty.Footnote 51 Regions are the most familiar and straightforward spaces that IR and security scholars encounter. Regions connect and unite what is close and draw boundaries around elements that belong together. In a regional space, divisions between inside and outside are strict, places are exclusive, and overlaps between locations are not tolerated. Regions cluster objects together. Their primary aims, if not their results, are to suppress or minimise the differences among objects that reside inside and, correlatively, to play out the differences with what lies elsewhere. Those differences are meant to be solid.

At first sight, networks undermine most of regions’ basic assumptions, in part because networks establish relationships between elements that, in regions’ terms, are distant on the map. Put more generally, the localisation of objects does not determine their proximity and, as such, boundaries are not decisive in drawing out objects’ identity. ‘Networked threats’ are a recurrent topic in the security discourse, wherein networked forms of organisation are seen as a ‘direct challenge to hierarchical forms of organization’,Footnote 52 seeing that they tend to ‘represent a threat to the spatialized forms of intelligibility and control’.Footnote 53 In ANT-terms, a network space is generated by a network-object. Because this claim can easily be misinterpreted, it is worth repeating that for ANT, many objects, from texts to software to vessels, are ‘networks’. In order to preserve their integrity, they depend on a stable structure of relations between their internal components and the external configuration of interactions they fold in. To move as a vessel from Amsterdam to Lagos (Euclidian space), the ‘relative syntactical positions of the vessel’ (network-space) have to be held together, otherwise the network collapses.Footnote 54

On closer inspection, then, networks replicate regions’ concerns with the ability of the object to preserve its integrity when it moves from one location to another;Footnote 55 indeed, there is a relational isomorphism between regions and networks.Footnote 56 Conceived in this way, networks sustain what David Harvey calls ‘cogredience’, that is, ‘the way in which multiple processes flow together to constitute a single constant, coherent, though multi-faceted time-space system’.Footnote 57 The security of a network is very much about ‘keeping everything in its place’.Footnote 58 In this context, objects are perceived to be threatening if they disrupt either the ‘cogredience’ or the functional integrity of the network. If that happens, the network-object loses its coherence, and the syntactical relations that held it rigid are henceforth subject to constant change; in fact, everything becomes variable. This is the realm of fluid spaces and fluid objects.

There are no clear boundaries in fluid space, and the objects that generate it and are generated by it are not well defined and not clearly visible. It is also a ‘world of mixtures’,Footnote 59 in which previously separated categories, like cause and effect, or good and bad, are intermingled. Unlike in networks, objects within a fluid space do not depend on one another. Networks tend to crumble if any of their constitutive parts is detached from the relational architecture that sustains them. One important aspect of this is what a network does when it encounters such a challenge: how it tries to maintain the identity of its elements, or how other networks, with different characteristics, take over.Footnote 60 By contrast, fluids are more resilient to the changing character of their objects, since order is not at all important in this space.

There are important differences between networks and regions, to be sure, but security in both spaces is defined as stability and immutable continuity. In fact, networks and regions preserve their continuity by identifying crucial centres or points of vulnerability that must be defended against intrusions. Security is fundamentally about protecting the ‘obligatory points of passage’.Footnote 61 Fluids, on the other hand, have a very singular complexion, which has implications for the way security is understood therein. In fluid spaces, as Mol and Law put it:

there is no single standpoint to be defended in order to preserve continuity … For since continuity has nothing to do with the integrity of territory in a fluid space, there are no fixed frontiers to be patrolled. Neither is there need for police action to safeguard the stability of elements and their linkages – for there is no network structure to be protected.Footnote 62

As they infiltrate other spaces, fluids absorb networks and regions, though usually in part and rarely in total. Sometimes, networks and regions melt into fluid spaces. Most relevant for security studies:

networks tend to panic when they fail to secure network homeomorphism – at which point what I am claiming to be the … necessary fluidity of objects to networks becomes both visible and Other, represented as a failure and therefore a threat.Footnote 63

Spaces and the occurrence of cyber-disruption

In this section, we show how cyber-incidents and cyber-security practices are interlinked – more specifically, how malware manifests itself in and actively performs three different topologies: regions, networks, and fluid space. The actors involved in these practices are mainly private sector actors with a computer science background. The separation into these different spaces is somewhat artificial, because the respective performances are closely interlinked and often happen almost simultaneously, but it serves an analytical purpose.

Regions – the manifestation of malware in physical space

The performance of regions in cyber-security is linked mainly to the manifestation of malware in physical space, or rather, inside computers or other hardware. Information infrastructures – computers, servers, mobile phones, tablets, etc. – are situated in clear and identifiable geographic locations, inside bordered sovereign territory. Indeed, even though parts of cyberspace might be hard to ‘grasp’ because we see them as ‘virtual’, cyberspace is still fundamentally grounded in physical reality, in ‘the framework of a “real” geography’.Footnote 64 Physical network infrastructures, such as fibre optic cables, which ensure the flow of data from one physical node to another, are inscribed in Euclidian space. Malware travels through these cables before it becomes visible, first through its (technical) effect on a machine with a specific vulnerability that it can exploit, and second through various cyber-security techniques aimed at preventing, detecting, and removing malware.

If a new type of malware depunctualises standard practices, computer specialists working for anti-virus companies identify the program code and then update their software, which runs on millions of machines worldwide, with that information. If the person responsible for the security of the machine updates their version of the anti-virus software as regularly as is expected of them, the local computer is now equipped to detect the malware (either while in transit or already ‘on’ the computer), based on the now-known patterns of data within its executable code. The malware, which has been assigned a specific signature in this process, can now be isolated and removed – it has been given a new type of visibility, which allows it to be traced, identified, deleted, counted, classified, etc.Footnote 65 Some malware that has not yet been given a signature (or that constantly changes its signature in an attempt to dodge discovery) but is similar to already known malware can be identified with heuristic approaches, which do not look for a specific pattern of data but for the ‘bad behaviour’ of software.Footnote 66 This type of defensive software knows from previous digital threats how malicious software acts and will intercept code according to this knowledge.Footnote 67
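The two detection strategies just described – matching known byte signatures and heuristically scoring ‘bad behaviour’ – can be sketched in a few lines of Python. This is purely illustrative: the signature database, the behaviour weights, and the threshold are all invented for demonstration, and real anti-virus engines are vastly more sophisticated.

```python
from typing import Optional

# Hypothetical signature database: detection name -> byte pattern
KNOWN_SIGNATURES = {
    "Demo.Marker": b"\xde\xad\xbe\xef",
}

# Hypothetical heuristic weights for behaviours observed at runtime
SUSPICIOUS_BEHAVIOURS = {
    "writes_to_system_dir": 2,
    "disables_updates": 3,
    "self_replicates": 4,
}

def signature_scan(data: bytes) -> Optional[str]:
    """Return the name of the first known signature found in the data, if any."""
    for name, pattern in KNOWN_SIGNATURES.items():
        if pattern in data:
            return name
    return None

def heuristic_scan(observed: set, threshold: int = 5) -> bool:
    """Flag a sample whose summed behaviour score reaches the threshold,
    even when no known signature matches."""
    score = sum(SUSPICIOUS_BEHAVIOURS.get(b, 0) for b in observed)
    return score >= threshold
```

A sample containing the marker bytes is caught by `signature_scan`; a sample with no known signature that is observed to self-replicate and disable updates (score 7) is still flagged by `heuristic_scan`, mirroring the two layers of visibility described above.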

Cyber-security reports, most prominently produced by the anti-virus industry or specialised consultants, visualise malware based on the collection of data about infected machines, which is then aggregated in terms of infection rates per country.Footnote 68 The practice of aggregating malware infections this way performs a version of the social in which space is exclusive: there are neat divisions with no overlap, based on a comfortable geography of well-known political entities. This allows the ‘good’ to be distinguished from the ‘bad’, and identifies the areas most in need of intervention. Whereas every locale in which there are computers/computer networks is a (potential) space for cyber-security, the focus on infection rates per country easily translates into regions of in-security: for example, one company lists Taiwan, China, and South Korea as the countries with the highest percentage of malware-infected computers in the world. The second area of cyber-in-security is South America: Argentina, Peru, Brazil, Chile, Colombia, and Venezuela all have ‘above average’ infection rates.Footnote 69 On the other hand, this type of visualisation also helps to single out the countries with the lowest infection rates as ‘good cyber-citizens’ – for example, according to Microsoft’s Security Intelligence Report, Austria, Finland, Germany, and Japan.Footnote 70
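As an illustration only, the aggregation step described above – turning per-machine detections into per-country infection rates, and thereby performing ‘regions of in-security’ – might look like this in Python. The telemetry records and country totals are entirely invented.

```python
from collections import Counter

# Hypothetical telemetry: (machine_id, country) pairs reported by scanners
detections = [
    ("m1", "TW"), ("m2", "TW"), ("m3", "KR"),
    ("m4", "AR"), ("m5", "AR"), ("m6", "FI"),
]

# Hypothetical number of monitored machines per country
monitored = {"TW": 10, "KR": 10, "AR": 8, "FI": 20}

def infection_rates(detections, monitored):
    """Share of monitored machines per country on which malware was found."""
    infected = Counter(country for _, country in detections)
    return {c: infected.get(c, 0) / n for c, n in monitored.items()}

rates = infection_rates(detections, monitored)
# Rank countries from most to least infected: the move that turns raw
# detection data into an exclusive, country-by-country map of in-security
ranking = sorted(rates, key=rates.get, reverse=True)
```

Note that the output is exhaustive and non-overlapping: every monitored machine falls into exactly one country, which is precisely the ‘neat divisions with no overlap’ that the regional performance depends on.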

Beyond regions based on malware infection, ‘bad’ cyber-behaviour, linked to hot spots of malware production, is also singled out on the basis of reports by private computer security companies and more in-depth research and analysis by the law enforcement community.Footnote 71 Key regions of insecurity were Romania as a cyber-crime havenFootnote 72 and China for cyber-espionage and cyber-war activities,Footnote 73 at least before the Snowden revelations in June 2013. Such regions of insecurity then become a specific focus of the international community and of various types of political interventions.

Networks – stabilising cyber-security practices

Network spaces are performed by similar cyber-security practices – this creates places that are ‘close’ to one another, with similar sets of elements and relations.Footnote 74 Three different overlapping network spaces emerge: first, a network of infected computers; second, a network of malware removal; and third, networks of behavioural cyber-security norms.

The first is a network of (infected) bodies: computers, all with the same vulnerability and/or type of infection. For example, the computer worm Code Red, released in 2001, exploited a vulnerability found in machines running Windows 2000 and Windows NT. Each and every such machine was thus part of the particular network space Code Red performed. Once installed, the worm made infected machines execute distributed denial of service (DDoS) attacks against selected websites, including the website of the White House:Footnote 75 all of these machines were places of sameness at that moment in time. In the current cyber-security debate, one such network of sameness is the ‘botnet’ – a robot network of a large number of malware-infected machines that perform specific (remote-controlled) tasks together, like spamming or DDoS attacks, (most often) without the knowledge or consent of the owner of the machine.Footnote 76

The second network performed by malware is a network of malware removal: anti-virus software, once updated, performs the same tasks on millions of machines worldwide. Importantly, these practices are also the basis for in-security measurement, which creates the data needed for the performance of regions as described earlier. For example, any computer with a Microsoft operating platform or a specific anti-virus software is turned into a similar place; such computers are close because they are all part of the same measurement network. Even though a variety of methodologies are used to measure in-security and to detect and fight malware, all of these practices serve to identify and characterise malware and its effects – and the methodologies are very similar.

As an extension of this network of similar practices, all well-known and standardised types of information assurance practices, enacted on a constant basis by computer network personnel even without a depunctualisation effect, perform networks as well. Among them are, prominently, activities related to the information security standards released by the International Organization for Standardization (ISO),Footnote 77 or risk assessment methodologies that are used and implemented by computer security specialists, like those propagated by ENISA in Europe.Footnote 78 On top of that, the ‘makers’ of cyber-incidents also perform networks, by forming an organised cyber-crime market operating across the globe, with an orderly strategic and operational vision, logistics, and deployment.Footnote 79 The same applies to hacker communities or hacker collectives that display similar practices or adhere to the same ‘hacker ethics’.

Importantly, in the cyber-security networked space, the creation of stability is one of the main goals; and the stability of network practices is a precondition for performing cyber-security regions. There are many attempts to turn security practices into ‘best practices’, which are then stabilised and standardised. Big players like the US or the European Union focus much of their attention on harmonisation: of how cyber-security aspects are defined, how they are talked about, how knowledge about them is shared, and, most importantly, how they are measured.Footnote 80 Beyond that, there are several attempts to stabilise the rules of engagement for cyberspace conflicts.Footnote 81

Fluid space – malware as embodied uncertainty

As mentioned, the ontology of malware ‘is fixed only in and through the unfolding of their affective relations’Footnote 82 because it always incorporates several possible becomings in its code. This means that malware also enacts fluid space as soon as it is discovered, as a logical consequence of the uncertainty it brings when it starts to deliver effects. In the interval between depunctualisation and categorisation, otherwise stable cyber-security practices become (temporarily) fluid, thereby making the previous order un-orderly, challenging previous knowledge, and often exposing previous solutions and assumptions as inadequate.

The prime action is to re-establish order through knowledge-creation. If malware is discovered, anti-virus specialists not only identify its ‘signature’ to update anti-virus software accordingly; they also identify the vulnerability that was used. This information is usually passed on to whoever is responsible for the product with that particular vulnerability (usually software producers), so that they can ‘patch’ (close) it, to prevent future misuse of the same vulnerability. At the same time, they try to uncover its intended purpose and the damage it caused (worldwide) as fast as possible, for example by reverse-engineering it. However, even though malware is always programmed and released intentionally, the shape and type of its manifestations in different topological spaces are not always fully intended or controllable by the creator. The Internet’s first prominent malware (the Morris worm, released in 1988) is typical of the unintended consequences of self-replicating software in networks, both technically and politically.Footnote 83 Malware that is supposed to stay hidden (because its prime purpose is to ‘steal’ data for as long as possible) is usually detected due to faults in its code. Spill-over effects resulting in digital ‘mass-hysteria’ or strong (over)reactions in government circles are also a very common feature of cyber-security, because of the fluidity viruses enact.Footnote 84

More importantly, because malware is ultimately released intentionally by humans, the search for culprits is always part of this knowledge-creation. However, it is often impossible to know with certainty who is responsible for a cyber-attack – at least right after it is discovered – and what the exact intention behind an attack was. Only through careful computer forensics can parts of the puzzle be uncovered. This ‘attribution problem’ is an unavoidable consequence of the technological attributes of the space that malware moves through:Footnote 85 it provides a great deal of anonymity for the technologically apt. Politically speaking, exploits that seemingly benefit states might well be the work of third-party actors operating under a variety of motivations. At the same time, the challenges of clearly identifying perpetrators give state actors convenient ‘plausible deniability and the ability to officially distance themselves from attacks’.Footnote 86 Malware, then, disintegrates knowledge about ‘the other’, making fluid the boundaries between the threatening and the threatened.

Moreover, the network of measurement, which is necessary to perform regions, routinely breaks down: attempts to collect and aggregate data beyond individual networks fail due to insurmountable difficulties in establishing what to measure and how to measure it, and what to do about incidents that are discovered very late, or not at all.Footnote 87 The implicit knowledge about what might be there, lurking undetected/undetectable within machines and networks, has a fear-inducing effect – mostly because it is invisible up until the moment when it reveals its damaging potential, and because there are no defences against it; in other words, established network-practices are powerless against it. This is the moment in which networks start ‘to panic’ and in which their breakdown is not only a failure, but also an outright threat.Footnote 88

Enacting cyber-security: Buffering against Stuxnet

In the previous section, we have shown how cyber-incidents perform different spaces through related cyber-security practices. The same cyber-incident performs regions, networks, and fluid spaces quasi ‘simultaneously’, often fluctuating between them. Cyber-security as a socio-political practice is negotiated within and among the three spaces, each activating different types of operations. In this section, we want to show how these performances are linked to political imaginations and interventions with relevance for security. To illustrate the analytical purchase of our approach more concretely, we conduct a brief case study in this section, using Stuxnet as an example.

But why Stuxnet, and what is it? Stuxnet is a computer worm that was discovered around June 2010 and has subsequently been called ‘[O]ne of the great technical blockbusters in malware history’.Footnote 89 For a piece of malware, it is a complex program. Symantec calculated that the malware is about 500kb in size, fifty times as large as the ‘average’ malware,Footnote 90 but still small enough not to attract attention from performance analysts. It is likely that writing it took a substantial amount of time, advanced-level programming skills, and insider knowledge of industrial processes. Stuxnet was therefore likely the most expensive malware found up to that time. In addition, it behaved differently from malware released with criminal intent: it did not steal information and it did not herd infected computers into so-called botnets from which to launch further attacks. Rather, it looked for a very specific target: Siemens’ Supervisory Control And Data Acquisition (SCADA) systems, which are used to control and monitor industrial processes.Footnote 91 Due to these characteristics, Stuxnet quickly came to herald a new chapter in the cyber-security community: the era of highly sophisticated and targeted attacks. Because of its importance in the discourse, it is a prime example for showing how cyber-security practices provide the basis for political imaginations and interventions.

Stuxnet has been analysed and interpreted in technical publicationsFootnote 92 and it has generated a handful of publications on the new manifestation of cyber-war.Footnote 93 The information that is most relevant for this case study, however, is to be found in the technical reports of anti-virus companies and security researchers, in newspaper articles that focus on the timeline of preparation and discovery, and on technology blogs. Cyber-security practices related to cyber-incidents are mainly about knowledge-creation. Thus, it is our goal to reconstruct this process in relation to the three spaces as clearly as possible, despite the patchy information available. In order to show how the same cyber-incident performs regions, networks, and fluid space ‘simultaneously’, we indicate the relevant space in brackets in the text rather than splitting the analysis up into the three spaces.

The spatiality of Stuxnet

An account of a malware’s performance must start with the moment of depunctualisation – before it is visible, it does not set in motion any effects beyond the purely technical. In Stuxnet’s case, depunctualisation happened in several stages: in June 2010, the Belarusian security company VirusBlokAda discovered Stuxnet on a client’s machine in Iran, because it had been caught in a reboot loop (fluid space).Footnote 94 VirusBlokAda passed the information on to Microsoft, because it quickly realised that the malware used a Windows vulnerability to enter a computer on a network, from which it spread to the entire network via other vulnerabilities. Microsoft issued a security advisory about a month later (networked space). Because this is a routine practice, not many people paid attention. In July 2010, however, an Iranian engineer’s computer was accidentally infected – as was later revealed, due to a programming error in Stuxnet (the worm was not supposed to act that way, most likely because the creators had changed the code in order to get better results).Footnote 95 When the computer was subsequently connected to the Internet, this led to a global spread of the malware (fluid space). Several security firms now noticed the worm and wrote signatures for their anti-virus software (networked space).

In July 2010, the news about Stuxnet went public through an announcement by the well-known security blogger Brian Krebs,Footnote 96 attracting intense interest from tech-oriented news media and, through them, creating considerable alarm in wider policy circles (fluid space). Frank Boldewin, another security researcher, noted in an online security forum that Stuxnet targeted specific Siemens control systems.Footnote 97 Five days later, Symantec began monitoring the Command and Control (C&C) traffic of the malware,Footnote 98 in order to observe rates of infection and identify the locations of infected computers (networked space). As of September 2010, the data collected by Symantec showed that there were approximately 100,000 infected hosts worldwide. The data was broken down by country, and it revealed that approximately 60 per cent of infected hosts were in Iran (regional space). The concentration of infections in Iran was taken as an indication that this was the initial target. In addition, 67 per cent of all infected machines in Iran had a particular Siemens software package installed (networked space).Footnote 99

Soon after, Ralph Langner, a German security researcher, stated that Stuxnet was a precision weapon targeting a specific facility and that considerable insider knowledge had been required to create the worm.Footnote 100 The new Iranian plant at Bushehr was first named as the likely target (regional space). Bruce Schneier, another noted security expert, broadened the analytical horizon by adding alternative origination theories, such as a research experiment that went out of control, a criminal worm designed to demonstrate a capability, or a message of intimidation to an unknown recipient.Footnote 101

Later in the month, Iranian officials admitted that computers belonging to Bushehr personnel had been infected by Stuxnet. In November that year, Iran also temporarily halted the uranium enrichment process at Natanz for unclear reasons – it was later revealed that the number of enrichment centrifuges at Natanz had been dropping steadily since around 2009. At a press conference, Iranian president Ahmadinejad said the malware had targeted nuclear sites and succeeded in harming a limited number of centrifuges.Footnote 102 A report from December 2010 revealed that the creators of Stuxnet had known the rotating frequency of the uranium enrichment centrifuges prior to the release of the worm. It seemed likely that Stuxnet physically damaged those particular centrifuges by speeding them up to a frequency that damaged the rotor.Footnote 103

As soon as several unusual aspects of the malware became public knowledge, attempts to ‘attribute’ the malware began to dominate the discussion (fluid space). In September 2010, the word ‘Myrtus’ was found in Stuxnet’s code, which could be read as a reference to the Hebrew word for Esther.Footnote 104 This was immediately taken as proof that the malware was of Israeli origin (regional space). Others noted that the word could have been inserted as deliberate misinformation, to implicate Israel,Footnote 105 or interpreted the ‘Biblical reference’ in Stuxnet’s code as part of a conspiracy theory.Footnote 106 However, in November 2010, the security researcher Ralph Langner publicly claimed that the culprit was most likely Israel, the US, Germany, or RussiaFootnote 107 – using the familiar ‘cui bono’ logic (to whose benefit) as a basis for this statement (networked space). Alternative interpretations existed at the time, but they did not manage to convince a larger audienceFootnote 108 – not long after, it became accepted knowledge that Stuxnet had been launched by the US and Israel (regional space),Footnote 109 who had inside knowledge of the SCADA systems from Siemens.Footnote 110

Debates about this attribution continued among security experts, until a detailed report in the New York Times in June 2012 took an authoritative stance on the attribution question. In this article, David E. Sanger explained how Stuxnet was programmed and released as a collaborative effort between American and Israeli intelligence services (regional space). Sanger does not name his sources but suggests they are high-level American administration officials.Footnote 111 According to Sanger, the effort to develop cyber-sabotage capacities goes as far back as 2006, when President George Bush felt he needed additional options for dealing with Iran. Bush subsequently authorised an effort to send a ‘beacon program’ to Iranian control centres that would map out the network at Natanz (it is suspected that the ‘Duqu’ worm was used for this purpose).Footnote 112 Apparently, it took months until the beacon had collected enough evidence and transmitted the data back to the NSA. After making sense of the information, the Americans started to develop the worm. According to Sanger, the collaboration with Israel grew out of two motives: first, Israel had been pressuring the US for authorisation of a military strike, which the United States had denied; Stuxnet was seen as an acceptable sabotage substitute to calm the Israeli authorities down. Second, the expertise of Israeli intelligence was needed to develop the worm. Knowing that the Iranians used old and unreliable P1 centrifuges from the Pakistani nuclear black market run by A. Q. Khan, American developers secretly constructed replicas of the Natanz network in a laboratory in Tennessee and tested the worm until it worked well. In 2013, Edward Snowden confirmed in an interview published in Der Spiegel that the NSA and Israel had co-written Stuxnet.Footnote 113

From technical knowledge to political effects

Tying one piece of malware to broad political effects is neither advisable nor possible – clearly, there is no single cause-and-effect relationship to be found here. In addition, stable knowledge about the malware is sparse – and authorship of the malware has never been confirmed by any political actor. And yet, the creation and dissemination of the specific technical knowledge presented earlier (generated by standard cyber-security practices and through the logic of the three spaces) is the foundation for attempting to make attributions – and for taking political action. It is also the basis for a widely accepted version of ‘the truth’ to emerge, which has changed the cyber-security discourse fundamentally.

The analysis of the different claims about Stuxnet shows that fluid space has several effects. First, it triggers specialised actors (computer security experts) to re-establish routine and normalcy by dissecting the discovered malware, by updating anti-virus software, and by notifying the responsible actors about the vulnerability. This stabilises the networked space. At the same time, fluidity continued around the question of ‘attribution’ – in the Stuxnet case for quite a long time. Here, radical uncertainty about the actual deeds and abilities of the enemy ‘Other’ emerges as a political threat. In the case of Stuxnet, regional space was created several times through aggregated data. While it was less of a problem to identify the target of the malware (though questions about its exact effects remained), establishing its origin with sufficient certainty was an elusive quest. And yet, a specific ‘truth claim’ did emerge in an attempt to stabilise fluid space by means of regional space.Footnote 114

As a consequence of malware’s multifaceted performance, there are strong reactions geared towards fighting fluidity as the main threat on a more general level. There is a rise of cyber-geopolitics, which builds on the regional performances of cyber-incidents to think about control in terms of regions (and territory). Since disruptions to the stability of cyber-security (in the form of malware) materialise first and foremost as disruptions in machines that are situated on specific territories – and these disruptions are caused by objects with a geographical origin (though that origin is sometimes very hard to identify) – the imaginations of modern geography, with its Euclidian presuppositions and spatial anxieties, can easily be translated into the language of cyberspace. This way, malware is made geopolitically relevant and more easily linked to enemy ‘Others’. Malware, then, can be conceptualised as a weapon, aimed at a specific target.

Indeed, in military circles, cyberspace is depicted as a battlefield (or rather, battlespace) on which a covert war can be fought or, depending on the person’s belief, is already being fought.Footnote 115 Most references to a cyber(ed) battlefield are literal references to an actual (geographical) battlefield. Military terms like cyber-weapons, cyber-capabilities, cyber-offence, cyber-defence, and cyber-deterrence suggest that cyberspace can and should be handled as an operational domain of warfare like land, sea, air, and outer space, working under the same premises; and cyberspace has been officially recognised as a new domain in US military doctrine.Footnote 116

Ultimately, this type of politics is about the establishment of territoriality and borders in the virtual realm – about nationally owned and nationally definable space, based on physical infrastructures. The keeper of the peace in bordered cyberspace is naturally the military, coupled with the intelligence services (more specifically, cyber command units). This counter-modern image invokes the enclosed, safe cocoon of a delimited and thus defendable and securable place, newly reordered by the state as the sole real guarantor of security. The hope is that the emergence of a Westphalian cyber-order will bring back the certainty of the Cold War, so that ‘deterrence, wars, conflict, international norms, and security will make sense again as practical and historical guides to state actions and deliberations’.Footnote 117 Fluid space, therefore, is seized upon to forcefully re-establish regional space, with its desired stability, certainty, and security. Thus, even though stability is conceptualised as the only acceptable state of social space, and fluidity is seen as the embodiment of a severe threat that disrupts the homogeneity of networks, and thus of security, fluidity also fundamentally makes possible the re-establishment of regions and networks.

Conclusion

The main aim of this article was to propose a new theorisation of cyber-security through ANT concepts. One of the central insights of ANT is that material artefacts such as computers and software are active, not passive, entities. In this sense, cyber-security is both a process and an outcome of the topologies through and by which threats to digital security are enacted. The argument was put forward in three steps. First, the article situated cyber-security research in the wider field of security studies. It showed that there is an insufficient grasp of how cyberspace defines itself in heterogeneous ways, and of the impact this has on cyber-security, as practice and in practice. Second, the article argued that cyber-threats (in the form of malware or cyber-incidents) should be investigated and addressed within the space(s) they enact, which can take three overlapping forms: networks, regions, and fluids. Third, in order to account for the consequences of this multi-spatiality, we examined the ‘setting-into-work’ of the politics of malware. The article has demonstrated that malware challenges the consistency of networks and the sovereign boundaries set by regions, and at the same time re-enacts them. Further, even when malware is seized upon to perform regions and networks, the ultimate goal of such social practices is the creation of stability (and predictability) of interactions. These mediators can thus be used to create certainty about the good and the bad, and to bring order back to a realm where there seems to be constant change. At the same time, cyber-incidents constantly threaten this order through the fluidity that they embody and the radical uncertainty that they bring. They emerge as reciprocally disturbing and constituting, causing multiple political effects.

The impact of the theoretical arguments discussed in this article exceeds and overflows the field of cyber-security. For instance, ANT recovers the role played by things in the production of a specific (social) order. In this perspective, humans and nonhumans are implicated in the generation of practical knowledge, and this can have important ramifications in fields as diverse as ‘nuclear’ weapons, ‘environmental’ security, or critical ‘infrastructures’. This article has argued that the contribution of humans and nonhumans to knowledge processes supports methodological symmetry, that is, methodological assumptions that apply to humans can be brought to bear on nonhumans. In part, this is what makes both humans and nonhumans actants, in the specific sense that they are ‘entities embedded in practice configurations whose interactions generate … knowledge’.Footnote 118 Taken seriously, this would mean that world politics is organised around three lines of force – material, social, and semiotic – which constitute ‘technologies of knowledge’. For cyber-security, a focus on technical materialities and a practice-oriented view of the performance of malware promise to provide much needed explanations of how the politics of cyber-security works, with a specific focus on ‘attribution’. Nonetheless, additional empirical research is likely to shed further light on the role of cyber-incidents in shaping threat perceptions and, ultimately, policy responses. Moreover, it will enable researchers to focus more specifically on questions of power and knowledge in the field, insisting not only on already available knowledge, but also on knowledge-in-the-making. Finally, this change of emphasis could be relevant for security studies more generally.
Specifically, an ANT-driven analysis, which brings human and nonhuman agencies into alliance, underscores the importance of objects and materiality, and strengthens the focus on everyday practices, can help researchers understand the links between (security) incidents of any kind and politics in different ways. In particular, the multiplicity of spaces performed by objects always leads to heterogeneous, even contradictory, policy responses. Therefore, an ANT approach makes it possible to treat this as the norm, rather than having to choose a focus on one particular type of political intervention. In other words, the focus on objects and on moments of depunctualisation of ‘the normal’ opens up possibilities for the study of security issues not easily accessible through textual enquiries.

Biographical information

Thierry Balzacq is the Scientific Director of the Institute for Strategic Research (IRSEM), the French Ministry of Defense’s research centre. He is also Tocqueville Professor of International Relations at the University of Namur in Belgium and Adjunct Professor at Sciences Po Paris. He holds a PhD from the University of Cambridge. He is a former Honorary Professorial Fellow at the University of Edinburgh, where he was also Fellow for ‘outstanding research’ at the Institute for Advanced Studies in the Humanities. In 2015, he was awarded a Tier 1 Canada Research Chair in Diplomacy and International Security – ‘Tier 1 Chairs are for outstanding researchers acknowledged by their peers as world leaders in their fields.’ His most recent articles have appeared in International Relations and International Studies Review. He is the co-editor (with Myriam Dunn Cavelty) of The Routledge Handbook of Security Studies, 2nd edition. His current research is on French Grand Strategy, rationality and IR, and the aestheticisation of violence.

Myriam Dunn Cavelty is Senior Lecturer and Deputy for Teaching and Research at the Centre for Security Studies, ETH Zurich, Switzerland. Her research focuses on the politics of risk and uncertainty in security politics and on changing conceptions of (inter)national security due to cyber issues. She is the author of Cyber-Security and Threat Politics: US Efforts to Secure the Information Age (Routledge, 2008) and co-editor among others of The Routledge Handbook of Security Studies, 1st and 2nd editions (Routledge, 2012 and 2016); Securing the Homeland: Critical Infrastructure, Risk, and (In)Security (Routledge, 2008); and Power and Security in the Information Age: Investigating the Role of the State in Cyberspace (Ashgate, 2007). Her works have appeared in various outlets, including Security Dialogue, International Political Sociology, and International Studies Review. In addition to her teaching, research and publishing activities, she advises governments, international institutions, and companies in the areas of cyber security, critical infrastructure protection, risk analysis, and strategic foresight.

Acknowledgements

Earlier versions of this article were presented at the Millennium Annual Conference (London, 2012), the International Relations Seminar at the University of Edinburgh (Edinburgh 2013), the APSA Annual Meeting (Washington DC, 2014), and the Theory Seminar at the Norwegian Institute of International Affairs (Oslo, 2014). We would like to express our gratitude to the organisers, participants, and discussants of these events – in particular Geoffrey Herrera, Xavier Guillaume, Aida A. Hozic, and Karsten Friis – for their insightful comments. We also want to thank the anonymous reviewers and Anne Harrington.

References

1 Healey, Jason (ed.), A Fierce Domain: Cyber Conflict 1986 to 2012 (Arlington: Cyber Conflict Studies Association, 2013).

2 Brown, Kathi Ann, Critical Path: A Brief History of Critical Infrastructure Protection in the United States (Arlington: George Mason University Press, 2006), p. 51.

3 The literature is vast. For a start, see Shaviro, Steven, Connected, or What it Means to Live in the Network Society (Minneapolis: University of Minnesota Press, 2003); Castells, Manuel, The Rise of the Network Society (Oxford: Blackwell, 1996); Stalder, Felix, Manuel Castells and the Theory of the Network Society (Cambridge: Polity Press, 2006).

4 Bingham, Nick, ‘Objections: From technological determinism towards geographies of relations’, Environment and Planning D: Society and Space, 14:6 (1996), p. 32.

5 Graham, Stephen, ‘The end of geography or the explosion of place? Conceptualizing space, place and information technology’, Progress in Human Geography, 22:2 (1998), p. 178.

6 Bueger, Christian and Bethke, Felix, ‘Actor-networking the failed state: an enquiry into the life of concepts’, Journal of International Relations and Development, 17:1 (2014), p. 34.

7 Law, John, ‘Actor network theory and material semiotics’, in B. S. Turner (ed.), The New Blackwell Companion to Social Theory (Oxford: Blackwell, 2009), p. 145f.

8 Connolly, William E., ‘The “new materialism” and the fragility of things’, Millennium: Journal of International Studies, 41:3 (2013), pp. 399–412.

9 Mukerji, Chandra, ‘The material turn’, Emerging Trends in the Social and Behavioural Sciences: An Interdisciplinary, Searchable, and Linkable Resource (2015), pp. 1–13.

10 Schatzki, Theodore R., Knorr-Cetina, Karin, and von Savigny, Eike (eds), The Practice Turn in Contemporary Theory (London: Routledge, 2001).

11 See Law, John, ‘Objects and spaces’, Theory, Culture & Society, 19:5/6 (2002), pp. 91–105; Mol, Annemarie and Law, John, ‘Regions, networks and fluids: Anaemia and social topology’, Social Studies of Science, 24 (1994), pp. 641–671; Law, John and Mol, Annemarie, ‘On metrics and fluids: Notes on otherness’, in Robert Chia (ed.), Organized Worlds: Explorations in Technology, Organization and Modernity (London: Routledge, 1998), pp. 20–38; Law, John and Singleton, Vicky, ‘Object lessons’, Organization, 12:2 (2005), pp. 331–355.

12 See, for example, Gompert, David C. and Libicki, Martin, ‘Cyber warfare and Sino-American crisis instability’, Survival: Global Politics and Strategy, 56:4 (2014), pp. 7–22; Denning, Dorothy E., ‘Activism, hacktivism, and cyberterrorism: the Internet as a tool for influencing foreign policy’, in J. Arquilla and D. F. Ronfeldt (eds), Networks and Netwars: The Future of Terror, Crime, and Militancy (Santa Monica: RAND, 2001), pp. 239–288.

13 It is noteworthy, however, that a few cyber-security related articles have been published in high-ranking political science journals recently: Gartzke, Erik, ‘The myth of cyberwar: Bringing war in cyberspace back down to Earth’, International Security, 38:2 (2013), pp. 41–73 or Valeriano, Brandon G. and Maness, Ryan, ‘The dynamics of cyber conflict between rival antagonists, 2001–11’, Journal of Peace Research, 51:3 (2014), pp. 347–360.

14 Cf. Deibert, Ronald, ‘Black code: Censorship, surveillance, and the militarisation of cyberspace’, Millennium: Journal of International Studies, 32:3 (2003), pp. 501–530; Deibert, Ronald and Rohozinski, Rafal, ‘Risking security: Policies and paradoxes of cyberspace security’, International Political Sociology, 4:1 (2010), pp. 15–32.

15 Eriksson, Johan, ‘Cyberplagues, IT, and security: Threat politics in the information age’, Journal of Contingencies and Crisis Management, 9:4 (2001), pp. 211–222; Cavelty, Myriam Dunn, Cyber-Security and Threat Politics: US Efforts to Secure the Information Age (London: Routledge, 2008); Hansen, Lene and Nissenbaum, Helen, ‘Digital disaster, cyber security, and the Copenhagen School’, International Studies Quarterly, 53 (2009), pp. 1155–1175; Lawson, Sean, ‘Beyond cyber-doom: Assessing the limits of hypothetical scenarios in the framing of cyber-threats’, Journal of Information Technology & Politics, 10:1 (2013), pp. 86–103.

16 Barnard-Wills, David and Ashenden, Debi, ‘Securing virtual space: Cyber war, cyber terror, and risk’, Space and Culture, 15:2 (2012), pp. 110–112; Stevens, Tim and Betz, David J., ‘Analogical reasoning and cyber security’, Security Dialogue, 44:2 (2013), pp. 147–164; Cavelty, Myriam Dunn, ‘From cyber-bombs to political fallout: Threat representations with an impact in the cyber-security discourse’, International Studies Review, 15:1 (2013), pp. 105–122.

17 Balzacq, Thierry, ‘The three faces of securitization: Political agency, audience and context’, European Journal of International Relations, 11:2 (2005), pp. 171–201; Léonard, Sarah and Kaunert, Christian, ‘Reconceptualizing the audience in securitization theory’, in Thierry Balzacq (ed.), Securitization Theory: How Security Problems Emerge and Dissolve (London: Routledge, 2011), pp. 57–76.

18 Vuori, Juha A., ‘Illocutionary logic and strands of securitization: Applying the theory of securitization to the study of non-democratic political orders’, European Journal of International Relations, 14:1 (2008), pp. 65–99.

19 Huysmans, Jef, ‘What’s in an act? On security speech acts and little security nothings’, Security Dialogue, 42:4–5 (2011), p. 371.

20 Hansen, Lene, Security as Practice: Discourse Analysis and the Bosnian War (London: Routledge, 2006), p. 64.

21 For an exception from a different discipline, see Parikka, Jussi, Digital Contagions – A Media Archaeology of Computer Viruses (New York: Peter Lang Publishing, 2007).

22 See, for example, Thomas Rid, Cyber War Will Not Take Place (London: Hurst & Company, 2013).

23 May, Chris et al., ‘Advanced Information Assurance Handbook’, CERT®/CC Training and Education Center (Pittsburgh: Carnegie Mellon University, 2004).

24 To name a few: Bueger, Christian and Gadinger, Frank, ‘Reassembling and dissecting: International Relations practice from a science studies perspective’, International Studies Perspectives, 8:1 (2007), pp. 90–110; Best, Jacqueline and Walters, William, ‘Translating the sociology of translation’, International Political Sociology, 7:3 (2013), pp. 345–349; Bueger, Christian, ‘Actor-Network Theory, methodology, and international organization’, International Political Sociology, 7:3 (2013), pp. 338–342; Nexon, Daniel H. and Pouliot, Vincent, ‘Things of networks: Situating ANT in International Relations’, International Political Sociology, 7:3 (2013), pp. 342–345. A paper that raises the different challenges that come with adopting ANT in IR is Barry, Andrew, ‘Translation zone: Between Actor-Network Theory and International Relations’, Millennium: Journal of International Studies, 41:3 (2013), pp. 413–429.

25 Aradau, Claudia, ‘Security that matters: Critical infrastructure and objects of protection’, Security Dialogue, 41:5 (2010), pp. 491–514; Jackson, Patrick T. and Nexon, Daniel H., ‘Relations before states: Substance, process, and the study of world politics’, European Journal of International Relations, 5:3 (1999), pp. 291–332.

26 In this article, we stress the deliberate element to differentiate them from ‘failures’ and ‘accidents’. This is justified by the fact that ‘attacks’, potentially damaging events orchestrated by a human adversary, are the sole focus of the current cyber-security discourse.

27 Turner, Raymond, ‘Understanding programming language’, Minds and Machines, 17:2 (2007), pp. 129–133; Strachey, Christopher, ‘Fundamental concepts in programming languages’, Higher-Order and Symbolic Computation, 13 (2000), pp. 11–49.

28 Parikka, Jussi, ‘Ethologies of software art: What can a digital body of code do?’, in Stephen Zepke (ed.), Deleuze and Contemporary Art (Edinburgh: Edinburgh University Press, 2010), p. 118.

29 Ibid., p. 119.

30 Arns, Inke, ‘Code as performative speech act’, Artnodes (2005), p. 7, available at: {www.uoc.edu/artnodes/eng/arns0505.pdf} accessed 23 August 2014.

31 Parikka, ‘Ethologies of software art’, p. 125.

32 Skibell, Reid, ‘The myth of the computer hacker’, Information, Communication & Society, 5:3 (2002), pp. 336–356.

33 Malware comes in different shapes and categories: the best-known form is probably the computer virus, but there are others such as worms, Trojan horses, spyware, etc., defined by how they spread through the information environment and/or by their purpose. To be able to categorise malware, one needs to understand how it functions – which is done either through observation of its performance or through so-called reverse engineering. See Skoudis, Ed and Zeltser, Lenny, Malware: Fighting Malicious Code (Upper Saddle River: Prentice Hall, 2004).

34 See, for a general discussion of this aspect, Cohen, Fred, ‘Computer viruses – theory and experiments’, Computers and Security, 6:1 (1987), pp. 22–35; Bontchev, Vesselin, ‘Are “good” computer viruses still a bad idea?’, Proceedings of the EICAR ’94 Conference (1994), pp. 25–47.

35 Anderson, Ross, ‘Why information security is hard – an economic perspective’, in IEEE Computer Society (ed.), Proceedings of the 17th Annual Computer Security Applications Conference (Washington, DC: IEEE Computer Society, 2001), pp. 358–365.

36 Moore, Tyler, ‘Introducing the economics of cybersecurity: Principles and policy options’, in National Academies of Sciences (ed.), Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies and Developing Options for U.S. Policy (Boston: National Academies Press, 2010); Simonite, Tom, Welcome to the Malware-Industrial Complex (Boston: MIT Technology Review, 2013).

37 Preda, Alex, ‘The turn to things: Arguments for a sociological theory of things’, The Sociological Quarterly, 40:2 (1999), p. 357; Latour, Bruno, ‘Pragmatogonies: a mythical account of how humans and non-humans swap properties’, American Behavioral Scientist, 37:6 (1994), pp. 791–808; Latour, Bruno, We Have Never Been Modern (London: Harvester Wheatsheaf, 1993); McCarthy, E. Doyle, ‘Toward a sociology of the physical world: George Herbert Mead on physical objects’, Studies in Symbolic Interaction, 5 (1984), pp. 105–121.

38 Law, John, ‘Notes on the theory of the actor-network: Ordering, strategy, and heterogeneity’, Systems Practice, 5:4 (1992), p. 383.

39 For reasons of space, we will not deal with other actants here. But see Gershon, Ilana, ‘Bruno Latour’, in Jon Simons (ed.), Agamben to Zizek: Contemporary Critical Theorists (Edinburgh: Edinburgh University Press, 2010), for a discussion of different types of actants.

40 Latour, Bruno, Reassembling the Social: An Introduction to Actor-Network Theory (Oxford: Oxford University Press, 2005), p. 128.

41 Callon, Michel, ‘Techno-economic networks and irreversibility’, in John Law (ed.), A Sociology of Monsters: Essays on Power, Technology and Domination, Sociological Review Monograph, 38 (New York: Routledge, 1991), p. 153.

42 Latour, Bruno, Pandora's Hope: Essays on the Reality of Science Studies (Cambridge: Harvard University Press, 1999).

43 Best and Walters, ‘Translating the sociology of translation’, p. 346.

44 Yet, not all ANT research has been based on ethnography and it has not been exclusively committed to fieldwork. For instance, Law’s work on Portuguese vessels and international control has relied upon a detailed historical reconstruction. Law, John, ‘On the methods of long distance control: Vessels, navigation and the Portuguese route to India’, in J. Law (ed.), Power, Action and Belief: A New Sociology of Knowledge? (London: Routledge & Kegan Paul, 1986), pp. 234–263 or Law, John, Organizing Modernity (Oxford: Blackwell, 1994).

45 Vrasti, Wanda, ‘The strange case of ethnography and International Relations’, Millennium: Journal of International Studies, 37:2 (2008), pp. 279–301; Nimmo, Richie, ‘Actor-Network Theory and methodology: Social research in a more-than-human world’, Methodological Innovations Online, 6:3 (2011), pp. 108–119.

46 Best and Walters, ‘Translating the sociology of translation’, p. 346.

47 Law, ‘Objects and spaces’, p. 96.

48 Latour, Bruno, The Pasteurization of France (Cambridge: Harvard University Press, 1988), pp. 3–44.

49 Law and Singleton, ‘Object lessons’; Law and Mol, ‘On metrics and fluids’.

50 Mol and Law, ‘Regions, networks and fluids’, p. 643.

51 Walker, R. B. J., Inside/Outside: International Relations as Political Theory (Cambridge: Cambridge University Press, 1992).

52 Arquilla, John and Ronfeldt, David F., The Advent of Netwar (Santa Monica: RAND, 1996).

53 Parikka, Jussi, ‘Politics of swarms: Translation between entomology and biopolitics’, Parallax, 14 (2008), pp. 112–124.

54 Law, ‘Objects and spaces’, p. 95.

55 Mol and Law, ‘Regions, networks and fluids’, p. 649.

56 Law, John, ‘Actor Network Theory and Material Semiotics’ (April 2007), p. 8, available at: {http://www.heterogeneities.net/publications/Law2007ANTandMaterialSemiotics.pdf} accessed August 2013.

57 Harvey, David, Justice, Nature and the Geography of Difference (Oxford: Blackwell, 1996), pp. 260–261.

58 Bloomfield, Brian P. and Vurdubakis, Theo, ‘The outer limits: Monsters, actor networks and the writing of displacement’, Organization, 6:4 (1999), p. 626, emphasis in original.

59 Mol and Law, ‘Regions, networks and fluids’, p. 660.

60 Ibid., p. 652.

61 Ibid., p. 661.

62 Ibid., p. 662.

63 Law, ‘Objects and spaces’, p. 102.

64 Suteanu, Cristian, ‘Complexity, science and the public: the geography of a new interpretation’, Theory, Culture & Society, 22:5 (2005), p. 130.

65 Szor, Peter, The Art of Computer Virus Research and Defense (Boston: Addison-Wesley, 2005).

66 Microsoft, Understanding Anti-Malware Technologies (White Paper, 2007); McAfee, New Gateway Anti-Malware Technology Sets the Bar for Web Threat Protection (White Paper, 2013).

67 Firewalls, another very common defensive mechanism, work similarly but will not be discussed in more detail here.

68 Microsoft, ‘Microsoft Security Intelligence Report Website’, available at: {www.microsoft.com/security/sir/default.aspx}; Symantec, ‘Annual Threat Report’, available at: {www.symantec.com/threatreport}; Sophos, ‘Security Threat Report’, available at: {www.sophos.com/en-us/security-news-trends/reports/security-threat-report.aspx}, etc.

70 See the Microsoft Security Blog on ‘Lessons from Least Infected Countries’, available at: {blogs.technet.com/b/security/p/series-lessons-from-least-infected-countries.aspx}.

71 The Google Transparency Report now also includes sources of malware, available at: {blogspot.ch/2013/06/transparency-report-making-web-safer.html}.

72 Bhattacharjee, Yudhijit, ‘How a remote town in Romania has become cybercrime central’, Wired (2011), available at: {www.wired.com/magazine/2011/01/ff_hackerville_romania/} accessed 31 January 2011; Kshetri, Nir, ‘Cybercrimes in the former Soviet Union and Central and Eastern Europe: Current status and key drivers’, Crime, Law and Social Change, 60:1 (2013), pp. 39–65.

73 Segal, Adam, ‘Chinese computer games – keeping safe in cyberspace’, Foreign Affairs, 3:4 (2012), pp. 14–20.

74 Mol and Law, ‘Regions, networks and fluids’, p. 649.

75 That means all the computers infected with Code Red tried to contact the White House website at the same time, overloading the machines and making the site unavailable. See Dolak, John C., ‘The Code Red worm’, Security Essentials, 1:2 (SANS Institute, 2001).

76 Tiirmaa-Klaar, Heli et al., Botnets, Springer Briefs in Cybersecurity (New York: Springer, 2013).

77 See, for example, the ISO 27000 series, available at: {www.27000.org} or the OECD Guidelines for the Security of Information Systems and Networks.

78 European Union Agency for Network and Information Security, ‘Inventory of Risk Management / Risk Assessment Methods and Tools’, available at: {www.enisa.europa.eu/activities/risk-management/current-risk/risk-management-inventory}.

79 United Nations Office on Drugs and Crime, ‘Comprehensive Study on Cybercrime’ (Vienna: UNODC, 2013), available at: {www.globalinitiative.net/wpfb-file/unodc-comprehensive-study-on-cybercrime-pdf/}.

80 European Commission, Cybersecurity Strategy of the European Union: An Open, Safe and Secure Cyberspace (Brussels: JOIN, 2013); European Commission, Proposal for a Directive of the European Parliament and of the Council concerning measures to ensure a high common level of network and information security across the Union, 2013/0027 (COD).

81 See, for example, Schmitt, Michael N. (ed.), Tallinn Manual on the International Law Applicable to Cyber Warfare (New York: Cambridge University Press, 2013).

82 Parikka, ‘Ethologies of software art’, p. 125.

83 Spafford, Eugene H., ‘The Internet worm: Crisis and aftermath’, Communications of the ACM, 32:6 (1989), pp. 678–687.

84 Parikka, Jussi, ‘Contagion and repetition: On the viral logic of network culture’, Ephemera, 7:2 (2007), pp. 287–308.

85 Clark, David D. and Landau, Susan, ‘Untangling attribution’, National Academies of Sciences, Proceedings of a Workshop on Deterring Cyber Attacks: Informing Strategies and Developing Options for U.S. Policy (Washington: National Academies Press, 2010), pp. 25–40.

86 Deibert, Ronald and Rohozinski, Rafal, ‘Tracking GhostNet: Investigating a cyber-espionage network’, Information Warfare Monitor, available at: {www.infowar-monitor.net/2009/09/tracking-ghostnet-investigating-a-cyber-espionage-network/}.

87 Sommer, Peter and Brown, Ian, ‘Reducing systemic cyber security risk’, Report of the International Futures Project (Paris: OECD, 2011); Robinson, Neil, Horvath, Veronica, Cave, Jonathan, Roosendaal, Arnold, and Klaver, Marieke, Data and Security Breaches and Cyber-Security Strategies in the EU and its International Counterparts (Strasbourg: European Parliament, Committee on Industry, 2013), p. 58.

88 Law, ‘Objects and spaces’, p. 102.

89 Gross, M. J., ‘Stuxnet worm: a declaration of cyber-war’, Vanity Fair, 4 (2011).

90 Sanger, David, ‘Obama order sped up wave of cyberattacks against Iran’, New York Times, available at: {www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html} accessed 1 June 2012.

91 Symantec, ‘Security Response’, available at: {www.symantec.com/connect/blogs/w32stuxnet-dossier}.

92 See, for example, Langner, Ralph, ‘Stuxnet: Dissecting a cyberwarfare weapon’, Security & Privacy, IEEE, 9:3 (2011), pp. 49–51; Chen, T. M., and Abu-Nimeh, S., ‘Lessons from Stuxnet’, Computer, 44:4 (2011), pp. 91–93.

93 See, for example, Farwell, James and Rohozinski, Rafal, ‘Stuxnet and the future of cyber-war’, Survival: Global Politics and Strategy, 53:1 (2011), pp. 23–40; Collins, Sean and McCombie, Stephen, ‘Stuxnet: the emergence of a new cyber weapon and its implications’, Journal of Policing, Intelligence and Counter Terrorism, 7:1 (2012), pp. 80–91; Lindsay, Jon R., ‘Stuxnet and the limits of cyber warfare’, Security Studies, 22:3 (2013), pp. 365–404.

94 Kaspersky, Eugene, ‘The man who found Stuxnet – Sergey Ulasen in the spotlight’, Eugene Kaspersky Official Blog, available at: {www.eugene.kaspersky.com/2011/11/02/the-man-who-found-stuxnet-sergey-ulasen-in-the-spotlight/}.

95 Keizer, Gregg, ‘Why did Stuxnet worm spread’, Computerworld, available at: {www.computerworld.com/article/2516109/security0/why-did-stuxnet-worm-spread-.html} accessed 1 October 2014.

96 KrebsonSecurity, ‘Experts warn of new windows shortcut flaw’, available at: {www.krebsonsecurity.com/2010/07/experts-warn-of-new-windows-shortcut-flaw}; ‘Microsoft to issue emergency patch for critical windows-bug’, available at: {www.krebsonsecurity.com/2010/07/microsoft-to-issue-emergency-patch-for-critical-windows-bug}.

97 Wilders Security, ‘Rootkit’, available at: {www.wilderssecurity.com/threads/rootkit-tmphider.276994/#post-1712134}.

98 A Command and Control server (C&C server) is the centralised computer that issues commands to infected computers.

99 Symantec, ‘W32.Stuxnet Dossier’, available at: {www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_stuxnet_dossier.pdf}; see also Ginter, Andrew, ‘The Stuxnet worm and options for remediation’, Industrial Ethernet Book Issue, 61:35 (2010), available at: {www.iebmedia.com/index.php?id=7409&parentid=63&themeid=255&hft=61&showdetail=true&bb=1}.

100 Langner, ‘Stuxnet logbook, Sep 16 2010, 1200 hours MESZ’, available at: {www.langner.com/en/2010/09/16/stuxnet-logbook-sep-16-2010-1200-hours-mesz/}.

101 Schneier, Bruce, ‘Stuxnet’, Schneier on Security, available at: {www.schneier.com/blog/archives/2010/10/stuxnet.html}.

102 BBC, ‘Iran fends off new Stuxnet cyber attack’, BBC News, available at: {www.bbc.com/news/world-middle-east-20842113} accessed 22 April 2015.

103 Albright, David, Brannan, Paul, and Walrond, Christina, ‘Did Stuxnet take out 1,000 centrifuges at the Natanz Enrichment Plant? Preliminary assessment’, Institute for Science and International Security, available at: {www.isis-online.org/isis-reports/detail/did-stuxnet-take-out-1000-centrifuges-at-the-natanz-enrichment-plant/} accessed 22 December 2014.

104 Markoff, John and Sanger, David E., ‘In a computer worm, a possible Biblical clue’, New York Times, available at: {www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=0} accessed 30 October 2010.

105 Ibid.

106 Rivva, ‘Computer-Virus Stuxnet trifft deutsche Industrie’ [‘Computer virus Stuxnet hits German industry’], Süddeutsche Zeitung, available at: {www.sueddeutsche.de/digital/gefaehrliches-schadprogramm-computer-virus-stuxnet-trifft-deutsche-industrie-1.1007379} accessed 2 October 2014.

107 Zetter, Kim, ‘Stuxnet timeline shows correlation among events’, Wired, available at: {www.wired.com/2011/07/stuxnet-timeline/}.

108 Carr, Jeffrey, ‘Dragons, tigers, pearls, and yellowcake: 4 Stuxnet targeting scenarios’, Forbes, available at: {www.forbes.com/sites/firewall/2010/11/22/dragons-tigers-pearls-and-yellowcake-4-stuxnet-targeting-scenarios/} accessed 22 November 2014; Carr, Jeffrey, ‘Stuxnet's Finnish-Chinese connection’, Forbes, available at: {www.forbes.com/sites/firewall/2010/12/14/stuxnets-finnish-chinese-connection/} accessed 14 December 2014.

109 Broad, William J., Markoff, John, and Sanger, David E., ‘Israeli test on worm called crucial in Iran nuclear delay’, New York Times, available at: {www.nytimes.com/2011/01/16/world/middleeast/16stuxnet.html?pagewanted=all&_r=0} accessed 16 January 2014.

110 Melman, Yossi, ‘Israel finally moving to define national policy on Iran’, Haaretz, available at: {www.haaretz.com/print-edition/features/israel-finally-moving-to-define-national-policy-on-iran-1.348250} accessed 10 March 2014.

111 Sanger, ‘Obama order sped up wave of cyberattacks against Iran’.

112 Perlroth, Nicole, ‘Researchers find clues in malware’, New York Times, available at: {www.nytimes.com/2012/05/31/technology/researchers-link-flame-virus-to-stuxnet-and-duqu.html} accessed 30 May 2014.

113 Appelbaum, Jacob and Poitras, Laura, ‘Edward Snowden interview: the NSA and its willing helpers’, Spiegel, available at: {www.spiegel.de/international/world/interview-with-whistleblower-edward-snowden-on-global-spying-a-910006.html} accessed 9 July 2013.

114 An interesting (and underexplored) question is why this specific version (Stuxnet as a targeted attack against Iran, launched by the US and Israel) became accepted as the truth well before The New York Times article provided more ‘evidence’, even though many alternative explanations, from highly respected security specialists, existed.

115 Clarke, Richard, Cyber War: The Next Threat to National Security and What to Do About It (New York: Ecco, 2010).

116 Lynn, William J., ‘Defending a new domain: the Pentagon’s cyberstrategy’, Foreign Affairs (Sept./Oct. 2010), pp. 97–108.

117 Demchak, Chris C. and Dombrowski, Peter, ‘Rise of a cybered Westphalian age’, Strategic Studies Quarterly, 3 (2011), pp. 32–61.

118 Preda, Alex, ‘The turn to things: Arguments for a sociological theory of things’, The Sociological Quarterly, 40:2 (1999), p. 357.