
#SorryNotSorry: Why states neither confirm nor deny responsibility for cyber operations

Published online by Cambridge University Press: 20 August 2021

Joseph M. Brown*
Affiliation:
Department of Political Science, University of Massachusetts Boston, Boston, United States
Tanisha M. Fazal
Affiliation:
Department of Political Science, University of Minnesota, Minneapolis, United States
*Corresponding author. Email: joseph.brown@umb.edu

Abstract

States accused of perpetrating cyber operations typically do not confirm or deny responsibility. They issue ‘non-denial denials’ or refuse to comment on the accusations. These ambiguous signals are prevalent, but they are largely ignored in the existing cyber literature, which tends to treat credit claiming as a binary choice. The ambiguity of non-denial denials and ‘non-comments’ allows states to accomplish two seemingly opposed goals: maintaining crisis stability and leaving open the possibility of their involvement in the attack. By deliberately remaining a suspect, a state can manipulate rivals’ perceptions of its cyber capability and resolve. Refusing to deny responsibility can also shape rivals’ perceptions of allies’ capabilities, enhancing the credibility of deterrence. All of this can be accomplished without the escalatory risks that would come with an explicit admission of responsibility. Where previous research has focused on the dangers of escalation and the limitations of costly signalling with cyber, we show that non-denial denials and non-comments make cyber operations considerably more useful than the literature appreciates.

Type: Research Article
Copyright © The Author(s), 2021. Published by Cambridge University Press on behalf of the British International Studies Association

Introduction

As they transform the nature of warfare and diplomacy, cyber operations pose a challenge to existing theories of conflict and coercion.Footnote 1 The challenge arises from the ambiguities inherent in ‘cyber’. By blurring the lines between war and peace, combat and espionage, state and non-state, cyber creates opportunities for malfeasance and manipulation short of open war. Cyber also raises an ‘attribution problem’, allowing eavesdroppers and attackers to conceal their identities when such obfuscation is politically useful.Footnote 2 The potential for ambiguity and elision raises questions about the feasibility of applying key insights from the conventional conflict literature to the cyber realm. Two areas of particular interest are covert action and costly signalling. States conceal operations to increase their chances of success – and to decrease the risk of crisis escalation.Footnote 3 At other times, they flaunt their capabilities to signal strength and resolve, increasing the credibility of their coercive threats.Footnote 4 In the conventional (that is, physical) realm, the distinction between covert and overt is relatively straightforward. Costly signalling may be as simple as parading missiles down the street or staging naval exercises.Footnote 5 Things are less straightforward in cyber, where capabilities consist of non-physical computer code, which remains unseen until used in vivo against an opponent. Scholars nonetheless argue that states can develop a ‘reputation for cyber power’ by claiming their operations after the fact.Footnote 6 Revealing their identities clarifies states’ capabilities and the level of military and diplomatic cost they are willing to incur to carry out an attack, thus allowing attacks to function as costly signals.Footnote 7 In practice, however, we see very few instances of states openly claiming credit for cyber operations – even operations that have already succeeded in their military purposes. This is something of a puzzle. 
Scholars explain the anomaly by noting the escalatory risks of public credit claiming and the feasibility of private credit claims issued directly to the target via diplomatic or technical means.Footnote 8 States can avoid ‘war fever’ and generate credibility for coercive threats by denying responsibility in public and admitting responsibility in private – so the reasoning goes.

We argue that this theoretical account of covert action and signalling misses a key empirical fact of cyber diplomacy: public credit claiming is not a binary choice. Although it is true that states rarely claim responsibility in public, they tend not to deny responsibility outright when accused. Instead, they issue ambiguous signals: ‘non-denial denials’ (deflecting allegations without refuting them) and ‘non-comments’ (refusing to dignify allegations with a reply).Footnote 9 These strategies of ambiguity are not addressed by existing theories, which treat public credit claiming as an either/or proposition. Yet ambiguous signals are considerably more common than clear ones: a random sample of the 164 cyberattacks identified by Brendan Valeriano and Ryan C. Maness in their database shows that nearly all of the explicit responses to accusations of involvement are non-denial denials and the modal response is silence.Footnote 10

We explore non-denial denials and non-comments at length, adding empirical detail and theoretical nuance to existing accounts of cyber credit claiming.Footnote 11 We argue that rhetorical ambiguity offers a way around the limitations of cyber operations identified in earlier studies. By neither confirming nor denying responsibility in public, the accused perpetrator of a cyber operation avoids the political fallout or escalation that would ensue from an explicit admission of guilt. At the same time, the accused perpetrator can develop a reputation for coercive cyber capability and resolve. Rival states (not just the victim) observe the accused's conspicuous public refusal to confirm or deny responsibility. Rivals are forced to update their perceptions of the accused's capability (if they have not yet determined the attacker's identity forensically) and they must update their perceptions of the accused attacker's willingness to pay reputational costs – that is, by remaining a suspect. Crucially, this second aspect of public communication (assuming the opprobrium of potential guilt) allows cyber operations to function as costly signals, regardless of any behind-the-scenes forensic attribution or diplomatic communication between the accused and accuser. Thus, we may see non-denial denials or non-comments in cases where the perpetrator originally intended the attack to remain secret (that is, attacks that are accidentally revealed) or in cases where the perpetrator ‘tips its hand’ by including computer code or other markers that suggest potential involvement. In the latter case, the non-denial denial or non-comment is useful in influencing the victim state's perceptions of the attacker's resolve, and the perceptions of states other than the immediate victim, who may be uncertain about the attacker's capabilities as well as its resolve.

Our analysis thus highlights the potential upsides of cyber operations in coercion, not just the limitations and risks highlighted in earlier research.Footnote 12 We also call attention to the utility of ambiguous signals in alliance settings, as older institutions such as NATO attempt to establish a role in cyber conflict and deterrence.Footnote 13 A state that refuses to confirm or deny responsibility for an operation forces rivals to consider the possibility that it or its alliance partners might have carried out the attack. If allies refuse to confirm or deny responsibility, the weakest alliance partners can be made to look more capable, enhancing the credibility of the alliance's deterrent. Although states are not in full control of whether their cyber operations are disclosed, refusing to confirm or deny responsibility allows an alleged perpetrator to mitigate the consequences of disclosure, while gaining positive utility from remaining a suspect.

In what follows, we discuss empirical examples of non-denial denials and non-comments by states accused of perpetrating cyber operations. We use these examples inductively, developing the concept of the non-denial denial/non-comment as a distinct middle category between the extremes of admitting or denying responsibility. We also develop some intuitions about the logic motivating non-denial denials and non-comments in our sample cases. We identify two primary strategic functions served by refusing to confirm or deny responsibility: maintaining crisis stability and manipulating perceptions of cyber capability and resolve. We conclude with a discussion of how to apply these inductively derived insights to new cases as part of a testable framework. One empirical challenge is particularly important from a hypothesis-testing perspective: unless we know, ex ante, which revelations of cyber operations are intentional, it is extraordinarily difficult to predict which incidents will precipitate a ‘neither confirm nor deny’ response by the accused party.

Two illustrative examples: Russian and Chinese cyber operations

The empirical reality of non-denial denials and non-comments is apparent in two recent controversies. The first concerns Russia's interference in the 2016 United States presidential election. Following the 2016 election, the US government accused Russia of collaborating with Wikileaks to steal and release internal emails from the Democratic National Committee (DNC).Footnote 14 The US also voiced concerns regarding potential hacking of electronic voting systems.Footnote 15 The Russian government responded to these accusations with a variety of public signals, many falling short of the clear denial of involvement one might expect, given the gravity of the allegations. Speaking with journalist Christiane Amanpour, Russian Foreign Minister Sergei Lavrov explained that the Russian government ‘did not deny’ carrying out the DNC cyberattack, but the US government ‘did not prove it’ either.Footnote 16 When asked about his government's alleged hacking of DNC email servers, Russian President Vladimir Putin turned the question back on reporters: ‘[D]oes it even matter who hacked this data? … The important thing is the content that was given to the public.’Footnote 17 Putin later speculated that Russian hackers might have carried out the attack on their own initiative as ‘free people, like artists’ who ‘got up today … feeling patriotic’.Footnote 18 At other times, Putin has both confirmed and denied the Russian government's involvement in the hack. Asked at a March 2017 press conference whether Russia meddled in the 2016 elections, Putin replied: ‘Read my lips. No.’Footnote 19 But at a July 2018 news conference in Helsinki, he reversed himself. A reporter asked Putin: ‘[D]id you want President Trump to win the election and did you direct any of your officials to help him do that?’ Putin replied: ‘Yes, I did. Because he talked about bringing the US/Russia relationship back to normal.’Footnote 20

A second example comes from the fraught relationship between China and India. In October 2020, several months after a deadly border skirmish between the two countries, the Indian city of Mumbai suffered an unexpected power blackout.Footnote 21 Indian officials reportedly attributed the outage to an ‘infusion of malware at the Padgha-based state load dispatch center’ – allegations that were corroborated by US cyber security firms and reported in The New York Times in February 2021.Footnote 22 Asked to comment on the New York Times story, Chinese foreign ministry spokesman Wang Wenbin did not directly refute the hacking allegations. Instead, he expressed indignation that India would make such claims without hard evidence:

As a staunch defender of cyber security, China firmly opposes and cracks down on all forms of cyberattacks. Speculation and fabrication have no role to play on the issue of cyberattacks, as it is very difficult to trace the origin of a cyberattack. It is highly irresponsible to accuse a particular party when there is no sufficient evidence around. China is firmly opposed to such irresponsible and ill-intentioned practice.Footnote 23

These examples show that states have several public credit-claiming strategies available to them, not simply a choice between admitting responsibility or denying it. Sometimes ambiguous signals are issued with a wink. Leaders accused of cyber malfeasance may imply that they were not responsible and ridicule the accuser, without refuting the allegation outright. (Another strategy is to refuse to comment on the matter.) The Chinese foreign ministry spokesman's response to Indian hacking allegations began with a boilerplate condemnation of hacking in general and then pivoted to attack India without addressing the allegations themselves. Russia's response to allegations of election meddling showed elements of acceptance, denial, and non-denial denial. Indeed, states can take more than one position at a time, potentially using multiple spokespeople to add layers of ambiguity. It is also possible for a state to change its public position, perhaps beginning with an ambiguous non-denial denial like Foreign Minister Lavrov's, then issuing clearer acceptances or denials of responsibility later. (In this case, President Putin chose to do both.) The possibility of multiple voices, contradictory statements, and shifting positions can make it difficult to classify a country's response to cyber accusations as a ‘clear’ non-denial denial or non-comment. This lack of clarity only supports our main point, however. Cyber credit claiming is not a binary, as supposed by existing theories. Countries can stake out an ambiguous position and maintain it for an extended period of time if they choose.

Neither confirming nor denying in cyber and conventional conflict

Rhetorical obfuscation of this kind is enabled by the attribution problem that lies at the heart of cyber conflict.Footnote 24 Even if a covert cyber operation is exposed and the victim accuses the perpetrator, the attacker can cast doubt on the victim's claims, sustaining a neither-confirm-nor-deny rhetorical posture more easily than one could in the conventional realm. For instance, the accused perpetrator can suggest that non-state actors in their territory committed the act, that ‘zombie computers’ in their country were coopted by foreign hackers, or that some other state (perhaps even an ally) committed the action.Footnote 25 (This public posturing is distinct from the private claims or denials that the accused country's diplomats may make to their accusers, or the forensic attributions the accuser may make unilaterally – issues we discuss further below.) Cyber scholarship has so far failed to appreciate the importance of public ambiguity, perhaps because researchers have imported existing frameworks from the study of clandestine and covert operations in the physical realm, and these frameworks tend to treat credit claiming in binary terms. The importation of earlier clandestine/covert operations theories makes sense insofar as many cyberattacks are intended to remain covert and plausibly deniable.Footnote 26 But a covert operations framework developed from the study of physical operations fails to account for the ease of obfuscation in the cyber realm.Footnote 27 Now, more so than in the past, having one's cover blown is just the beginning of the public relations story.

To illustrate the expanded opportunities for obfuscation in cyber, compare the 2007 cyberattack on Estonia with Russian forces’ kinetic attacks on Crimea and eastern Ukraine in 2014. When the so-called ‘little green men’ entered Crimea in February 2014, Russia's role in the conflict was an open secret. Troops wore uniforms stripped of their Russian insignias, but reporters on the scene noted that the soldiers had arrived from Russia with Russian weapons, and were ‘presumably Russian reservists’.Footnote 28 Russia soon acknowledged its undeniable role in the invasion, arguing that it was supporting the self-determination rights of Crimeans.Footnote 29

In contrast, while Russia is widely believed to bear some responsibility for the 2007 cyberattack on the Estonian government and banking system (if only by enabling ‘hacktivists’ angered by Estonia's removal of a pro-Soviet war monument), Russia has not admitted its role. The Kremlin's public response to accusations of responsibility was a mix of denials and non-denial denials.Footnote 30 A spokesperson called Russian involvement ‘out of the question’. However, a forensic investigation identified a Russian official's IP address as a source of the attack. Publicly confronted with this information, the official did not deny involvement, but rather pointed to the difficulty of attribution: ‘We know about the allegations, of course, and we checked our IP addresses … Our names and contact numbers are open resources. I am just saying that professional hackers could have easily used our IP addresses to spoil relations between Estonia and Russia.’Footnote 31 Russian ambassador to the EU Vladimir Chizhov's response to the allegations could similarly be called a non-denial denial: ‘If you are implying [the attacks] came from Russia or the Russian government, it's a serious allegation that has to be substantiated. Cyber-space is everywhere. I don't support such behaviour, but one has to look at where they [the attacks] came from and why.’Footnote 32

Russia's apparent cyberattacks on the neighbouring country of Georgia also highlight the ease of sustaining a non-denial denial posture in this realm. Both prior to and during Russia's ground invasion of Georgia in August 2008, Distributed Denial of Service (DDoS) attacks against state websites compromised the Georgian government's ability to communicate with citizens.Footnote 33 The timing of the attacks was suspicious, suggesting that the Russian government had assisted the perpetrators in identifying targets and timing their attacks to coordinate with the invasion. The New York Times reported that ‘in the run-up to the start of the war over the weekend, computer researchers had watched as botnets were “staged” in preparation for the attack, and then activated shortly before Russian air strikes began’.Footnote 34 Moreover, the StopGeorgia.ru message board used to coordinate individual DDoS attacks ‘went up with a full target list only a few hours after Russian troops crossed the border. This would have taken some preparation and suggests that the site's organisers had been tipped off on the timing of the military operations.’Footnote 35 Although the Kremlin denied involvement, a spokesman for the Russian embassy in Washington would not ‘exclude the possibility’ that Russians upset at the Georgian government had decided to ‘express themselves’ by staging attacks, which happened to coincide with the invasion.Footnote 36

Some possible functions of non-denial denials

So far we have established that non-denial denials and non-comments exist, occupying a gray area between the two extremes already discussed in the literature: public credit claiming and public denials of responsibility. A more interesting theoretical task is to explain why states choose this middle path. We take an inductive approach to answer this question, suggesting two primary functions of ambiguity based on our own analysis of paradigmatic cyber controversies. The specific cases are listed in Table 1, which also identifies the potential functions that non-denial denials and non-comments could plausibly serve for the accused states in each case. The two primary functions we identify in these theory-building cases are enhancing crisis stability and influencing rival states’ perceptions of the accused state's cyber capabilities and resolve. The table also identifies cases where non-denial denials and non-comments were useful in managing alliance dynamics, an issue we explore in greater detail below.

Table 1. Non-denial denials/non-comments and their likely functions.

Function one: Enhancing crisis stability

Non-denial denials and non-comments are useful for short-circuiting public backlash in the state victimised by an attack. A country conducting a limited sabotage campaign is typically not interested in provoking an open conflict. As Thomas Schelling notes, limited military actions, ‘if they definitely would lead to major war … would not be taken’.Footnote 37 There is a risk of conflict if the attack is disclosed publicly and the population of the victimised state demands retaliation. Recall how American jingoes stirred up the Spanish-American War by telling the public to ‘Remember the Maine’.Footnote 38 Even if the leaders of both states – attacker and victim – prefer to settle the matter without escalation, war fever may push leaders to respond.Footnote 39 When a cyber operation is publicly revealed, a non-denial denial or a non-comment encourages the public to ‘Forget the Maine’.Footnote 40 By reducing the likelihood of retaliation, these ambiguous signals make cyber operations more appealing policy tools, mitigating some of their limitations in coercion and costly signalling.Footnote 41

The 2016 Russian election interference case could be viewed in this light. By not accepting responsibility, Putin's government undercut American hawks’ calls for retaliation: rather than having a discussion of what to do about a confirmed Russian hack, Americans had a largely partisan disagreement about whether Russia perpetrated the hack at all, with Democrats more inclined to blame Russia and Republicans less inclined to do so. By discouraging bipartisan consensus, Russia assisted the Trump administration in avoiding an international conflict that neither government wanted. (Meanwhile, Putin's non-denial suggested to Western states and regional rivals that Russia could use similar cyber measures against them if they encroached on Russian interests.) The Chinese non-denial denial in response to allegations of involvement in the Mumbai blackout likewise helped to undercut any political pressure on the Indian government to retaliate, while communicating to Indian cyber officials that Chinese hackers ‘have the capability to do this in times of a crisis’.Footnote 42 (The Indian government's own refusal to comment on 2021 media reports, after initially sounding alarms in November 2020, suggests that the government received the message and seeks to avoid further escalation.)Footnote 43

We can infer a similar crisis management function in the non-denial denials by several countries following revelations of the Stuxnet worm that disrupted Iran's uranium enrichment centrifuges in 2009.Footnote 44 The use of a sophisticated cyber weapon allowed the perpetrators to disrupt Iran's nuclear weapons programme for as long as the worm remained secret. When, two years later, the malicious code ‘leaked’ to computer systems worldwide, the sophistication (and target) of the attack suggested the US and Israel as primary suspects. Neither country would comment specifically on its involvement. However, The New York Times reported:

Officially, neither American nor Israeli officials will even utter the name of the malicious computer program, much less describe any role in designing it. But Israeli officials grin widely when asked about its effects. Mr. Obama's chief strategist for combating weapons of mass destruction, Gary Samore, sidestepped a Stuxnet question at a recent conference about Iran, but added with a smile: ‘I'm glad to hear they are having troubles with their centrifuge machines, and the U.S. and its allies are doing everything we can to make it more complicated.’Footnote 45

Iran appears to have engaged in some limited retaliation, but its non-denial denials, coupled with a muted public reaction by its target, Saudi Arabia, prevented the crisis from escalating further. Iran's attack focused on the Saudi oil company, Aramco, erasing tens of thousands of the company's hard drives and rigging them to display the image of a burning American flag.Footnote 46 The attack was traced to servers in Iran, the target of both Stuxnet and a crippling oil sanctions regime imposed by the US and its Saudi ally.Footnote 47 Much like Russia did when accused of cyberattacks, Iran responded to these stories with non-sequiturs, including the obviously disingenuous claim that Saudi-Iranian relations are actually quite good. The only official Iranian comment came from the press office of the Iranian mission to the United Nations:

[Iran has] never engaged in such attacks against its Persian Gulf neighbours, with which Iran has maintained good neighbourly relations. Unfortunately, wrongful acts such as authorizing the 2010 Stuxnet attack against Iran have set a bad, and dangerous, precedent in breach of certain principles of international law.Footnote 48

The message's sarcasm was obvious: Iran does not have ‘good neighbourly relations’ with Saudi Arabia. The two countries currently back opposing sides in what can be viewed as a proxy civil war in Yemen. In fact, the reference to Stuxnet may be interpreted as a signal of Iran's retaliation against Saudi Arabia and the US. But the Saudi government's own reaction was curiously muted, suggesting that both countries were trying to minimise public reaction.Footnote 49

Non-denial denials and non-comments may also prevent cyberattacks from escalating when the victim actually does wish to retaliate. Escalation, a common concern in Cold War nuclear doctrine, has come to prominence again in the context of cyber war.Footnote 50 The risk of escalation may be especially high if the victim of the attack does not possess strong cyber capabilities and would therefore be expected to retaliate with physical violence. (Saudi Arabia is one example.) But escalation is possible even for countries that do possess strong cyber capabilities. US President Barack Obama's 2011 International Strategy for Cyberspace asserts a right to respond to cyberattacks with physical weapons: ‘When warranted, the United States will respond to hostile acts in cyberspace as we would to any other threat to our country … and we recognise that certain hostile acts conducted through cyberspace could compel actions under the commitments we have with our military treaty partners.’Footnote 51 In practical terms, US doctrine is vague as to when escalation is ‘warranted’ and what ‘certain hostile acts’ might include.
For some time, the strongest publicly known US response to a cyberattack was Obama's decision to expel 35 Russian operatives from US soil and impose sanctions, following accusations that Russia interfered in the 2016 presidential election.Footnote 52 The US Cyber Command reportedly did not become involved until 2018, and then in a limited capacity, ‘targeting individual Russian operatives to try to deter them from spreading disinformation’ in advance of the 2018 Congressional midterm election.Footnote 53 It is important to note, however, that there has not been consensus among the US public regarding the veracity of the allegations of Russian election meddling or the seriousness of the crime.Footnote 54 Russia's non-denial denials and the polarisation of the American electorate prevented consensus, and this helps to explain the United States’ lack of escalation. It remains to be seen whether consensus among the US public would generate a stronger response and raise the prospect of crisis escalation. What can be said is that states like Russia have an incentive not to accept responsibility outright, and instead to issue non-denial denials or non-comments.

Refusing to confirm or deny responsibility also has upsides when considering international legal implications. Cyber operations are transgressive acts, the revelation of which may carry reputational and legal costs for the guilty party.Footnote 55 For instance, North Korea faced public condemnation and a new round of sanctions by the United States after the US named North Korea as the perpetrator of a 2014 attack on Sony.Footnote 56 An exceptionally destructive cyberattack could be considered to violate Article 2(4) of the United Nations Charter, positioning the UN to legalise a counterattack.Footnote 57 As for the legal implications of lower-level operations, precedents are less definitive on the question of jus ad bellum.Footnote 58 However, there is legal writing regarding justifiable minimal responses. Oona Hathaway et al. argue that if a state claims responsibility for cyberattacks while they are ongoing, the targeted nation may engage in countermeasures, as long as said countermeasures comply with international humanitarian law. Even so, any cyber operation that could justify a countermeasure must at least violate the customary law of nonintervention, if not rise to the level of an armed attack.Footnote 59 On the other hand, the Council of Europe's Cybercrime Convention elides the question of state responsibility for cyber malfeasance.Footnote 60 While it may be the case that state cyber operations could generate complaints based on international telecommunications law, to date this body of law has not caught up to cyber realities.Footnote 61 Nonetheless, the balance of existing law incentivises states to avoid accepting responsibility, and instead to issue non-denial denials or non-comments. One exception in which states are more likely to accept responsibility is when the operation is retaliatory. In such cases, the legal disincentives to claim operations are reduced, and accepting responsibility may help to discourage further attacks by the initial aggressor.
The United States’ publicly acknowledged 2018 operation to deter further Russian election meddling is one example.Footnote 62

Ambiguity and crisis management with allies

Refusing to confirm or deny responsibility also helps allies to manage crises instigated by their partners. The cyber realm offers ample opportunity to shrug off commitments to blundering allies if states on both sides of the crisis are willing to play along. Consider a hypothetical case where one ally carries out a cyber operation, expecting that its alliance partner will defend it from any retaliation. Under current international law, states’ obligations to allies facing cyberattacks – or allies accused of having committed attacks – are quite murky. The Tallinn Manual, a primary source of current international legal doctrine on cyber conflict, does not mention allies at all.Footnote 63 A state whose ally carries out an attack must decide for itself whether to defend its partner.

To date, there are no publicly known instances where one state perpetrated a cyber operation and its ally had to decide whether to assist or disown the attacker. Stuxnet is not a good example because the evidence suggests that both the US and Israel were active participants. However, one can imagine a case emerging in the relationship between North Korea (DPRK) and China. Because the alliance treaty between China and North Korea is a defence pact, China would not be obliged to aid the DPRK in the event that North Korea launched a cyberattack.Footnote 64 But if an attack launched by the DPRK prompted large-scale cyber retaliation, the DPRK might reasonably expect China to come to its defence.Footnote 65 These expectations could be mitigated if the retaliating state responded covertly and refused to confirm or deny its involvement in the retaliatory strike. China could leverage the ambiguity over attribution to limit its obligations to aid the DPRK – something also in the best interest of the retaliating state, which would prefer not to provoke any involvement by China. The refusal to confirm or deny involvement thus enables de-escalatory collusion between great power rivals who would prefer that one side's blundering ally be taken to task without precipitating a major conflict.

Non-denial denials and non-comments can also be useful in shifting blame from a politically compromised ally. If two allies are suspects in an attack and one of them cannot afford the diplomatic and legal opprobrium associated with guilt, the other state may refuse to confirm or deny its role. The politically compromised ally can issue a non-denial denial, refuse to comment on its involvement, or deny responsibility outright. Any of these strategies shifts blame away from the compromised state and towards the state with fewer image problems. The joint US-Israeli non-denial denial of Stuxnet appears to embody this logic. Journalists have also noted the alliance dimension of a 2009 ‘cyber militia’ attack on Kyrgyzstan's opposition party, which sought to stop the closure of a US military base (a policy favoured by Russia and Kyrgyz president Kurmanbek Bakiyev). Security experts speculated that Russia arranged the operation on behalf of its ally, Bakiyev, himself suspected of election-rigging and earlier cyberattacks on opponents. Other experts speculated that Bakiyev hired the Russian militia to make it appear as if the operation originated outside of the country. Neither Russia nor the Bakiyev government attempted to stop the 14-day attack, and neither made its press representatives available for comment.Footnote 66 This non-comment strategy can benefit alliances by allowing one or more parties to elude (or at least obscure) blame, thereby maintaining their political flexibility on other fronts.

Function two: Manipulating perceptions of capability and resolve

Manipulating perceptions may facilitate coercion in both of its forms: deterrence and compellence. Coercion requires the communication of threats and demands to the target. Schelling describes compellence as ‘initiating an action … that can cease, or become harmless, only if the opponent responds’ with a desired change in behaviour.Footnote 67 Deterrence, in contrast, involves threats alone – the promise to punish the opponent if it undertakes some forbidden behaviour.Footnote 68 The coercion literature generally assumes that clarity is essential to successful outcomes. The opponent must realise who is making the threat, what behavioural modification is desired, and what punishment to expect if it fails to comply. The opponent must also believe the coercer has the capability and resolve to carry out the threat. If any of these elements is unstated or unclear, the opponent cannot assess the costs and benefits of compliance. The would-be coercer should therefore prefer clarity over uncertainty.Footnote 69 How, then, can an ambiguous signal, which leaves uncertainty about one's true capabilities and resolve, lead to coercive success?

Manipulating perceptions in the cyber and kinetic realms

As noted earlier, the difficulty of attributing responsibility for cyber operations allows an accused perpetrator to deflect blame onto other states, ‘hacktivists’, or botnet masters who stage attacks from hijacked government computers. Uncertainty also renders non-denial denials and non-comments particularly useful: because cyber capabilities are non-physical and relatively new, net assessments are surrounded by wide bands of probability. Unlike bombers or ships, cyber weapons cannot be photographed and bean-counted.Footnote 70 The first evidence of a cyber capability may be the active use of that capability in conflict. However, we have yet to see cyber weapons used in major conflicts; we have only seen low-tech cyber tools used as adjuncts to kinetic force in limited conflicts (Russia against Georgia, for instance). Public reporting suggests that major powers possess more destructive capabilities than they have yet used.Footnote 71 States have incentives to maintain secrecy around these weapons so that their opponents do not develop countermeasures against them.Footnote 72 Without observing cyber warfighting capabilities in vivo, it is unclear how destructive they are and what coercive value they may have. It is also unclear whether states are sufficiently resolved to use these capabilities over any given political issue.Footnote 73

Given the uncertainty that already exists, the revelation of a new cyber capability makes a strong impression, causing adversaries to update their probabilistic net assessments. The exposure of cyber operations may even raise a weak state to perceived medium-power status, or raise a medium-power state to perceived high-power status (at least in the cyber realm). The US Office of the Secretary of Defence, for instance, describes North Korean cyber weapons as ‘cost effective … asymmetric, deniable military options’, disproportionately effective compared to its ‘weakened [conventional] force that suffers from logistical shortages, aging equipment, and inadequate training’.Footnote 74
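The updating logic described above can be made concrete with a toy Bayesian calculation. The numbers below are purely illustrative assumptions, not estimates drawn from any real case: a rival begins sceptical that a state holds a given cyber capability, then observes a sophisticated operation credibly (though not conclusively) attributed to that state.

```python
def posterior_capable(prior, p_evidence_if_capable, p_evidence_if_not):
    """Bayes' rule: an observer's updated belief that a state holds a cyber
    capability, after evidence of an operation is attributed to that state."""
    numerator = p_evidence_if_capable * prior
    denominator = numerator + p_evidence_if_not * (1 - prior)
    return numerator / denominator

# Hypothetical values: prior belief of 0.2 that the state is capable; the
# observed operation is much likelier if the state is capable (0.7) than
# if the attribution is mistaken (0.1).
belief = posterior_capable(prior=0.2, p_evidence_if_capable=0.7, p_evidence_if_not=0.1)
print(round(belief, 3))  # roughly 0.636: a 'weak' state now looks mid-tier
```

On these assumed numbers, a single credible attribution triples the perceived probability of capability, which is one way to read the claim that a revealed operation ‘makes a strong impression’ on probabilistic net assessments.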

The potential to shape perceptions gives states incentives not to deny responsibility for cyberattacks conducted in the context of kinetic war and limited coercion. It also gives states incentives not to deny responsibility for covert sabotage and espionage if these operations are discovered and revealed publicly. The revelations may have been undesired, but the perpetrators can ‘make lemons into lemonade’ by refusing to rule out their own responsibility. In the future, they will be able to make more credible cyber threats, based on capabilities and resolve they are now suspected to have.Footnote 75

There is a precedent in the conventional realm for states using blown cover and non-denial denials to suggest capabilities they may or may not actually possess. One example occurred as a result of the 1975 revelation of a top secret Central Intelligence Agency (CIA) operation, Project Azorian, which sought to raise a sunken Soviet submarine, K-129, from the Pacific Ocean floor and recover its onboard nuclear missile technologies and codes.Footnote 76 When journalist Harriet Ann Phillippi submitted a Freedom of Information Act (FOIA) request for documents on Project Azorian, CIA lawyers crafted a now-famous response that left open all possibilities:

We can neither confirm nor deny the existence of the information requested but hypothetically if such data were to exist the subject matter would be classified and could not be disclosed.Footnote 77

The US had not, in fact, recovered the submarine, but CIA officials exploited public uncertainty to manipulate their adversaries’ perceptions: if the Soviets thought America had the materials, they would have to revise their nuclear assessments, downgrading their ability to threaten the US or defend themselves. The director of Project Azorian later explained: ‘If we said that we didn't recover any information on Soviet missiles, then that would tell the Soviets that they don't have to worry.’Footnote 78

A similar logic played out in the diplomatic controversy over alleged US cyber sabotage of North Korean ballistic missile tests, which suffered a suspiciously high 88 per cent failure rate from 2014 to 2017. US media attributed these failures to ‘a new innovative approach to foil North Korean missiles with cyber and electronic strikes’.Footnote 79 Given the secrecy involved (and the missiles’ tendency to explode over the ocean), it was difficult to assess the veracity of these reports or to trace any individual failure to US sabotage. Questioned by a reporter about alleged US involvement, Secretary of State Rex Tillerson gave a short and cryptic non-denial denial: ‘Cybertools to disrupt weapons programs – that's another use of the tools.’Footnote 80 Reporters nonetheless noted that the potential ‘existence of the American program, and whatever it contributed to North Korea's remarkable string of troubles, appears to have shaken Pyongyang and led to an internal spy hunt’ and a search for ‘ways to defeat a wide array of enemy cyberstrikes’.Footnote 81 As in the case of Project Azorian, the suggestion that America might have compromised a key nuclear capability caused a strategic rival to doubt its own ability to coerce the United States.

China's surreptitious infiltration of New York Times computers, revealed in 2013, is another possible example of non-denial denials being used to manipulate perceptions of capability. On the one hand, revelations of Chinese eavesdropping and password thefts were an undesirable outcome, leading to the strengthening of corporate cyber defence, contrary to Chinese interests. On the other hand, revelation of the attack sparked fear of China attacking the US banking system (as Russia did to Estonia's in 2007), traffic control systems, emergency response channels, and drones.Footnote 82

Russia's own non-denial denials may likewise serve a capability-signalling function. Denying responsibility for election interference, the attacks on Estonia, or the attacks on Georgia would have suggested (to anyone who believed the denial) that the Russian government lacked the capabilities demonstrated in those attacks. Denying involvement would also have suggested a lack of resolve to engage in cyber operations and suffer the international political consequences. In contrast, Russia's public refusal to ‘exclude the possibility’ of involvement in attacks against Georgia and Estonia suggests a willingness to break rules, defy customs, and burn international goodwill when Russia's core interests are at stake. Deliberate omissions, such as the refusal to deny, make strong impressions,Footnote 83 and Russia's ambiguous statements may even be interpreted as tacit acknowledgements of responsibility. Flouting convention may allow Russia to generate diplomatic costs in cyber, a realm where operations are cheap and costly signalling is otherwise quite difficult.Footnote 84

The opportunity to incur costs may explain many of the non-denial denials we see after low-tech operations are exposed. In these cases, the capability-signalling opportunities are limited, because the tools are widely known and available to most states. Russia allegedly used cyber militias against Estonia and Georgia. Similarly, Iran's alleged 2012 attack against the Saudi oil company Aramco was initially deemed too unsophisticated to have been perpetrated by a state.Footnote 85 (News reports attributed the Aramco attack to an Iranian hacktivist group, the Cutting Sword of Justice, but experts noted the organisation's links to the Iranian government and the practical difficulty of a non-state actor conducting the attack without government complicity.)Footnote 86 However, an unimpressive low-tech attack can still earn the perpetrator a great deal of international opprobrium. After revelations of its Sony attack, North Korea faced public condemnation by the international community and a new round of economic sanctions.Footnote 87 Doubts about North Korean culpability gave the country even greater incentives to refuse to deny (rather than flatly denying) responsibility, thereby showing that the country had the resolve to suffer costs in asserting its interests.Footnote 88

Incentives for non-denial denials and non-comments increase with a country's number of strategic rivals. A country can claim responsibility privately to one rival.Footnote 89 But a public non-denial denial or non-comment signals to all of a country's rivals that the accused state is willing to suffer reputational costs by carrying out any future cyber threats. Russia is another country with many rivals, globally and regionally. The accretion of Russian cyberattacks, many of them accompanied by non-denial denials, prompted the United States and United Kingdom to issue a joint statement condemning the ‘pattern of disruptive and harmful malicious cyber operations … [by] our most capable hostile adversary in cyberspace’.Footnote 90 Although the purpose of the US-UK statement was to demonstrate ‘commitment’ and ‘hold those responsible accountable’, the communication also demonstrated how well Russia had conveyed its capabilities and resolve to the US, the UK, and Russia's other rivals.

The Iranian attack on Aramco provides another example of attacks and non-denial denials signalling resolve to multiple rivals. Although the attack targeted Saudi Arabia, the subsequent Iranian statement referencing Stuxnet struck observers in the United States as a signal to America as well. US Secretary of Defence Leon Panetta described the Aramco attack as ‘a significant escalation of the cyber threat’.Footnote 91 Richard A. Clarke, a former National Security Council member and public commentator on cyber issues, remarked that the ‘attacks were intended to say: “If you mess with us, you can expect retaliation.”’Footnote 92

The role of alliances

Non-denial denials and non-comments may be particularly useful for enhancing the credibility of deterrence in alliance contexts. NATO, for instance, has sought to extend its deterrent commitments to the cyber domain, formalising a ‘Cyber Defence Pledge’. Yet alliances are fluid, and the distribution of capabilities may influence the credibility of deterrence. Because cyber capabilities are difficult to assess from the outside, the distribution of capabilities among allies is probabilistic. Capabilities are also additive, because computer code can be shared among allies in a non-zero-sum fashion. The probable distribution of cyber capabilities is therefore uniquely malleable and may be manipulated to suggest an alternative capability distribution that better serves deterrence.

Consider a case with two allies. In the event that a cyberattack is revealed, an admission of responsibility by one ally associates the demonstrated capability with that state. Although this may take political heat off the other ally, it sacrifices some of the future signalling benefit the ally might have enjoyed by remaining a suspect. By the same logic, if either state denies responsibility (or exonerates itself forensically), it confers signalling benefits on the other ally. If neither ally confirms or denies responsibility, outside observers assess, with some likelihood, that either state – or both states – may have the capability demonstrated in the attack. Both states enjoy some of the signalling benefit. One may view American and Israeli behaviour after the Stuxnet revelation in this light. As long as the United States and Israel both refused to confirm or deny authorship of Stuxnet, Iran had to reckon with the possibility that both countries were involved and that both possessed sophisticated cyber capabilities – a perception that strengthens deterrence.
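The two-ally logic above can be expressed as a toy model. Everything here is an illustrative assumption rather than anything drawn from the cases discussed: the attack is known to have come from ally A or ally B, and a credible denial is treated as full exoneration of the denier.

```python
def capability_beliefs(p_a_did_it, a_denies=False, b_denies=False):
    """An observer's probability that each ally possesses the demonstrated
    capability, given which ally (if either) credibly denies involvement.
    Assumes exactly one of the two allies carried out the attack."""
    if a_denies and not b_denies:
        return (0.0, 1.0)  # all signalling benefit shifts to B
    if b_denies and not a_denies:
        return (1.0, 0.0)  # all signalling benefit shifts to A
    # Neither confirms nor denies: each remains a suspect, so each is
    # credited with the capability in proportion to the observer's
    # suspicion that it was the perpetrator.
    return (p_a_did_it, 1.0 - p_a_did_it)

# If the observer thinks A is the likelier author (0.6), silence by both
# leaves each ally with a substantial share of the signalling benefit;
# a credible denial by A concentrates all of it on B.
print(capability_beliefs(0.6))
print(capability_beliefs(0.6, a_denies=True))
```

The sketch captures the trade-off in the paragraph: a denial by either ally zeroes out its own share of the inferred capability, whereas joint silence lets both remain plausible holders of it.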

Non-denial denials and non-comments mitigate an important problem noted by Mancur Olson Jr and Richard Zeckhauser in their analysis of alliances.Footnote 93 In alliances with asymmetric power distributions, the weaker members have incentives to free ride, forcing their stronger allies to spend more on military programmes, the deterrent effect of which benefits all members. This imposes economic costs on the strongest ally and it raises questions about the alliance's ability to deter aggression. If the weak state is not essential to the strong state's survival, an asymmetric distribution of power increases the risk of an aggressor attacking a weak state under the assumption that the strong state will not rush to the ally's defence. Suggesting that a weak ally possesses a previously unknown weapon does not actually increase the ally's capability. It may, however, force opponents to revise their estimates of the ally's power. For the alliance's stronger members, there is little opportunity cost to this strategy. An incremental increase in the strong ally's capability does not change outsiders’ existing perception of the state as strong. Even if there is doubt about the strong ally's commitment to the weaker state, the weaker state is now perceived as better equipped to defend itself unilaterally, and the alliance's deterrent threats are more credible. A strong state allied with a weaker power enhances deterrence by issuing denials and allowing its ally to issue non-denial denials or non-comments. Alternatively, both allies can refuse to confirm or deny involvement in the attack.

Conclusion

States would generally prefer to avoid the application of kinetic force in pursuit of their aims. Cyber operations offer a non-kinetic option, but they also present a conundrum. Insofar as the threat of military action is a critical strategy in international relations, one would expect states to take credit for cyber operations once they are reported publicly. Yet explicitly taking credit in public may promote crisis escalation. One solution to this problem is the refusal to confirm or deny involvement. States may use non-denial denials and non-comments to mitigate the possibility of escalation and avoid legal culpability for their actions, while inflating perceptions of their capabilities and resolve.

We have focused on non-denial denials and non-comments in cyber because this realm offers greater opportunities to use such strategies. Opportunities exist in the kinetic realm as well, whenever there is unusually high uncertainty about capabilities or resolve. The CIA's ‘neither confirm nor deny’ response to Project Azorian's revelation is one example. Our findings therefore contribute to the broader literature on deterrence and compellence.

Our analysis may also be extended, for inductive theory-building purposes, to suggest possible explanations for non-denial denials and non-comments other than those we have discussed here. As Table 1 shows, a refusal to confirm or deny responsibility may serve more than one function. For instance, the joint US-Israeli non-denial denials of the Stuxnet attack may be seen as forcing Iran to revise upward its assessment of both states’ capabilities and resolve, averting legal consequences (particularly important for a country with as many adverse legal judgements as Israel), and deflecting the likely Iranian public outcry for revenge. A qualitative investigation into new cases could reveal whether our intuitions regarding the non-denial denials’ functions are correct. For instance, are we more likely to see non-denial denials and non-comments when the capabilities demonstrated are particularly novel? Are they more likely in asymmetric alliances – and when the accused states’ capabilities or resolve have previously been questioned? An investigation of new cases could also reveal whether non-denial denials and non-comments work as intended: are the responses what the perpetrators would hope? The Stuxnet attack, unclaimed but not denied by the US and Israel, has not led to crisis escalation beyond the possibly retaliatory Aramco attack two years later. Are other non-denial denials similarly effective?

In theory, some of our predictions also could be subjected to quantitative analysis. For example, we suggest that non-denial denials and non-comments will be especially frequent among states with many strategic rivals, in cases of asymmetric alliances, and in especially tense great power rivalries. These conjectures will be difficult to validate conclusively. If a state is not willing to acknowledge responsibility for an attack it is also unlikely to explain the rationale behind its choice of rhetoric. A further challenge to hypothesis testing relates to selection effects. Ideally, we would like to understand when cyber operations generate non-denial denials and non-comments versus straightforward denials or outright acceptances of responsibility. (We expect such clear signals to be infrequent, except perhaps in retaliatory attacks.) Because non-denial denials and non-comments are by definition ex post, they are only issued in response to accusations. Some targets of cyberattacks might prefer to keep their own vulnerability secret, and thus may not make public accusations. Moreover, cyber operations that are meant to stay secret, and which do stay secret, will never generate accusations in the first place. The sensational nature of other attacks (website defacings, for instance) suggests that they are meant as a signal to a wider audience, even if the perpetrators do not admit responsibility. Although empirical analyses of cyber conflict will always be haunted by selection problems, a theoretical framework can bring us closer to understanding non-denial denials, non-comments, and other public signals regarding attribution. In turn, the framework brings us closer to understanding the utility of cyber weapons.

Non-denial denials and non-comments are the most frequent public responses by states accused of cyberattacks. It is therefore worth considering the future of these strategies. Their prevalence may be a function of the relative novelty of cyber capabilities: target and attacker states are still learning which strategies work in this grey zone of conflict. It may be that as the norms of cyber conflict emerge, the prevalence of non-denial denials and non-comments will decline. We do not hold this view, however. Because we continue to observe non-denial denials and non-comments in the context of kinetic conflict, and because these strategies are of particular utility in the cyber realm, we suspect that they are here to stay.

Acknowledgements

We thank Peter Rodgers, Allison Hostetlder, Corey Gayheart, and William Benoit for excellent research assistance.

Supplementary material

To view the online supplementary material, please visit: https://doi.org/10.1017/eis.2021.18

Joseph M. Brown is Associate Professor of Political Science at the University of Massachusetts Boston. His research and teaching focus on international security, terrorism, protest, and state repression.

Tanisha M. Fazal is Professor of Political Science at the University of Minnesota. Her research and teaching focus on sovereignty, international law, medical care in conflict zones, and armed conflict. From 2021–3, she is also an Andrew Carnegie Fellow.

References

1 Arquilla, John and Ronfeldt, David, ‘Cyber is coming!’, Comparative Strategy, 12:2 (1993), pp. 141–65; Borghard, Erica D. and Lonergan, Shawn W., ‘The logic of coercion in cyberspace’, Security Studies, 26:3 (2017), pp. 452–81; Buchanan, Ben, ‘Cyber deterrence isn't MAD; it's mosaic’, Georgetown Journal of International Affairs, 4 (2014), pp. 130–40; Liff, Adam P., ‘Cyberwar: A new absolute weapon? The proliferation of cyberwarfare capabilities and interstate war’, Journal of Strategic Studies, 35:3 (2012), pp. 401–28; Nye, Joseph S. Jr, ‘Deterrence and dissuasion in cyberspace’, International Security, 41:3 (2017), pp. 44–71; Michael Poznansky and Evan Perkoski, ‘Attribution and secrecy in cyberspace’, War on the Rocks blog (8 March 2016), available at: {http://warontherocks.com/2016/03/attribution-and-secrecy-in-cyberspace/} accessed 17 March 2020; Poznansky, Michael and Perkoski, Evan, ‘Rethinking secrecy in cyberspace: The politics of voluntary attribution’, Journal of Global Security Studies, 3:2 (2017), pp. 402–16.

2 Brantly, Aaron F., ‘Aesop's wolves: The deceptive appearance of espionage and attacks in cyberspace’, Intelligence and National Security, 31:5 (2015), pp. 1–12; Ben Buchanan, The Cybersecurity Dilemma: Hacking, Trust and Fear Between Nations (New York, NY: Oxford University Press, 2017); Lindsay, Jon R., ‘Tipping the scales: The attribution problem and the feasibility of deterrence against cyberattack’, Journal of Cybersecurity, 1:1 (2015), pp. 53–67; Rid, Thomas and Buchanan, Ben, ‘Attributing cyber attacks’, Journal of Strategic Studies, 38:1–2 (2015), pp. 4–37.

3 Carson, Austin, ‘Facing off and saving face: Covert intervention and escalation management in the Korean War’, International Organization, 70:1 (2016), pp. 103–31.

4 Thomas C. Schelling, Arms and Influence (New Haven, CT: Yale University Press, 1966).

5 Art, Robert J., ‘To what ends military power?’, International Security, 4:4 (1980), pp. 3–35.

6 Poznansky and Perkoski, ‘Rethinking secrecy in cyberspace’, p. 406. See also Buchanan, ‘Cyber deterrence isn't MAD’; Nye, ‘Deterrence and dissuasion in cyberspace’.

7 Borghard and Lonergan, ‘The logic of coercion in cyberspace’. On costly signalling more generally, see Fearon, James D., ‘Signaling foreign policy interests: Tying hands versus sinking costs’, Journal of Conflict Resolution, 41:1 (1997), pp. 68–90.

8 Borghard and Lonergan, ‘The logic of coercion in cyberspace’; Poznansky and Perkoski, ‘Rethinking secrecy in cyberspace’.

9 Non-denial denials may include mixed signals that confirm and deny responsibility simultaneously.

10 Brandon Valeriano and Ryan C. Maness, Cyber War versus Cyber Realities (Oxford, UK: Oxford University Press, 2015). See the supplementary material for cases sampled. For several examples of actors accepting responsibility for cyberattacks (although often the actors are not states and the attacks occurred in the context of open kinetic war), see Kostyuk, Nadiya and Zhukov, Yuri M., ‘Invisible digital front: Can cyber attacks shape battlefield events?’, Journal of Conflict Resolution, 63:2 (2019), pp. 317–47.

11 Borghard and Lonergan, ‘The logic of coercion in cyberspace’; Buchanan, ‘Cyber deterrence isn't MAD’; Nye, ‘Deterrence and dissuasion in cyberspace’; Poznansky and Perkoski, ‘Rethinking secrecy in cyberspace’.

12 Borghard and Lonergan, ‘The logic of coercion in cyberspace’; Buchanan, ‘Cyber deterrence isn't MAD’; Nazli Choucri, Cyberpolitics in International Relations (Cambridge, MA: MIT Press, 2012); Finnemore, Martha and Hollis, Duncan B., ‘Constructing norms for global cybersecurity’, American Journal of International Law, 110:3 (2016), pp. 425–79; Gartzke, Erik, ‘The myth of cyberwar: Bringing war in cyberspace back down to Earth’, International Security, 38:2 (2013), pp. 41–73; Kostyuk and Zhukov, ‘Invisible digital front’; Liff, ‘Cyberwar: A new absolute weapon?’; Nye, ‘Deterrence and dissuasion in cyberspace’; Poznansky and Perkoski, ‘Rethinking secrecy in cyberspace’; Rid, Thomas, ‘Cyber war will not take place’, Journal of Strategic Studies, 35:1 (2012), pp. 5–32.

13 North Atlantic Treaty Organization (NATO), ‘Cyber Defence Pledge’, press release (8 July 2016), available at: {https://www.nato.int/cps/en/natohq/official_texts_133177.htm} accessed 17 March 2020.

14 Office of the Director of National Intelligence, Background to: ‘Assessing Russian Activities and Intentions in Recent US Elections’, unclassified intelligence community assessment (6 January 2017), available at: {https://www.dni.gov/files/documents/ICA_2017_01.pdf} accessed 17 March 2020.

15 Nicole Perlroth, Michael Wines, and Matthew Rosenberg, ‘Russian election hacking efforts, wider than previously known, draw little scrutiny’, New York Times (1 September 2017).

16 Abigail Tracy, ‘Russia's foreign minister trolls 2016 election, calls both sides “p—s”’, Vanity Fair (12 October 2016).

17 Scott Shane, ‘The fake Americans Russia created to influence the election’, New York Times (7 September 2017).

18 Roland Oliphant, ‘Vladimir Putin says patriotic “artist” hackers may have attacked West on their own initiative’, Telegraph (1 June 2017).

19 Joe Uchill, ‘Putin: “Read my lips”, election interference claims are lies’, Hill (30 March 2017).

20 ‘Here's what Trump and Putin actually said in Helsinki: The press conference transcript – and what the White House edited out’, Foreign Policy (17 July 2018).

21 David E. Sanger and Emily Schmall, ‘China appears to warn India: Push too hard and the lights could go out’, New York Times (28 February 2021).

22 Sahil Joshi and Divyesh Singh, ‘Mega Mumbai power outage may be result of cyberattack, final report awaited’, India Today (20 November 2020); Sanger and Schmall, ‘China appears to warn India’.

23 Wang Wenbin, ‘Foreign Ministry Spokesperson Wang Wenbin's Regular Press Conference on March 1, 2021’, available at: {https://www.fmprc.gov.cn/mfa_eng/xwfw_665399/s2510_665401/2511_665403/t1857624.shtml} accessed 21 June 2021.

24 Rid and Buchanan, ‘Attributing cyber attacks’.

25 Borghard and Lonergan, ‘The logic of coercion in cyberspace’; Tim Maurer, Cyber Mercenaries: The State, Hackers, and Power (Cambridge, UK: Cambridge University Press, 2018).

26 Poznansky and Perkoski, ‘Attribution and secrecy in cyberspace’; Poznansky and Perkoski, ‘Rethinking secrecy in cyberspace’.

27 For one exception, see Carson, Austin and Yarhi-Milo, Keren, ‘Covert communication: The intelligibility and credibility of signaling in secret’, Security Studies, 26:1 (2017), pp. 124–56.

28 John Simpson, ‘Russia's Crimea plan detailed, secret and successful’, BBC News (19 March 2014), available at: {http://www.bbc.com/news/world-europe-26644082} accessed 18 March 2020.

29 ‘Vladimir Putin admits Russian forces helped Crimea separatists’, NBC News (17 April 2014), available at: {http://www.nbcnews.com/storyline/ukraine-crisis/vladimir-putin-admits-russian-forces-helped-crimea-separatists-n82756} accessed 18 March 2020.

30 Herzog, Stephen, ‘Revisiting the Estonian cyber attacks: Digital threats and multinational responses’, Journal of Strategic Security, 4:2 (2011), pp. 49–60; Rain Ottis, Analysis of the 2007 Cyber Attacks Against Estonia from the Information Warfare Perspective (Tallinn: Cooperative Cyber Defence Centre of Excellence, 2008), available at: {https://www.etis.ee/File/DownloadPublic/b924739a-01f6-4867-8e86-1d4527c22e31?name=Fail_2008_ECIW_Ottis.pdf&type=application%2Fpdf} accessed 18 March 2020; Jason Richards, ‘Denial-of-service: The Estonian cyberwar and its implications for U.S. national security’, International Affairs Review, 18 (2009).

31 Peter Finn, ‘Cyber assaults on Estonia typify a new battle tactic’, Washington Post (19 May 2007).

32 Ian Traynor, ‘Russia accused of unleashing cyberwar to disable Estonia’, Guardian (16 May 2007). Estonia did not return the Soviet monument to its original location, instead opting to shore up its alliances with partners in the OSCE and NATO. NATO's new cyber resources have since been brought to bear in response to alleged Russian cyberattacks elsewhere in the post-Soviet sphere. See Reuters, ‘Estonia Calls for EU Law to Combat Cyber Attacks’ (12 March 2008), available at: {http://www.reuters.com/article/us-estonia-interview-idUSL1164404620080312} accessed 21 June 2021; Bobbie Johnson, ‘No one is ready for this’, Guardian (15 April 2009), available at: {http://www.theguardian.com/technology/2009/apr/16/internet-hacking-cyber-war-nato} accessed 21 June 2021.

33 Kim Hart, ‘Longtime battle lines are recast in Russia and Georgia's cyberwar’, Washington Post (14 August 2008).

34 John Markoff, ‘Before the gunfire, cyber attacks’, New York Times (13 August 2008).

35 Adam Segal, The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age (New York, NY: Public Affairs, 2016). The combination of kinetic and cyberattacks is consistent with scholarly arguments regarding the role of cyber weapons as adjuncts to kinetic force. See Gartzke, ‘The myth of cyberwar’. Other scholars argue that cyber weapons are not yet effective coercive tools in war. See Borghard and Lonergan, ‘The logic of coercion in cyberspace’ and Kostyuk and Zhukov, ‘Invisible digital front’.

36 Markoff, ‘Before the gunfire, cyber attacks’.

37 Schelling, Arms and Influence, p. 104.

38 John L. Offner, An Unwanted War: The Diplomacy of the United States and Spain over Cuba (Chapel Hill, NC: University of North Carolina Press, 1992), p. 153.

39 For elites’ attempts to reduce public support for kinetic retaliation, see Jacquelyn Schneider, ‘Cyber and crisis escalation: Insights from Wargaming’, article under review (n.d.), p. 36.

40 For an analysis of such dynamics in the Cold War, see Austin Carson, ‘Facing off and saving face: Covert intervention and escalation management in the Korean War’, International Organization, 70:1 (2016), pp. 103–31. In future research, it could be interesting to test how easily audiences ‘forget the Maine’ after a non-denial denial, versus other rhetorical responses to the accusation.

41 Borghard and Lonergan, ‘The logic of coercion in cyberspace’.

42 Sanger and Schmall, ‘China appears to warn India’.

43 Ibid.

44 For a summary of Stuxnet, see Rebecca Slayton, ‘What is the cyber offense-defense balance? Conceptions, causes, and assessment’, International Security, 41:3 (2017), pp. 72–109.

45 William J. Broad, John Markoff, and David E. Sanger, ‘Israeli test on worm called crucial in Iran delay’, New York Times (15 January 2011).

46 Nicole Perlroth, ‘In cyberattack on Saudi firm, U.S. sees Iran firing back’, New York Times (23 October 2012).

47 The name of the erasing mechanism used in the attack (‘Wiper’) appeared to confirm Iranian involvement, recalling the ‘Wiper’ mechanism used to attack Iranian oil companies years earlier. The use of identical nomenclature is interpreted by some experts as a technological means for Iran to ‘tip its hand’ without explicitly taking credit. (See Perlroth, ‘In cyberattack on Saudi firm, U.S. sees Iran firing back’.) Indeed, a leaked National Security Agency memo attributed the attack to Iran and argued that Iran had learned from the earlier attack on its own oil industry. See ‘Iran – Current Topics, Interaction with GCHQ’, 12 April 2013 memo by US National Security Agency, published by The Intercept (10 February 2015), available at: {https://theintercept.com/document/2015/02/10/iran-current-topics-interaction-gchq/} accessed 21 June 2021.

48 Nicole Perlroth and David E. Sanger, ‘New computer attacks traced to Iran, officials say’, New York Times (24 May 2013).

49 David E. Sanger, ‘Document reveals growth of cyberwarfare between the U.S. and Iran’, New York Times (22 February 2015).

50 Schneider, ‘Cyber and crisis escalation’.

51 Barack H. Obama, International Strategy for Cyberspace: Prosperity, Security, and Openness in a Networked World (May 2011), p. 14, available at: {https://obamawhitehouse.archives.gov/sites/default/files/rss_viewer/international_strategy_for_cyberspace.pdf} accessed 19 March 2020.

52 Mark Mazzetti and Adam Goldman, ‘“The game will go on” as U.S. expels Russian diplomats’, New York Times (2 January 2016).

53 Julian E. Barnes, ‘U.S. begins first cyberoperation against Russia aimed at protecting elections’, New York Times (23 October 2018).

54 Alex Ward, ‘There is more evidence Russia interfered in the election: Fewer Trump supporters believe it’, Vox (18 July 2017).

55 Heather Harrison Dinniss, Cyber Warfare and the Laws of War (New York, NY: Cambridge University Press, 2012).

56 ‘Sony cyber-attack: North Korea faces new US sanctions’, BBC News (3 January 2015), available at: {http://www.bbc.com/news/world-us-canada-30661973} accessed 10 August 2021. Note that there have been questions raised about North Korea's culpability in this case; see Nicole Perlroth, ‘New study may add to skepticism among security experts that North Korea was behind Sony hack’, New York Times (24 December 2014).

57 Michael N. Schmitt (ed.), Tallinn Manual on the International Law Applicable to Cyber Warfare (Cambridge, UK: Cambridge University Press, 2013); Matthew C. Waxman, ‘Cyber-attacks and the use of force: Back to the future of Article 2(4)’, Yale Journal of International Law, 36:2 (2011), pp. 421–59.

58 Oona Hathaway, Rebecca Crootof, Philip Levitz, Haley Nix, Aileen Nowlan, William Perdue, and Julia Spiegel, ‘The law of cyber-attack’, California Law Review, 100:4 (2012), pp. 817–86 (p. 861). We have clearer expectations regarding how best to respond to cyberattacks in the context of jus in bello, or the conduct of hostilities. See Schmitt, Tallinn Manual on the International Law Applicable to Cyber Warfare.

59 Hathaway et al., ‘The law of cyber-attack’, p. 857.

60 Ibid., p. 862.

61 Ibid., pp. 867–73.

62 Barnes, ‘U.S. begins first cyberoperation against Russia’.

63 Schmitt, Tallinn Manual on the International Law Applicable to Cyber Warfare.

64 See Brett Ashley Leeds, Jeffrey Ritter, Sarah Mitchell, and Andrew Long, ‘Alliance treaty obligations and provisions, 1815–1944’, International Interactions, 28:3 (2002), pp. 237–60 (case 3445, the Chinese-North Korea Defence Pact).

65 This response would depend on the specific terms of the defence pact. NATO, for example, has determined that a cyberattack would not activate Article 5 of the North Atlantic Treaty (which invokes collective defence), but would instead only activate Article 4, which requires consultation but no further obligation. See Hathaway et al., ‘The law of cyber-attack’, p. 861.

66 Andrzej Kozlowski, ‘Comparative analysis of cyberattacks on Estonia, Georgia and Kyrgyzstan’, European Scientific Journal, 3 (2014), pp. 237–45; Robert Mackey, ‘Are “cyber-militias” attacking Kyrgyzstan?’, New York Times (‘The Lede’ blog) (5 February 2009), available at: {http://thelede.blogs.nytimes.com/2009/02/05/are-cyber-militias-attacking-kyrgyzstan/?_r=0} accessed 21 June 2021; Christopher Rhoads, ‘Kyrgyzstan knocked offline’, Wall Street Journal (28 January 2009).

67 Schelling, Arms and Influence, pp. 79–80. For more recent work on compellence, see especially Todd S. Sechser, ‘Goliath's curse: Coercive threats and asymmetric power’, International Organization, 64:4 (2010), pp. 627–60.

68 Paul K. Huth, ‘Deterrence and international conflict: Empirical findings and theoretical debates’, Annual Review of Political Science, 2 (1999), pp. 25–48.

69 Schelling, Arms and Influence, p. 88.

70 Borghard and Lonergan, ‘The logic of coercion in cyberspace’, p. 465.

71 David E. Sanger and Mark Mazzetti, ‘U.S. had cyberattack plan if Iran nuclear dispute led to conflict’, New York Times (16 February 2016).

72 Borghard and Lonergan, ‘The logic of coercion in cyberspace’; Buchanan, ‘Cyber deterrence isn't MAD’.

73 Borghard and Lonergan, ‘The logic of coercion in cyberspace’.

74 Office of the Secretary of Defense, Military and Security Developments Involving the Democratic People's Republic of North Korea, Annual Report to United States Congress (2013), pp. 11–12, available at: {https://fas.org/irp/world/dprk/dod-2013.pdf} accessed 23 March 2020.

75 For a related discussion of the signalling benefits of covert kinetic action, see Carson and Yarhi-Milo, ‘Covert communication’.

76 Julia Barton, ‘Neither Confirm Nor Deny’, Radiolab (National Public Radio), 12 February 2014.

77 Ibid.

78 Ibid.

79 David E. Sanger and William J. Broad, ‘Hand of U.S. leaves North Korea's missile program shaken’, New York Times (18 April 2017).

80 Ibid.

81 Ibid.

82 Barrie Barber, ‘Cyber war being lost, some experts now fear’, Dayton Daily News (11 September 2015).

83 Robert L. Jervis, The Logic of Images in International Relations (New York, NY: Columbia University Press, 1970).

84 Borghard and Lonergan, ‘The logic of coercion in cyberspace’.

85 For example, Kaspersky Lab argued that the Aramco attack was too unsophisticated to be attributed to a state. The Cutting Sword of Justice, an Iranian hacktivist group widely considered to be state-sponsored, claimed credit for the attack. See Kim Zetter, ‘The NSA acknowledges what we all feared: Iran learns from US cyber attacks’, Wired (2 October 2015).

86 Lorenzo Franceschi-Bicchierai, ‘There's evidence that the “Yemen Cyber Army” is actually Iranian’, Motherboard (26 June 2015). A former State Department official explained: ‘How could you do something that consumed a massive amount of bandwidth in Iran and not have the government notice, when it's monitoring the Internet for political purposes?’ See ‘U.S. says Iran behind cyber attack in Saudi Arabia’, Al-Arabiya News (13 October 2012).

87 ‘Sony cyber-attack: North Korea faces new US sanctions’, BBC News (3 January 2015), available at: {http://www.bbc.com/news/world-us-canada-30661973} accessed 21 June 2021.

88 Nicole Perlroth, ‘New study may add to skepticism among security experts that North Korea was behind Sony hack’, New York Times (24 December 2014).

89 Poznansky and Perkoski, ‘Rethinking secrecy in cyberspace’.

90 National Cyber Security Centre, Joint US-UK Statement on Malicious Cyber Activity Carried out by Russian Government (15 April 2018), available at: {https://www.ncsc.gov.uk/news/joint-us-uk-statement-malicious-cyber-activity-carried-out-russian-government} accessed 23 March 2020.

91 Perlroth, ‘In cyberattack on Saudi firm, U.S. sees Iran firing back’.

92 Ibid.

93 Mancur Olson Jr and Richard Zeckhauser, An Economic Theory of Alliances (Santa Monica, CA: RAND Corporation, 1966).


Table 1. Non-denial denials/non-comments and their likely functions.
