The onset of costly interstate wars, despite the existence of more efficient negotiated solutions, is a longstanding puzzle in international conflict research. Much of the literature points to private information as a central cause for these bargaining failures. The argument is typically that one or both states misestimate resolve, preferences, costs or the distribution of capabilities, and that this asymmetric information can lead both to expect a positive utility for war.
Often, however, the uncertainty is not about capabilities or resolve, but about uncertainty itself: how much does the enemy know, how much do they know about what I know, about what I know that they know, and so on. In other words, both parties are often well aware of what the other has or wants (first-order uncertainty), but not of what the other knows (second-order uncertainty). In the lead-up to the 2003 invasion of Iraq, for example, much of the uncertainty concerned what the other side itself knew. Saddam Hussein was not sure whether the United States knew about the absence of weapons of mass destruction, and he also worried about what Iran knew about Iraq’s capabilities. This uncertainty about uncertainty itself contributed to misunderstandings, delays in compliance and, ultimately, war.
Unfortunately, existing models are ill equipped to address this type of situation. The asymmetric information included in crisis bargaining models is typically limited to payoff-relevant fundamentals such as capabilities, costs or preferences. What players do or do not know, however, is almost always assumed to be common knowledge.Footnote 1 States may not know what the other has, but they know whether she knows. There is, in other words, no uncertainty about the presence of uncertainty.
In the real world, however, diplomats constantly report about their foreign counterparts’ thoughts and knowledge, and double agents feed the enemy potentially erroneous information about what is known and believed, all the while guessing whether their opponent’s spies succeeded in acquiring secret information. The success of signals also depends on the other’s understanding, as well as my understanding of their understanding, and so on. What the other knows, in other words, is itself private and potentially strategic information.
I show here that bargaining may in fact break down into war simply because of uncertainty about uncertainty itself, even if both states are mutually aware of each other’s capabilities, costs and resolve. The intuition for this result is simple: suppose that A thinks B might be mistaken about the distribution of power (even if she is not). For example, A might think that B thinks A has developed nuclear capabilities, that is, that the balance of power is less favourable to B than it actually is. If it is sufficiently probable that B is mistaken, then A has an incentive to offer her a small share of the pie. But because there is a chance that B actually knows the truth, B might in fact prefer war to that offer.
This result has important implications. If war is caused by incomplete information about fundamental attributes such as capabilities, resolve or costs, then transparency or credible signals should solve the problem of inefficient bargaining outcomes. But if complete information about the distribution of information is also necessary, then the conditions for the absence of conflict are more stringent than previously thought. In particular, it implies that states must gather and convey not only first-order information–about their opponents’ capabilities and resolve–but also second- or higher-order information: information about what the opponent knows about them, and possibly what the opponent knows they know, and so on. Each attempt to convey what they know or do not know itself carries a chance of failure.
The result is also relevant for the empirical literature. Existing empirical evidence of the role of uncertainty in the onset of war relies on estimates of first-order uncertainty–for example, using secret mobilization as a proxy–and their correlations with the onset of conflict.Footnote 2 However, the present article suggests that these inferences may be incorrect if information about what the other knows is not taken into account.
In sum, this note shows the importance of higher-order uncertainty for the onset of war. The possibility of uncertainty alone may lead two rational and unitary actors to fight, even if they know each other’s capabilities and resolve. The article proceeds in three steps. First, the role of information and common knowledge in the bargaining literature is discussed. Secondly, a simple model is introduced in which war occurs with positive probability in all equilibria simply because of uncertainty about uncertainty itself. The logic of the argument is illustrated using the case of the lead-up to the 2003 Iraq War. Having established the importance of higher-order information, I finally address some of the challenges associated with reaching common knowledge.
Uncertain Uncertainty
The role of information in interstate relations has been at the core of the research on conflict over the past thirty years. The central explanation for the onset of war between rational actors is that at least one player has incomplete information about some of their opponent’s attributes, such as their capabilities, resolve or costs for war (Blainey 1988; Fearon 1995; Jervis 1976; Powell 1996; Reiter 2003).Footnote 3 Private information, and incentives to misrepresent it, in turn lead to misperceptions or miscalculations about the distribution of power or resolve. Negotiators may thus be optimistic about the expected outcome of a war, with the result that both believe they have a higher expected value for war than for peace (Fey and Ramsay 2007; Slantchev and Tarar 2011).Footnote 4
The type of uncertainty that is typically modelled, however, relates only to payoffs. The games include some uncertainty space corresponding to payoff-relevant attributes, together with the information players have about these attributes. Blainey’s often-cited argument, for example, is that ‘wars usually begin when two nations disagree on their relative strength’ (Blainey 1988, 246). Similarly, Fearon (1995, 18) focuses on ‘disagreements about relative power and uncertainty about a potential opponent’s willingness to fight’.Footnote 5 In these models players may be unsure of each other’s attributes, such as the distribution of capabilities or interests, or the cost of war, but they know whether each one is informed of them.Footnote 6 In other words, the type of uncertainty that is discussed is of the form ‘A does not know X, and B knows that A does not know X’. The fact that information is asymmetric is common knowledge, so states only face what may be called first-order uncertainty–uncertainty about fundamentals, but not about the information partition itself.Footnote 7
In the real world, however, states rarely know for sure how much the other knows. Intelligence failures and the challenges associated with credibly conveying private information imply that states must almost always guess what the other has learned or inferred. For example, has the adversary discovered my secret nuclear programme (or the absence thereof)? Are enemy spies leaking information? Is my phone tapped? And what is made of that information? Did they believe it? And did they think that I thought they believed it?
Can bargaining remain efficient if everything is known except whether the other knows? We know from economics that uncertainty about what others know matters.Footnote 8 Yet the literature has focused on the opposite scenario: one in which information is incomplete but trade never happens, because common knowledge allows the actors to infer from an offer that the other must know something they do not, or else they would not want to trade (Milgrom and Stokey 1982). Through a potentially infinite regress, they infer that the expected utility of the deal cannot be larger than that of no trade, and hence trade never occurs. Analogous arguments have been discussed with respect to conflict (Fey and Ramsay 2007; Slantchev and Tarar 2011). The argument in this article, however, is the converse: even with complete information but without common knowledge, war may occur in equilibrium. I now show this using a simple model with only two types and one period. The more general case with an infinite number of types and repeated interactions is discussed below.
Model
Two countries, A and B, negotiate over the partition of a territory of size normalized to one. There are two types of A, denoted by ϕ ∈ {w, s}: a strong A, denoted $A_s$, would win a conflict against B with probability $p^{s}$, whereas a weak A ($A_w$) would win with probability $p^{w}$. A’s type is determined by an initial move by Nature.
Both players know the distribution of power, but A is unsure whether B knows it. To represent this uncertainty, there must therefore be states of the world reached with positive probability in which B knows the distribution of power, and others in which B does not. I represent these by a move by Nature, such that instead of having two possible types corresponding to two distributions of power, Nature has four possible moves, corresponding to the combination of A’s two possible types (w and s) and B’s two possible states of knowledge: knows (k) and not knows (nk)–see Figure 1.Footnote 9 In state {w, k}, both players know that A is weak, but A does not know whether B knows it. In state {w, nk}, A knows that she is weak, but B does not; again, A does not know whether B is informed. In state {s, nk}, A knows she is strong but B does not, and A again does not know whether B knows. Finally, in state {s, k}, both are informed that A is strong, but A again does not know whether B knows it. Assuming that all states have positive prior probability (that is, $p_1, p_2, p_3, p_4 > 0$), only the entire state space is common knowledge.
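To make this structure concrete, the following minimal sketch (in Python, with hypothetical prior values that are assumptions for illustration, not from the article) enumerates the four states and each player’s information partition:

```python
# Illustrative sketch (all values assumed, not from the article): the four
# states of Nature and each player's information partition.
STATES = ["wk", "wnk", "snk", "sk"]           # {w,k}, {w,nk}, {s,nk}, {s,k}

# A always observes her own type but never B's state of knowledge.
PARTITION_A = [{"wk", "wnk"}, {"snk", "sk"}]
# B observes A's type only in the 'knows' states.
PARTITION_B = [{"wk"}, {"wnk", "snk"}, {"sk"}]

PRIORS = {"wk": 0.05, "wnk": 0.45, "snk": 0.45, "sk": 0.05}  # assumed p1..p4

def information_set(partition, state):
    """Return the cell of the partition containing the true state."""
    return next(cell for cell in partition if state in cell)

# At {w,k}, B knows that A is weak, but A cannot rule out {w,nk}:
assert information_set(PARTITION_B, "wk") == {"wk"}
assert information_set(PARTITION_A, "wk") == {"wk", "wnk"}
```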
Figure 1. The common knowledge game: A does not know whether B knows A’s type.
In short, A always knows her own type, but never knows whether B observes A’s type. Parentheses are used to denote a player’s information partition. For example, ({x} ∨ {y}) denotes an information partition in which a player knows that she is at either x or y, but not which of the two. With probability $p_1$, for example, the state is {w, k}, in which case B observes ({w, k}) and hence knows A’s type. In this state, however, A only observes ({w, k} ∨ {w, nk}), and hence assigns positive probability to state {w, nk} being the true state–a state in which B would not know A’s type. Of particular interest here will be the states in which both players mutually know each other’s capabilities, and yet end up fighting. They fight because, despite this mutual knowledge, the game is embedded in a larger structure in which capabilities may not always be known, and the mere possibility of asymmetric information–not asymmetric information itself–is sufficient to cause war.
Following Nature’s move, I assume for simplicity a take-it-or-leave-it bargaining protocol in which A makes an offer x ∈ [0, 1], where x denotes A’s proposed share of the territory (and hence 1−x denotes B’s share). B observes the offer, which she either accepts or rejects. If B accepts, then players receive their respective share of the pie and the game ends. If B rejects, however, then war follows, in which case A wins the entire territory with probability $p^{\phi}$ (and hence B wins with probability $1 - p^{\phi}$), and both players incur cost c ∈ (0, 1). I assume for simplicity of exposition that both players are risk neutral (that is, $u_i(x) = x$).
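A minimal sketch of the resulting payoffs may help fix ideas; the parameter values below are assumptions for illustration only:

```python
# A minimal sketch of the take-it-or-leave-it payoffs (p_w, p_s and c are
# assumed values for illustration, not from the article).
P = {"w": 0.3, "s": 0.7}   # A's probability of victory, by type
C = 0.15                   # each side's cost of war

def payoffs(a_type, x, b_accepts):
    """Return (A's payoff, B's payoff) for an offer x in [0, 1]."""
    if b_accepts:
        return x, 1 - x                    # peaceful split of the territory
    p = P[a_type]
    return p - C, (1 - p) - C              # expected war payoffs (risk neutral)

print(payoffs("w", 0.55, True))    # (0.55, 0.45)
print(payoffs("w", 0.55, False))   # (0.15, 0.55): a B who knows A is weak prefers war
```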
The intuition for the game’s equilibrium is simple. Suppose that A observes ({w, k} ∨ {w, nk}). This means that A is weak, but also that A is not sure whether or not B knows that she is weak, because A assigns positive probability to {w, nk} being the actual state. Indeed, if {w, nk} were the true state, B would observe ({w, nk} ∨ {s, nk}), and hence would assign positive probability to the possibility that A is strong. In other words, upon observing ({w, k} ∨ {w, nk}), A assigns positive probability to B thinking she is strong. But in that case (that is, if B thinks A might be strong), then B might be willing to accept a distribution of the pie that reflects this. In turn, this means that A could make an offer x corresponding to a strong type, which B would accept given her beliefs. In equilibrium, therefore, A makes an offer that grants herself a large portion of the pie, with the hope that B does not actually observe the true distribution of power. Because B may, however, observe it (since with probability $p_1$ the state is {w, k}), war occurs with positive probability. For certain combinations of the parameters, I even show that war occurs with positive probability in all perfect Bayesian equilibria of the game.
Whether this can be an equilibrium strategy depends, of course, on the probability of {w, k} being the true state of the world given that A observes ({w, k} ∨ {w, nk}), as well as on the probability of {s, nk} being the true state of the world when B observes ({w, nk} ∨ {s, nk}). In particular, it is crucial that the probability of being at {w, k} be sufficiently low when A observes ({w, k} ∨ {w, nk}), so that A would want to take the risk of bluffing in the hope of not facing a knowledgeable B. It is also necessary that {s, nk} be sufficiently probable for B to accept upon observing ({w, nk} ∨ {s, nk}), or else she would rather take the risk of rejecting. I now present the logic of the equilibrium in more detail.
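As a running numerical illustration for what follows, the sketch below (all parameter values are assumptions, not taken from the article) computes the two posteriors that drive the analysis and checks the condition that reappears in Lemma 2 and Proposition 1:

```python
# Numerical sketch (all values assumed): the posteriors that drive the
# equilibrium, and the condition of Lemma 2 / Proposition 1 below.
p1, p2, p3, p4 = 0.05, 0.45, 0.45, 0.05   # priors on {w,k},{w,nk},{s,nk},{s,k}
p_w, p_s, c = 0.3, 0.7, 0.15              # capabilities and cost of war

q2_A = p2 / (p1 + p2)   # A's belief in {w,nk} upon observing ({w,k} v {w,nk})
q2_B = p2 / (p2 + p3)   # B's belief in {w,nk} upon observing ({w,nk} v {s,nk})

# War occurs in every pure-strategy equilibrium when the bluff is tempting for
# A (right inequality) and the uninformed B still accepts (left inequality):
assert 2 * c / q2_B > p_s - p_w > 2 * c / q2_A   # 0.6 > 0.4 > 0.33...
```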
Separating equilibrium
Consider first the possibility of an equilibrium in which each type of A makes a different offer. Without loss of generality, assume that A offers $x_L$ upon observing ({w, k} ∨ {w, nk}), but $x_H$ upon observing ({s, nk} ∨ {s, k}), where $x_L$ ($x_H$) denotes a low (high) offer. In this case, B always infers from A’s offer which type she is facing, and war never occurs. But does such an equilibrium exist? A weak A may be tempted to deviate and offer $x_H$, which B, given her posterior beliefs, will accept upon observing either ({w, nk} ∨ {s, nk}) or ({s, k}). The risk that A runs, of course, is that B actually observes ({w, k}) and hence knows that A is bluffing, in which case war occurs. But if the temptation is sufficiently large and the probability that B discovers the truth (that is, $p_1$) sufficiently low, then A will be willing to take that risk and bluff. In that case, A deviates from the equilibrium path, and this therefore cannot form the basis of a perfect Bayesian equilibrium. There is hence no separating equilibrium (details of the derivation and proofs are in the appendix).
Lemma 1. Assume $p^{s} - p^{w} > 2c/q_{2}^{A}$, where $q_{2}^{A} = p_{2}/(p_{1} + p_{2})$. Then there is no peaceful separating perfect Bayesian equilibrium.Footnote 10

This is intuitive. $q_{2}^{A}$ is A’s posterior probability of being at {w, nk} upon observing ({w, k} ∨ {w, nk}). A large $q_{2}^{A}$ means a low probability of being caught deviating, and hence that A will be more willing to deviate.
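Under the assumed parameter values introduced above, a quick check illustrates this deviation incentive (taking α = β = 0 for simplicity); this is an illustrative sketch, not part of the article’s formal results:

```python
# Lemma 1's deviation incentive under the assumed parameters above (alpha =
# beta = 0): a weak A compares the honest offer x_L with a bluff x_H that
# triggers war only if B happens to observe ({w,k}).
p1, p2 = 0.05, 0.45
p_w, p_s, c = 0.3, 0.7, 0.15

x_L, x_H = p_w - c, p_s - c
q1_A = p1 / (p1 + p2)                  # probability that the bluff is caught

separating_payoff = x_L                                 # always accepted: 0.15
bluff_payoff = q1_A * (p_w - c) + (1 - q1_A) * x_H      # 0.51

assert bluff_payoff > separating_payoff  # A_w bluffs: no separating equilibrium
```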
Pooling equilibrium
Consider now a pooling equilibrium in which both types of A offer x*. Clearly, x* must be large enough to satisfy a strong A, as otherwise A would prefer fighting to that agreement. But if x* is too large, then B will reject it upon observing ({w, nk} ∨ {s, nk}), leading to war–an outcome that neither type of A would like.
Lemma 2. Assume $2c/q_{2}^{B} > p^{s} - p^{w} > 2c/q_{2}^{A}$, where $q_{2}^{A} = p_{2}/(p_{1} + p_{2})$ and $q_{2}^{B} = p_{2}/(p_{2} + p_{3})$. Then there exists a pooling perfect Bayesian equilibriumFootnote 11 in which all types of A offer $x^* = p^{s} - c$, which B accepts unless she observes ({w, k}), in which case she rejects and war ensues.
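Under the same assumed parameters, B’s acceptance calculus at ({w, nk} ∨ {s, nk}) can be verified directly; again a sketch for illustration only:

```python
# B's acceptance decision at ({w,nk} v {s,nk}) under the pooling offer
# x* = p^s - c (same assumed parameters as above).
p2, p3 = 0.45, 0.45
p_w, p_s, c = 0.3, 0.7, 0.15

x_star = p_s - c                       # 0.55
q2_B = p2 / (p2 + p3)                  # B's belief that A is weak

accept_payoff = 1 - x_star                                        # 0.45
war_payoff = q2_B * (1 - p_w - c) + (1 - q2_B) * (1 - p_s - c)    # 0.35

assert accept_payoff >= war_payoff     # an uncertain B accepts
# At ({w,k}), B knows A is weak and rejects: 1 - p_w - c = 0.55 > 0.45.
```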
The main proposition then follows immediately: for certain combinations of $p^{\phi}$ and probability distributions on Nature’s moves, there are no peaceful equilibria, and war occurs with positive probability.
Proposition 1. Assume $2c/q_{2}^{B} > p^{s} - p^{w} > 2c/q_{2}^{A}$. Then in every pure-strategy perfect Bayesian equilibrium, war occurs with probability $p_1$.
This implies that even if both A and B know each other’s attributes (that is, {w, k} is the true state of the world), war occurs in equilibrium.Footnote 12
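A small Monte Carlo sketch of the equilibrium path, with the same assumed parameters, confirms that war occurs at frequency $p_1$:

```python
# Monte Carlo sketch of the equilibrium path (assumed parameters as above):
# war occurs exactly when Nature draws {w,k}, i.e. with probability p1.
import random

p1, p2, p3, p4 = 0.05, 0.45, 0.45, 0.05

def play_once(rng):
    state = rng.choices(["wk", "wnk", "snk", "sk"], [p1, p2, p3, p4])[0]
    # Both types of A pool on x* = p^s - c; B rejects only at ({w,k}).
    return state == "wk"               # True iff war occurs

rng = random.Random(0)
war_freq = sum(play_once(rng) for _ in range(100_000)) / 100_000
print(war_freq)                        # close to p1 = 0.05
```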
Could this inefficiency result be an artefact of the simple bargaining protocol? A first limitation of the one-shot bargaining model above is that it prevents learning. Repeating the game may, for example, allow B to reject without fearing immediate war, and as a result A might learn B’s type by varying her offers over time. This intuition turns out to be correct only under restrictive conditions. To see why, suppose first that the model is modified to include two periods, keeping only the two types of B as above–a knowledgeable and a non-knowledgeable one. In that case, $A_w$ can make an offer in the first period that only the non-knowledgeable B would accept, and the knowledgeable type would accept in the second period. However, this result depends on having only two types of B–a very restrictive assumption, as there is typically a continuum, and hence an infinite number, of types. With two periods and three types, for example (say, knowledgeable, non-knowledgeable and partially knowledgeable), A may not be able to separate all types. Instead, she may screen out one type in the first period and be left with two types in the second period–a scenario equivalent to the one in the simple model above, so that war occurs with positive probability. This result is discussed formally in the online appendix.
The reader might think that, with an infinity of periods, A could progressively screen all types of B until only the knowledgeable type were left, to whom she could make a generous offer with probability one. This is not the case, however, because $B_k$ (the knowledgeable type of B) will not wait forever to be rewarded with that generous offer. If there are too many types to screen, $B_k$ would rather fight today than wait for A to iteratively screen out all the other types, as the sketch below suggests. This result is discussed more formally in the online appendix.
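One back-of-the-envelope way to see the waiting constraint, under the added assumption of per-period discounting at rate δ (discounting is not part of the baseline model, so this is a hedged sketch rather than the article’s formal argument):

```latex
% Hedged sketch, assuming payoffs received T periods from now are discounted
% by \delta^T. If screening takes T rounds before B_k receives the generous
% offer x^w = p^w - c, then waiting beats fighting today only if
\[
  \delta^{T}\,\bigl(1 - p^{w} + c\bigr) \;\ge\; 1 - p^{w} - c
  \quad\Longleftrightarrow\quad
  T \;\le\; \frac{\ln\bigl((1 - p^{w} - c)/(1 - p^{w} + c)\bigr)}{\ln \delta}.
\]
% Both logarithms are negative, so the bound on T is finite: with enough
% types to screen, B_k prefers war today.
```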
As another improvement to the model, we could imagine that if $B_k$ had the opportunity to tell A that he knows she is weak, then war might be avoided. $B_k$ could, for example, make statements about A’s lack of resolve, poor troop training or high costs of war. However, introducing communication between the players has no effect on the equilibrium. Suppose, for example, that B could send a message immediately before A’s move. As long as the announcement has no direct effect on either player’s payoff, it is easy to show that B’s message also has no effect on the probability of war (see the online appendix for a more formal treatment). The logic is simple. A message could only lower the probability of war by increasing the probability that A, upon receiving it, would make a generous offer to $B_k$. If so, however, all types of B–both knowledgeable and non-knowledgeable–would want to send that message. The message therefore could not discriminate between types, and hence would not lead A to make a generous offer to its sender, contradicting the assumption above.
The 2003 Iraq War
The negotiations over inspections in the lead-up to the 2003 Iraq War illustrate many aspects of this model. To simplify, Iraq (A in the model above) had two possible types: a compliant one, without weapons of mass destruction (WMDs), and a non-compliant one, with WMDs. There were likewise two types of opponent (B): a knowledgeable one, who knew about the absence of WMDs, and an ignorant one, who did not.
One interpretation is that B is Iran. Indeed, archives reveal that an important cause for Hussein’s choices was his belief that Iran may have thought Iraq had WMDs. In terms of the model above, this is equivalent to A believing that B thinks she is strong and making high offers as a result (that is, refusing inspections).Footnote 13 Hussein himself explicitly stated this reason in his post-capture interrogations:
The threat from Iran was the major factor as to why he did not allow the return of the UN inspectors. Hussein stated he was more concerned about Iran’s discovering Iraq’s weaknesses and vulnerabilities than the repercussions of the United States for his refusal to allow UN inspectors back into Iraq.Footnote 14
Another reading of the events is that B represents the United States. Saddam Hussein knew, for practical purposes, of American military superiority.Footnote 15 Moreover, there is substantial evidence to suggest that the United States also knew of Iraq’s capabilities and (absence of) WMDs. Then-CIA Director Tenet testified that ‘We said that Saddam did not have a nuclear weapon and probably would have been unable to make one until 2007 to 2009’. In addition, a 1998 International Atomic Energy Agency (IAEA) report stated that, ‘based on all credible information to date, the IAEA has found no indication of Iraq having achieved its program goal of producing nuclear weapons or of Iraq having retained a physical capability for the production of weapon-usable nuclear material or having clandestinely obtained such material’ (Pfiffner 2004).Footnote 16
However, Hussein did not know how much exactly the United States knew–that is, whether he was in state {w, k} or {w, nk}. He had initially assumed that the United States must know that the evidence presented about the WMDs was erroneous (Duelfer and Dyson 2011). Yet statements by the US administration added to the confusion–for example, when Bush declared in December 2002 that ‘we do not know whether or not [Iraq] has a nuclear weapon’. This uncertainty about what the United States really believed led Hussein to refuse inspections and play tough, in the same way that A in the model above makes a large offer to B.Footnote 17
Admittedly, the lead-up to the Iraq War of 2003 was a far more complex negotiation than the simple take-it-or-leave-it protocol can represent. It involved multiple players, and for Hussein a difficult balancing act of convincing the United States that he was not a danger, while making sure that Iran would think he was. Moreover, the negotiations were not just about a territory, but also about capabilities and knowledge, since what the US coalition was asking was to curtail Iraq’s power, and to make her existing capabilities common knowledge by allowing inspections.Footnote 18 Nevertheless, the dynamic between the three countries illustrates some of the incentives and strategies uncovered in the simple, ideal-type game presented above. Other examples below further illustrate the model.
Reaching Common Knowledge
The model above showed that, to avoid war, countries must bridge their perceptions not only of each other’s capabilities and resolve, but also of what each knows. Narrowing this gap, however, may prove more difficult than in the case of first-order asymmetric information. First, the need for higher-order information means that states not only need to send a message, but also to ensure that the message is received and correctly understood. A return message–a ‘confirmation’–is therefore also needed, which is itself subject to further miscommunication and misperception. Second, states often have an incentive to misrepresent what they know. Finally, I discuss ways in which states may overcome some of these difficulties and build mechanisms and institutions to create common knowledge.
Learning what the other knows
States acquire information about each other in two main ways: intelligence and signalling. Both can fail, however, and the mere possibility of these failures leads countries to always wonder: was my signal misunderstood? Did the other respond in that way because he does not know or because he knows and still chooses to make that offer? And what has he discovered about me?
Intelligence
Intelligence involves the use of sensors, spies or the decryption of messages to collect bits of information that may then be assembled into a coherent whole.Footnote 19 Intelligence often fails, however. Human intelligence relies on interrogations and conversations with those who have information, but may fail due to manipulation, false information being fed to the agent, or a simple inability to penetrate the centre of power (Ferris 2007). Other sources of failure include cognitive biases or organizational problems (Garthoff 2015; Shore 2005). Signals and imagery intelligence, which consist of information collected using radars, sonars or satellites, are limited to what is visible and tangible. Foliage alone may be sufficient to thwart the efforts of a drone or satellite. Finally, open source intelligence relies on the analysis of journals, radio and television, but is doubly limited: first by the information available to the other country’s media, and second by possible manipulation.
Because these limitations of intelligence gathering also apply to the opponent, knowing how much the adversary knows is difficult. Has the other discovered my secret nuclear programme, or the absence thereof? Are spies providing information to the opponent? What is made of that information? And does the other believe it? Intelligence may also be manipulated to feed false information to the enemy. But did they believe that information? And did they think that I thought they believed it? The mere possibility of intelligence failures, in short, is sufficient to create uncertainty about what the other knows.
Diplomats spend considerable effort analysing what their counterparts think, their counterparts’ perceptions of their own understanding, and so on. A comprehensive review of pre-World War I British and French diplomatic cables thus reveals that their concern was often to establish facts not about capabilities or resolve themselves, but rather about the other’s thoughts, beliefs and estimates.Footnote 20 For example, a 1913 cable from Berlin reports: ‘It is thought, or pretended to be thought, in some German circles connected with the Navy, that England’s political affairs would render it difficult for any British Government to justify Naval increases’.Footnote 21
The July 1914 crisis illustrates in a dramatic way how important it is for diplomats to know what the other knows. Immediately after the assassination of Franz Ferdinand, Austria asked Germany for her support in the Serbian crisis. In particular, Germany was informed of the demands that Austria would make of Serbia, and that these demands were purposefully harsh so as to elicit Serbia’s refusal.Footnote 22 Wilhelm II, the German emperor, responded that ‘Germany would support the Monarchy through thick and thin’.
Knowledge of Germany’s knowledge of Austria’s plan was crucial. If other capitals had known that Germany knew it, and yet had not pressured Austria to exercise restraint, then her ‘revisionist’ intentions would have been clearer. Thus, in a confidential cable to Foreign Secretary Edward Grey, the British ambassador notes: ‘France and Britain are asking Germany to moderate Austria, but her intentions are unclear. They depend for example on what Germany knew’.Footnote 23 This knowledge of Germany’s information was critical, as it would have facilitated Britain’s choice to enter the war, and possibly would have deterred Germany in the first place.
Furthermore, Germany knew of Britain’s ignorance and worked hard to make sure that British leaders would remain uninformed of Germany’s knowledge: ‘[German] Secretary of State again denied that he had had any previous knowledge of terms of Austro-Hungarian note’;Footnote 24 ‘German Ambassador read me a telegram from German Foreign Office saying that Germany had not known beforehand and had had no more than other Powers to do with the stiff terms of Austrian note to Serbia’ and ‘Secretary of State again repeated very earnestly that he had had no previous knowledge of contents of Austro-Hungarian note, although he had been accused of knowing all about it’.Footnote 25 The French also did not know: ‘M. Jules Cambon asked M. v. Jagow [note: the German Secretary of State] what were the terms of the Austrian note. The latter replied that he did not know’.Footnote 26 At the same time, European diplomats tried hard to understand how much Germany really knew: ‘I am privately informed that German Ambassador knew text of Austrian note to Servia before it was sent off and telegraphed it to the German Emperor, but I am not able to verify this, though I know from German Ambassador himself that he endorses every line of it’.Footnote 27
A similar issue arose between Austria and Russia. On 17 July, Austro-Hungarian ambassador Szapáry ‘expressed a desire to see Sazonov [Russian Foreign Minister] as soon as possible’ (Schilling 1925, 26). As McMeekin (2013, 133–4) puts it, ‘Berchtold [Foreign Minister of Austria-Hungary] wished to find out whether the Russians knew what he was up to’. In other words, Austria wanted to know how much Russia knew (second-order uncertainty). Furthermore, the Russians also wanted to ensure that the Austrians did not know that they knew (third-order uncertainty): ‘Just as Berchtold wanted to be sure the Russians did not know what he was up to, so did the latter not want the Austrians to know that they were cottoning to the game’ (McMeekin 2013, 134). As a result, Austria believed that it could finally draft the 48-hour ultimatum to Serbia since, ‘as far as Berchtold knew [...], Austria had kept the other powers entirely in the dark [...]’. In fact, Berchtold continued to maintain elaborate cover stories to keep his planned ultimatum secret, unaware that the secret had by then been revealed to most European leaders.
Signalling
States can also volunteer information by sending various forms of signals. A large literature in international relations has focused on this strategic exchange of information (Fearon 1997; Kydd 2005; Schelling 1980; Trager 2010), including the use of audience costs (Fearon 1994; but see also Snyder and Borghard 2011) or even cheap talk (Crawford and Sobel 1982; Farrell and Gibbons 1989; Sartori 2002; Sartori 2013; Trager 2010). These messages aim to convey information about capabilities and resolve, but also beliefs about those of the other. Yet signals also often fail. They may fail to be detected, or noise may be mistaken for informative signals (Jervis 2002; Mercer 2010), and they may be affected by a variety of biases (Holsti 1962; Jervis 2015; Yarhi-Milo 2013). These well-known challenges associated with conveying information are amplified when dealing with common knowledge, because of the need for higher-order levels of information. A signal must not only be sent; confirmation of its reception and interpretation must also be conveyed back to the sender. The recipient might even have confirmed receipt, but not know whether the sender has received that confirmation. This problem, known as the coordinated attack problem, illustrates one of the difficulties associated with reaching common knowledge.Footnote 28 There is also evidence of our cognitive limitations in dealing with higher-order beliefs (Nagel 1995). In particular, higher levels of the thought process are associated with more noise and suboptimal choices (Kübler and Weizsäcker 2004).
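A minimal simulation conveys the intuition behind the coordinated attack problem: if each message or acknowledgement can be lost with some probability (the loss rate below is an arbitrary assumption), no finite exchange of confirmations ever produces common knowledge:

```python
# Minimal sketch of the coordinated attack problem (the loss probability eps
# is an arbitrary assumption): each acknowledgement can be lost, so the chain
# of 'I know that you know that...' is always finite.
import random

def exchange(max_rounds, eps=0.2, seed=1):
    """Alternate confirmations until one is lost; return how many arrived."""
    rng = random.Random(seed)
    delivered = 0
    for _ in range(max_rounds):
        if rng.random() < eps:         # this message is lost in transit
            break
        delivered += 1
    return delivered

# However many rounds are attempted, the sender of the last message never
# learns whether it arrived, so common knowledge is never reached.
print(exchange(max_rounds=10))
```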
To summarize, the transmission of information often fails because of signal misinterpretation or intelligence failures. The very fallibility of these systems, or even the possibility that they might fail, means that states can hardly ever be certain of what their opponent knows, and common knowledge of information partitions is therefore unlikely. In addition, because the perception of what states know matters, states might not be willing to convey their private information about what they know, even if they could. Indeed, states are likely to be strategic with respect to the information they release about what they know of the adversary. Just as with first-order incomplete information, states have an incentive to misrepresent their higher-order private information.
At the beginning of the Cuban missile crisis, for example, President Kennedy did not want the Soviet Union to know what he knew about the missiles. When he met with Soviet Foreign Minister Andrei Gromyko on 18 October, he was assured that all Soviet assistance to Havana was ‘pursued solely for the purpose of contributing to the defense capabilities of Cuba’.Footnote 29 Kennedy did not then inform him of his knowledge. He also kept to his public schedule to avoid arousing suspicion that he knew. Once the Navy began to enforce the quarantine, however, communications were purposefully sent uncoded. The point then was to ensure that the Soviets would not misperceive American intentions, as well as to ensure their knowledge of the Americans’ knowledge.
Creating common knowledge
There are, however, ways in which states can convey information about attributes such as their military capabilities so that they become common knowledge. They may, for instance, choose to reveal them publicly. Displays of artillery on national commemoration days, for example, or heavily publicized tests of new weapons thus not only signal strength, but also ensure that these capabilities are common knowledge. Joint military manoeuvres may also serve the same purpose: by granting the other more intimate access to her resources, military capabilities become common knowledge.Footnote 30 Similarly, states may want to publicly convey their knowledge of the other’s capabilities. They may, for example, mention the precise location of a nuclear site to convey their information in such a way that it becomes common knowledge. In the Cuban missile crisis, for example, reconnaissance photos were used to demonstrate the United States’ knowledge of the hidden ballistic missiles on Cuban soil.
Far more difficult, however, is generating common knowledge of the absence of certain capabilities. Conveying positive knowledge–for example, using public displays of force–is far easier than conveying negative knowledge–showing that certain capabilities do not exist. Saddam Hussein thus struggled to prove the absence of a WMD programme, whereas the Bush administration could easily pretend it believed Iraq had one. The willingness to co-operate with inspectors was a key signal in that regard, but came at a cost, as the inspectors had to be trusted to be impartial and not to divulge other strategic information. Similarly, while public displays of force can easily create common knowledge that I have at least x units, showing that I have no more than z units is far more difficult. Public announcements can only reveal positive facts that a knowledgeable party alone would know.
Conclusion
The role of information in bargaining outcomes has been a central theme in existing research on interstate conflict. However, models that incorporate asymmetric information have typically limited their attention to private information about fundamentals–first-order incomplete information–while assuming common knowledge of information partitions. Yet, assuming that states know whether their opponent knows the distribution of power or costs is implausible. The very uncertainty that is inherent to information collection–whether it be through intelligence or signalling–implies that states can rarely be certain of what their adversary managed to infer about them. States, in other words, are almost always uncertain of each other’s uncertainty.
The importance of relaxing the assumption of common knowledge about information itself was illustrated using a simple bargaining model in which all equilibria included a positive probability of war. Because A thought that B might think she was strong, A had an incentive to bluff and demand large concessions from B. With positive probability, however, B was well aware of A’s weakness, and hence rejected the offer in favour of war. The results have important implications for our understanding of the conditions under which war can emerge, for the way in which we should model uncertainty, and for our empirical approach to testing the role of uncertainty in international negotiations.
The importance of higher-order uncertainty also implies that states might behave strategically with regards to how they release or hide information about what they know of the enemy (Prunckun 2012, Ch. 8). Classified information, for example, often hides not only what you have, but also what you know about the other. Similarly, credible signalling is not limited to conveying information about your capabilities and resolve, but also includes informing (or not) the other about what you know of him. Just as in the case of first-order intelligence, states have an incentive to misrepresent their beliefs about the other and to deceive them about what they know and what they do not know. Depending on their needs, they might exaggerate their perception of the other’s capabilities, or downplay them.
The results also have important empirical implications. Inferences about the role of private information may be incorrect if they ignore higher-order information–whether they rely on historical case studies or large-N analyses. Two otherwise identical cases with the same first-order information structure may result in opposite outcomes depending on higher-order beliefs. For example, before launching the surprise attack that would start the Yom Kippur War, Egypt’s Sadat sent scouts across the border to ensure that the Israelis had not discovered the pre-emptive strike they were planning–that is, to ensure that they indeed did not know what he knew. But consider the hypothetical case in which Sadat had erroneously thought that Israel knew about the attack, and hence cancelled his surprise attack. In that case the inference using data on first-order information only (that is, that Israel did not know) would have been that incomplete information does not correlate with war onset. Only by incorporating second-order information would the correct inference have been drawn.Footnote 31
Explicitly modelling higher-order uncertainty leads to a more refined understanding of strategic interactions, with potentially important consequences for the efficiency of bargaining. If states need to know what each other knows, then the conditions for peace are more stringent than the simple exchange of information about first-order attributes such as capabilities or resolve. More generally, modelling larger information structures may lead to a deeper understanding of states’ strategies and a more subtle understanding of their communication. Future research will need to focus on the mechanisms states use to address the problems associated with higher-order uncertainty. International organizations, mediation and monitoring, in particular, may all play a significant role in the creation of common knowledge.
Acknowledgements
I thank Constantine Boussalis, Gail McElroy, James Morrow, William Phelan, Anne Sartori, Peter Stone, Richard Van Weelden, George Yin and anonymous reviewers for helpful comments. Fiona May and Christian Oswald provided exceptional research assistance.
Appendix
Proofs
Proof of Lemma 1 (no separating equilibrium). Consider first the possibility of a separating equilibrium in which A offers $x_L$ upon observing ({w, k} ∨ {w, nk}) but $x_H$ upon observing ({s, nk} ∨ {s, k}). B’s posterior probabilities $q_i$ of being at state $s_i$ upon observing offer $x_k$ and information set (θ) are then:
$$q(\{w, nk\} \mid x_L,\ (\{w, nk\} \vee \{s, nk\})) = 1, \qquad q(\{s, nk\} \mid x_H,\ (\{w, nk\} \vee \{s, nk\})) = 1,$$
$$q(\{w, k\} \mid \cdot,\ (\{w, k\})) = 1, \qquad q(\{s, k\} \mid \cdot,\ (\{s, k\})) = 1.$$
First note that if there is to be a peaceful equilibrium, then it must be that
$$p^{w} - c \;\le\; x_L \;\le\; p^{w} + c \qquad {\rm and} \qquad p^{s} - c \;\le\; x_H \;\le\; p^{s} + c.$$
Let $x_L = p^{w} - c + \alpha$ and $x_H = p^{s} - c + \beta$, where $0 \le \alpha, \beta \le 2c$. Clearly, given her beliefs, B always accepts A’s offer. However, note that $A_w$’s utility from playing $(x_L, x_H)$ is $p^{w} - c + \alpha$. So $A_w$ will deviate to an offer $x_H$ instead of $x_L$ upon observing ({w, k} ∨ {w, nk}) if
$$q_1^A (p^{w} - c) + q_2^A (p^{s} - c + \beta) \;>\; p^{w} - c + \alpha, \qquad {\rm that\ is,\ if} \qquad p^{s} - p^{w} \;>\; \frac{\alpha (p_1 + p_2)}{p_2} - \beta.$$
But we assumed that $p^{s} - p^{w} > \frac{2c(p_1 + p_2)}{p_2}$, which is clearly greater than or equal to $\frac{\alpha(p_1 + p_2)}{p_2} - \beta$ for any $0 \le \alpha, \beta \le 2c$. This means that A will deviate (regardless of B’s beliefs off the equilibrium path), and hence that there can be no separating equilibrium.
Proof of Lemma 2 (pooling equilibrium). Clearly there can be no pooling equilibrium in which both types of A offer $x^* < p^{s} - c$, as a strong A would prefer war, and hence deviate. So consider a pooling equilibrium in which A offers $x^* = p^{s} - c$. In a pooling equilibrium, B’s posteriors are dictated by Bayes’ rule:
$$q_2^B = \frac{p_2}{p_2 + p_3}, \qquad q_3^B = \frac{p_3}{p_2 + p_3}$$ upon observing ({w, nk} ∨ {s, nk}), with degenerate beliefs at ({w, k}) and ({s, k}).
B’s beliefs off the equilibrium path must be such that she assumes from any offer $x' \ne x^*$ that she is facing a weak A. If not, then at least $A_w$ will want to deviate from offering $x^*$, and there can hence be no equilibrium in which both types of A offer $x^*$.
Consider now what B will do upon observing $x^*$. At ({w, k}), B will reject because $x^* = p^{s} - c > p^{w} + c$. At ({s, k}), B always accepts. Finally, at ({w, nk} ∨ {s, nk}), B accepts $x^* = p^{s} - c$ if the utility of accepting is greater than the utility of fighting weighted by the probability of the different types of A, that is, if
$$1 - p^{s} + c \;\ge\; q_2^B (1 - p^{w} - c) + q_3^B (1 - p^{s} - c), \qquad {\rm that\ is,} \qquad p^{s} - p^{w} \;\le\; \frac{2c}{q_2^B},$$
which has been assumed.
Does A have an incentive to deviate from this strategy? Clearly, a strong A will not deviate, given B’s off-the-equilibrium-path beliefs. But would a weak A deviate? $A_w$ would deviate to offering $p^{w} + c$ (any higher offer will be rejected, and a lower offer is clearly not rational) if:
$$p^{w} + c \;>\; q_1^A (p^{w} - c) + q_2^A (p^{s} - c), \qquad {\rm that\ is,\ if} \qquad p^{s} - p^{w} \;<\; \frac{2c(p_1 + p_2)}{p_2}.$$
But $p^{s} - p^{w} > \frac{2c(p_1 + p_2)}{p_2}$ was assumed, so A would not deviate, and hence this constitutes an equilibrium.
Proof of Proposition 1. The proof follows directly from Lemmas 1 and 2. To show that war occurs with positive probability in every PBE, simply replace $x^* = p^{s} - c$ in the proof of Lemma 2 with $x^* = p^{s} - c + \gamma$, where $\gamma \in [0, 2c]$ is such that
$$1 - (p^{s} - c + \gamma) \;\ge\; q_2^B (1 - p^{w} - c) + q_3^B (1 - p^{s} - c), \qquad {\rm that\ is,} \qquad \gamma \;\le\; 2c - q_2^B (p^{s} - p^{w}).$$
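As a sanity check, the inequalities used in the proofs can be verified numerically; the following minimal sketch uses assumed parameter values satisfying the condition of Proposition 1 (the values are illustrative, not from the article):

```python
# Numerical sanity check of the appendix inequalities, under assumed
# parameters satisfying 2c/q2_B > p^s - p^w > 2c/q2_A (not from the article).
p1, p2, p3 = 0.05, 0.45, 0.45
p_w, p_s, c = 0.3, 0.7, 0.15
q1_A, q2_A = p1 / (p1 + p2), p2 / (p1 + p2)
q2_B = p2 / (p2 + p3)

# Lemma 1: the bluffing condition holds (so no separating equilibrium).
assert p_s - p_w > 2 * c * (p1 + p2) / p2

# Lemma 2: B accepts x* = p^s - c at ({w,nk} v {s,nk})...
x_star = p_s - c
assert 1 - x_star >= q2_B * (1 - p_w - c) + (1 - q2_B) * (1 - p_s - c)
# ...and a weak A does not deviate to the safe offer p^w + c.
assert p_w + c <= q1_A * (p_w - c) + q2_A * x_star
```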