
Rationality and Psychology in International Politics

Published online by Cambridge University Press:  15 February 2005

Jonathan Mercer
Affiliation:
Jonathan Mercer is Associate Professor of Political Science at the University of Washington, Seattle. He can be reached at mercer@u.washington.edu.

Abstract

The ubiquitous yet inaccurate belief in international relations scholarship that cognitive biases and emotion cause only mistakes distorts the field's understanding of the relationship between rationality and psychology in three ways. If psychology explains only mistakes (or deviations from rationality), then (1) rationality must be free of psychology; (2) psychological explanations require rational baselines; and (3) psychology cannot explain accurate judgments. This view of the relationship between rationality and psychology is coherent and logical, but wrong. Although undermining one of these three beliefs is sufficient to undermine the others, I address each belief—or myth—in turn. The point is not that psychological models should replace rational models, but that no single approach has a lock on understanding rationality. In some important contexts (such as in strategic choice) or when using certain concepts (such as trust, identity, justice, or reputation), an explicitly psychological approach to rationality may beat a rationalist one.

I thank Deborah Avant, James Caporaso, James Davis, Bryan Jones, Margaret Levi, Peter Liberman, Lisa Martin, Susan Peterson, Jason Scheideman, Jack Snyder, Michael Taylor, two anonymous reviewers, and especially Robert Jervis and Elizabeth Kier for their thoughtful comments and critiques. Jason Scheideman also helped with research assistance.

Type
Research Article
Copyright
© 2005 The IO Foundation and Cambridge University Press

Rational choice theorists and political psychologists agree that psychology explains only deviations from rationality. The primary task of political scientists, according to Downs, is to study “rational political behavior, not psychology or the psychology of political behavior.”1

Downs 1957, 9–10.

According to this view, rational explanations should avoid psychology because psychology explains only mistakes. As Satz and Ferejohn note, “mental entities need not figure in the best rational-choice explanations of human action.”2 Harsanyi suggests that when behavior appears to be irrational, then as a last resort analysts might turn to “specific assumptions about the psychological mechanisms underlying human behavior.”3 Yet even in this case, psychology can help only after the establishment of a rational baseline.4 Knowing irrational behavior depends on knowing rational behavior. For example, psychology can explain wars that result from misperceptions, but only after the development of a rational model.5 Political psychologists agree that psychology addresses irrational processes that result in mistaken judgments. For example, Lebow has shown how emotion and cognition subvert rational deterrence, McDermott has shown how systematic cognitive biases encourage policymakers to make imprudent gambles, and Jervis has shown how cognitive biases cause misperceptions that undermine (or inadvertently support) policy.6

See Lebow 1981 and 1987; McDermott 1998; and Jervis [1970] 1989, 1976, and 1989.

In each case, psychology explains deviations from rationality.

The ubiquitous yet inaccurate belief in international relations scholarship that cognitive biases and emotion cause only mistakes distorts the field's understanding of the relationship between rationality and psychology in three ways. First, the belief that psychology explains mistakes (or deviations from rationality) has led to the belief that rationality itself must be free of psychology. Second, the belief that psychology explains mistakes has led international relations theorists to believe that psychological explanations need rational baselines. Finally, the belief that psychology explains mistakes has led theorists to believe that psychology cannot explain accurate judgments. Each belief depends on the others. For example, rationality must be free of psychology if psychology explains only deviations from rationality; analysts can only know what constitutes a deviation from rationality after establishing a rational baseline; they can only insist on establishing a rational baseline to assess the role of psychology if that baseline is itself free of psychology. This view of the relationship between rationality and psychology is coherent and logical, but wrong. Although undermining one of these three beliefs is sufficient to undermine the others, I address each belief—or myth—in turn.

Rationality does not depend on psychology. This myth allows rationalists to imagine rational behavior independent of the mind, which is consistent with viewing psychology as explaining mistakes. After all, one cannot rely on mental phenomena as a key to rationality if mental phenomena are a source of mistakes. If rational models rely on psychology, then evidently psychology does more than explain mistakes. The first section of this article examines how political scientists came to believe that rationality is free of psychology. This belief, commonly held by rational choice theorists, can best be understood as part of an intellectual tradition in psychology and in economics that sought to eliminate all mental phenomena from explanations of human behavior. More puzzling is why political psychologists accept as their task explaining why people slip off rational baselines. I discuss a different intellectual tradition in psychology to account for political psychologists' belief that psychology explains only mistakes.

Psychology requires a rational baseline. As Elster noted, rational choice is above all else a normative theory.7

It explains how one should reason in order to be rational. Analysts can only know what is not rational—the domain of psychology—after establishing what is rational. Demanding the existence of a rational baseline before considering a psychological explanation makes sense if that baseline is free of psychology and if that baseline provides the best means to a given end. The second section of this article critiques this rational baseline argument. I use rational reputation models to illustrate the difficulty of excluding psychology from psychological concepts and the danger of relying on unacknowledged psychological assumptions.

Psychology cannot explain accurate judgments. Although recognizing that the first two beliefs are myths means psychology does more than explain mistakes, I address the “psychology of mistakes” myth by focusing on emotion. The last section discusses how emotion is necessary both to rationality and to basic international relations concepts such as trust, identity, and justice. For example, I show how the emotion in trust makes solutions to collective action problems possible. Rethinking the relationship between rationality and psychology should help rationalists overcome what one economist describes as an “aggressive uncuriosity” about psychology,8

and it should help political psychologists overcome their disinterest in accurate judgments.

Rationality Depends on Psychology

The belief that psychology explains mistakes implies that rationality is free of psychology: because rational models cannot be built on an irrational (or psychological) foundation, each must be distinct from the other. I argue that rational models are not distinct from, but rather depend on, psychological assumptions. I use Elster's definition of rationality as using the best means to achieve a given end.9

I use accurate outcomes (or judgments) to mean obtaining a given end and mistaken outcomes to mean not obtaining a given end. I also distinguish “process” from “outcome,” which makes it possible for nonrational (or nonnormative) decision-making processes to yield accurate judgments.10 In other words, just as a psychological process can result in either accurate or inaccurate judgments, a rational process can do so as well. Not only can actors make mistakes by failing to adhere to a rational process, but analysts can also make mistakes (or cause policymakers to make mistakes) by presenting as rational a process that actually undermines rationality.11

To avoid the terminological confusion that may result from the observation that psychological processes can be rational and that rational (choice) processes can be irrational, I adhere to the conventional view that models, processes, and baselines are rational to the extent that they conform to a rational choice (or “coherence”) standard of rationality. When necessary, I substitute “rationalist” for “rational choice” to distinguish it from “rational.”

Rational choice theorists and political psychologists can agree on Elster's definition of rationality while disagreeing over how to determine which “means to a given end” are best. After establishing two approaches to rationality—a coherence approach advanced by rationalists and a correspondence approach advanced by psychologists—I address why both rational choice theorists and political scientists who use psychology view rationality as free of psychology.12

For exceptions, see Sears, Huddy, and Jervis 2003, 10–12; and Bueno de Mesquita and McDermott 2004. I do not address the relationship between constructivism, psychology, and rationality. The constructivist focuses on the emergence and role of ideational structures and typically relies on the “logic of appropriateness” instead of the “logic of consequence” (or instrumental rationality) to explain behavior. For a discussion of both logics, as well as of strategic social construction, see Finnemore and Sikkink 1998. For a review of psychology and constructivism, see Goldgeier and Tetlock 2001.

Competing Approaches to Rationality

Rationalists and empiricists debated epistemology in the seventeenth, eighteenth, and nineteenth centuries. Rationalists were committed to deductive rigor, to logic, to mathematical proofs; empiricists were committed to direct experience. As summarized by Robinson, a psychologist and historian of psychological ideas: “If the empiricist traditionally has been content to discover what is, the rationalist considers what must be. The empiricist's evidence has always been the data of experience; the rationalist's, the necessary proofs following from axioms and propositions.”13

The empiricists won the debate in psychology and the rationalists won the debate in economics. Economists attempted to exclude psychology and its empirical tradition from their discipline and embraced the certainty of logic. This epistemological debate, which I portray as one between rationalists and psychologists, led to different approaches to rationality.

Rationalists view coherence as necessary to rational judgment. The coherence view of rationality rests on well-established norms about how one should make decisions, where a good decision is one that rests on logic and adheres to the principles of statistical inference. An incoherent judgment is one that, for example, allows the framing of information to influence judgment, incorrectly assesses probability, or violates the axioms of utility theory. Rational judgments must be free of internal contradictions.14

Dawes 1998, 497.

A coherence model of rationality implies a relationship between how one makes decisions and the desirability (or utility) of the outcome. Rational choice theories explain how one should reason, not how one actually reasons. The closer one adheres to the model, the more rational one's judgment; the more one deviates from the model, the more incoherent and thus irrational one's judgment. A perfectly rational process can yield an undesired outcome, such as in a prisoners' dilemma (PD), but this will be the product of the situation, not a property of the individual's judgmental process. Rational choice theory is primarily normative, not positive, for it explains how one should make decisions to reach a given outcome. Violation of these rules and standards means a process is incoherent, and rationalists assume that an incoherent process is likely to result in undesired outcomes. By assuming that people want to be rational, rationalists bridge the normative to the positive and make the approach testable.15

Colman 2003, 142. Stein discusses the tension between normative and positive explanations. Stein 1999.

A correspondence approach to rationality, which dominates in psychology, is both positive and nonnormative. It is positive as it concerns itself above all with actual decisions. It is nonnormative because it is agnostic on how one should reason: it does not advance rules or procedures but seeks to understand how people reason. Hammond, a psychologist, suggests “correspondence theorists are interested in the way the mind works in relation to the way the world works, while coherence theorists are interested in the way the mind works in relation to the way it ought to work.”16

Hammond 1996, 106. See also Colman 2003, 142.

Correspondence theorists study accurate as well as mistaken judgments. They investigate how people routinely make good decisions even though their brains are not loaded with statistical software. The mind can solve some problems better than others and typically relies on different heuristics in particular situations to help with decisions.17

Whereas rationalists emphasize how a rational process, ceteris paribus, produces accurate judgments, psychologists emphasize how nonnormative processes can result in accurate judgments. Thus the key difference between rationalists and psychologists is not over what they explain, but over how they explain it. Rationalists rely on deduction, statistics, and probability theory, whereas psychologists rely on induction and description of how the mind actually works. Neither focuses primarily on mistakes, though each emphasizes a different way of obtaining the same desired outcome.

Rationalists and Psychology

Behaviorists and neoclassical economists attempted to eliminate the “mind” from causal explanations of human behavior. It was not that mental phenomena caused mistakes, but that it was impossible to know what role mental phenomena played. Behaviorists and economists rejected the metaphysical in favor of the observable and material. They aimed to overthrow “folk psychology” and replace it with science. Although behaviorists did not address rationality directly, their attempt to substitute observable stimuli for mental phenomena is central to contemporary rational choice theory. Methodological reasons also drove economists to eliminate mental phenomena from their explanations. Their apparent success, coupled with a coherence epistemology, led them to believe that psychology is useful only to explain deviations from rationality. These two traditions explain rational choice theorists' belief that psychology explains mistakes.

Eliminating the mind from explanations of behavior required overthrowing folk psychology. The folk psychology of human action, also known as intuitive psychology, is the commonsense belief that one's desires and beliefs explain one's actions.18

For example, why did Bob go to the opera? Because he desires to hear an opera and believes he will hear one at the opera house. Using Bob's beliefs and desires inferred from his trip to the opera to explain that trip is teleological: the behavior explains the purpose for that behavior. If Bob desires to hear jazz, but goes to the opera, then analysts must either view him as irrational or “fix” Bob's desires and beliefs to make them consistent with his action. Because beliefs and desires constitute action and cause action, folk psychology is unfalsifiable. The theory has a few other problems. For example, it is indeterminate (without numerous ceteris paribus conditions) and it hinges on access to mental states that are difficult to know. These (and other) problems make folk psychology problematic and unscientific. Understandably, behaviorists, economists, and rational choice theorists sought to replace folk psychology with a causal theory of human action.

Behaviorism emerged in the early twentieth century as a response to the problematic and unscientific nature of folk psychology.19

In contrast, behavioralism emerged in the 1950s from the efforts of political scientists to apply scientific methods to a range of issues, including cognition and affect. See Merkl 1969; and Dahl 1969.

Behaviorists study behavior, not the mind. People respond to incentives, and to understand incentives is to understand behavior. In what became the behaviorist manifesto, Watson declared: “Psychology as the behaviorist views it is a purely objective experimental branch of natural science. Its theoretical goal is the prediction and control of behavior. Introspection forms no essential part of its methods, nor is the scientific value of its data dependent on the readiness with which they lend themselves to interpretation in terms of consciousness.”20 Psychologists should study behavior, not mental processes. As Skinner observed: “We do not need to try to discover what personalities, states of mind, feelings, traits of character, plans, purposes, intentions, or other perquisites of autonomous man really are in order to get on with a scientific analysis of behavior.”21

Skinner 1972, 12–13.

Analysts do not need to study personality because, as Skinner discovered, “Thinking is behaving. The mistake is in allocating the behavior to the mind.”22 Behaviorists accepted that private mental states exist, but rejected their causal power.23

See Hayes et al. 1999, 163; and Moore 1995, 81.

Despite behaviorism's thirty-year dominance in psychology, it failed to liberate psychology from the mind.24

For behaviorism's dominance, see Abelson 1994; and Hunt 1993.

One can teach a food-deprived pigeon a lot—behaviorists recently trained pigeons to discriminate between Monet and Picasso—and pigeon behavior is easy to predict in controlled settings.25 However, animals do more than respond to incentives. Chimpanzees demonstrate what Gestalt psychologists called “insight” by using sticks to create tools, and boxes to create ladders, before obtaining a reward.26

Hunt 1993, 294–95.

Even a rat's behavior in a maze exhibits variations that require the introduction of the rat's mind.27

Edward Tolman discovered what he called “purposive behaviorism” and demonstrated that rats really do think. Innis 1999.

Behaviorists believed they had discovered a universal law of behavior: “Pigeon, rat, monkey, which is which? It doesn't matter.”28 But it matters enormously whether one is talking about rats or about chimpanzees, let alone about humans. For example, the human need to justify past behavior leads to thoughts, feelings, and choices inconsistent with behaviorist expectations: the more unpleasant an initiation ritual to join a group, the better liked the group.29

Behaviorists thought they eliminated the mind from their explanations, for they focused on what they imagined to be law-like relationships between stimulus and effect. Animals respond to incentives, such as corn pellets or cash rewards, and this allows analysts to explain and predict behavior without reference to mental processes. However, as Chomsky observed, analysts cannot identify a stimulus without first identifying a response, in which case a stimulus is not a property of the environment but of the individual's beliefs and desires. This observation makes clear, as Chomsky notes, “that the talk of ‘stimulus control’ simply disguises a complete retreat to mentalistic psychology.”30

Rather than help analysts escape from psychology, stimulus-and-response approaches depend on understanding an actor's mental state.

Even if researchers control the environment and provide only one stimulus, how subjects respond to that stimulus depends not on its physical attributes but on the subjective understanding (or construal) of the stimulus.31

For construal, see Aronson et al. 1994, 16. For the importance of subjective understanding, see Mischel 1999; Ross and Nisbett 1991; Ross 1990; and Jervis 1976.

Researchers can reliably predict that a food-deprived chicken will respond to a lever that gives it food, or that a person will respond to a $10 bill on the ground by picking it up, but prediction becomes unreliable in slightly more complex settings. How students respond to very low (but passing) grades differs dramatically: some students work harder, some blame the exam, and some pump their fist and say “Yes!” Despite behaviorists' attempts to rely only on behavior, they nonetheless relied on folk psychology. Behaviorists eliminated from their explanations neither desires (I want that corn pellet) nor beliefs (at the end of the third maze on the left is food). Without knowing desires and beliefs, one cannot know what “works” as an incentive.

As with their colleagues in psychology, economists coveted scientific progress, which meant developing a causal theory of human behavior. Although marginal utility theory began by expressing in mathematical terms Bentham's view that the source of all human behavior is avoiding pain and seeking pleasure, economists believed that measuring hedonic states—my pleasure in consumption is two and a half times your pleasure—was an illusion.32

See Kahneman 2000; Lewin 1996, 1297–1308; and Rosenberg 1988, 66–68. Psychologists now study and measure hedonic notions of utility. See Mercer forthcoming.

Progress required abandoning the metaphysical and focusing on the material and observable. Switching to a simple ranking of preferences (ordinal utility) did not address the problem; economists continued to assume individual cardinal utility, which requires introspection to assign precise numbers to experienced utility. Samuelson's introduction of revealed preference theory appeared to solve the psychological problem.33 Instead of ranking preferences to explain behavior, Samuelson used behavior to rank preferences. Like behaviorists, economists denied the causal power of the mind without denying the existence of the mind.

Inferring preferences from behavior resulted in empirical and theoretical problems. Empirically, economists have found so little support for the axiom of revealed preference that it is “empirically useless.”34

Lewin 1996, 1316. See also Sen 1973, 243.

Theoretically, reliance on revealed preferences meant that economists abandoned causal theory. Revealed preferences explain action by inferring preferences from that action, just as folk psychology explains action by inferring desires and beliefs from that action.35 Both are teleological arguments. The problems of revealed preferences encouraged economists to junk both behaviorism and any claim about the psychological plausibility of their assumptions.36

Lewin 1996, 1318.

Whereas behaviorists believed that relying on mental phenomena was a mistake, economists believed that mental phenomena caused mistakes. Because psychology explains deviations from rationality, and given the assumption that psychological biases are idiosyncratic rather than systematic, psychology becomes unimportant with a large enough sample. Markets are free of psychology even if individuals are not. Friedman's famous “as if” assumption gave economists license to stop worrying about psychology and learn to love aggregate data.37 When explaining market behavior, psychology happily, perfectly, and conveniently washes out.38

See Rabin 2002, 678–79; and Rosenberg 1988, 75.

The emergence of behavioral (or psychological) economics reinforces the view that psychology explains mistakes. Behavioral economists attack neoclassical approaches for ignoring psychology, and typically reintroduce psychology to explain irrational behavior.39

See Shiller 2000; and Tirole 2002. For an exception, see Rabin and Thaler 2001.

The market's imperfections reflect individuals' psychological limitations.

The rational actor approach of microeconomics grounds rational choice theory.40

Like behaviorists and neoclassical economists, Satz and Ferejohn accept the existence of psychological states but believe these states are unnecessary to explain human action; rational choice theory has a “remote connection to psychology.”41 If Skinner is correct that “thinking is behaving,” then as Riker argues, analysts can infer preferences from behavior: “Choice reveals preference because one can infer backwards from an outcome and a consistent process to what the goals must have been to get to that outcome.”42

See Skinner 1974, 104; and Riker 1990, 174.

Intransitive preferences indicate either irrationality or a change in taste. In other words, analysts infer desires and beliefs from action. When those desires and beliefs are inconsistent with action, then the actor is either irrational or the actor has changed her desires. As Sen points out, the claim that revealed preferences explain behavior without reference to psychological assumptions is “pure rhetoric.”43

Sen 1973, 243–44.

Revealed preferences rely on the folk psychology of human action, as does expected utility theory. Given an actor's choices, analysts must either know beliefs about probability to determine utilities, or know utilities to determine beliefs about probabilities.44

Rosenberg 1988, 73–74.

Just as desires and beliefs explain and constitute action, analysts can measure probability beliefs or utilities only by assuming the validity of expected utility theory.45

Ibid.

Expected utility theory depends on the assumption that desires and beliefs explain actions. It depends on folk psychology.

Because rationalists reject the mind as causing behavior, the actor's environment carries the explanatory burden. Hardin advises that analysts should study the incentives that cause ethnic conflict, not the psychology of the participants.46

Hardin 1995, 8–9.

Achen and Snidal note that rational deterrence theory never refers to mental calculations because decision makers need not calculate: “If they simply respond to incentives in certain natural ways, their behavior will be describable by utility functions.”47 Incentives explain the behavior of actors engaged in ethnic cleansing, the behavior of decision makers in a nuclear crisis, or even, as Satz and Ferejohn note, the behavior of pigeons in a Skinner box: “Studies have shown that pigeons do reasonably well in conforming to the axioms of rational-choice theory.”48 Whether rational choice explains pigeon behavior, or vice versa, is beside the point.49

Operant conditioning provides many of the same basic predictions as microeconomics; see Alhadeff 1982.

Even if people are pigeons and incentives explain behavior, analysts cannot know what counts as an incentive without reference to the beliefs and desires of the animal. Adults, children, and pigeons respond differently to the opportunity to drink scotch, watch cartoons, or eat a corn pellet. Even if it were true that all humans are pleasure seekers and pain avoiders, this simple hedonic calculus restates the problem but does not solve it, unless one believes that humans have identical views of pain and pleasure. Behaviorists are still responding to Chomsky's observation that one's beliefs and desires determine what one views as an incentive.50

Rational deterrence rests on the manipulation of incentives, which are properties of individuals, not environments. Thus what is rational for me to do depends on what inferences I believe you will make from my actions, and this cannot be specified in the abstract. How you respond to my threats or promises depends on your image of me and of yourself.51

Treating beliefs as exogenous can be sensible but does not solve the problem of inference: What will you infer from my behavior? Relying on Bayesian updating works well when the new information is objective and subject to one interpretation: I'll recognize a poker chip as red even if I was expecting to see a green chip and will update my belief about the probable distribution of red and green chips in a bag. However, in most cases that interest students of international politics, new information is ambiguous, subject to multiple interpretations, and interpreted through one's beliefs, expectations, and desires. As Jervis has emphasized, people do not simply revise their preexisting beliefs based on new evidence, but they use their preexisting beliefs to interpret that evidence.52 Unless one is in the lab predicting the color of poker chips in a bag, to make a rational choice one needs to understand beliefs, desires, and their influence on the interpretation of evidence.
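To make the poker-chip case concrete, consider a minimal sketch (in Python, with hypothetical priors and bag compositions not drawn from the article) of the mechanical updating that is available only when evidence admits a single interpretation:

```python
# Minimal sketch of the poker-chip case described above: Bayesian updating
# works cleanly because the evidence (a chip's color) admits one
# interpretation. The priors and bag compositions are hypothetical.

# Hypotheses: the bag is either mostly green or mostly red.
priors = {"mostly_green": 0.7, "mostly_red": 0.3}
# P(draw a red chip | hypothesis) under each hypothetical composition.
likelihood_red = {"mostly_green": 0.3, "mostly_red": 0.7}

def update(priors, likelihoods):
    """Return posterior probabilities after one observation (Bayes' rule)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Observing a red chip shifts belief toward the "mostly red" hypothesis,
# even for an observer who expected green: the datum speaks for itself.
posterior = update(priors, likelihood_red)
print(posterior)  # {'mostly_green': 0.5, 'mostly_red': 0.5}
```

Nothing comparable is available when, as in most of international politics, the evidence itself must first be interpreted through beliefs, expectations, and desires.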

The problems with folk psychology led behaviorists, neoclassical economists, and rational choice theorists to rely on logic, deduction, and observable behavior. Without denying the existence of the mind, scholars in these traditions view the mind as exogenous to the study of rationality. Yet incentives depend on beliefs and desires, revealed preferences cannot be causal, and making preferences (or desires) exogenous still requires knowledge of beliefs and action to assess rationality. Analysts cannot hope to exclude the mind from rationality if their explanations of rational behavior depend on the mind. Eliminating the mind from one's explanations would be an enormous accomplishment; inserting the mind by assumption is not. Of course, not all approaches to rationality depend on folk psychology. Often statistics or probability theory provides correct (or rational) answers to problems. As I discuss in the next section, this approach to rationality dominates much of the social psychology literature on judgment and helps to explain why political psychologists, like rationalists, believe that psychology explains mistakes.

Political Psychology and Mistakes

Whereas neoclassical economics still dominates economics departments, the 1960s' cognitive revolution overthrew behaviorism in psychology and reintroduced the mind as key to explaining not behavior, but mistakes. Bruner introduced cognition to psychology with his research on how motivation, such as expectations or desires, influences perceptions.53

This research led to the “New Look” in social psychology, which flourished in the 1950s. Bruner and Goodman 1947.

Ten years later, Heider developed attribution theory by observing that people predictably overattribute behavior to disposition and underweight situation.54 Heider's early observations led to what is known as the “fundamental attribution error.”55 After losing its mind to behaviorism, psychology welcomed the mind back into its explanations, but these explanations focused on deviations from rationality. Tversky and Kahneman follow in this research tradition and have influenced how political psychologists approach rationality.

Tversky and Kahneman's “biases and heuristics” approach to judgment demonstrates that people systematically deviate from rational standards.56

They premised their research on reasoning backwards from mistakes (in judgment) to errors (in process). The biases and heuristics approach compares subjects' answers to reasoning problems to the normatively correct answer that statistics or probability theory provides. People routinely answer reasoning questions incorrectly. Researchers then identify the difference between the normative answer and the actual answer as a reasoning bias needing explanation. Shortcuts in the reasoning process—or heuristics—explain the bias. Tversky and Kahneman found that the statistically savvy avoid elementary errors, but “their intuitive judgments are liable to similar fallacies in more intricate and less transparent problems.”57 Tversky and Kahneman created well-defined experiments that allowed them to use statistics and probability theory, not psychology, to create a rational baseline; they then used psychology to explain why most people, most of the time, do not provide the normatively correct answers. The biases and heuristics program has cataloged such an impressive array of pervasive judgmental errors and mistakes that Funder, a critic of the approach, entitled his review of a social psychology text in this tradition, “Everything You Know Is Wrong.”58
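As one illustration of this research design, consider Tversky and Kahneman's well-known conjunction problem, in which probability theory alone supplies the baseline. The sketch below uses hypothetical subject responses, not real data:

```python
# Illustration of the biases-and-heuristics research design: probability
# theory supplies the normative baseline, and deviations from it count as
# biases. The conjunction rule says P(A and B) can never exceed P(A).
# The numbers below are hypothetical subject responses.

def violates_conjunction_rule(p_a: float, p_a_and_b: float) -> bool:
    """A judgment is incoherent if it rates a conjunction as more
    probable than one of its conjuncts."""
    return p_a_and_b > p_a

# A subject who rates "bank teller AND feminist" (0.6) as more likely
# than "bank teller" (0.4) deviates from the baseline; the deviation is
# what a heuristic (here, representativeness) is then invoked to explain.
print(violates_conjunction_rule(p_a=0.4, p_a_and_b=0.6))  # True
```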

The biases and heuristics research program encouraged political psychologists to focus on procedural errors that led to mistaken judgments. However, extending this approach to study decision making in international politics results in two problems. The first problem is identifying correctly the procedural error that led to the mistaken judgment. Analysts can attribute bad decisions to the need for cognitive consistency, improper assimilation of new data to old beliefs, the desire to avoid value trade-offs, groupthink, idiosyncratic schemas, motivated or emotional bias, reliance on heuristics because of cognitive limitations, incorrect use of analogies, the framing of information, feelings of shame and humiliation, or a miserable childhood.59

For cognitive consistency and assimilation, see Jervis 1976; tradeoffs, see Lebow 1981; groupthink, see Janis 1982; idiosyncratic schemas, see Goldgeier 1994, and Larson 1985; motivated biases, see Lebow and Stein 1987; heuristics, see Mintz et al. 1997, and Simon 1982; analogies, see Houghton 2001, and Khong 1992; frames, see McDermott 1998; humiliation, see Steinberg 1996; and childhood, see George and George 1964, and Post 2003.

Discovering the correct error is difficult in an uncontrolled field setting when so many errors are possible.

The second problem is identifying the mistaken judgment. Was President George W. Bush mistaken to believe that Iraqi President Saddam Hussein posed an imminent threat to the United States in spring 2003? If Bush was correct, then analysts do not need to bother with heuristics (unless he was right for the wrong reasons).60

Rosen argues that Kennedy adhered to a nonnormative decision-making process during the Cuban Missile Crisis. Rosen 2004.

If he was wrong, then heuristics might be helpful (unless he knew he was wrong and simply lied). As Jervis observed, “It is hard to know what a person's perceptions were and even harder to know whether they were correct.”61 Experts can even disagree over the correct answer to statistical problems. For example, Gigerenzer contends that Bayesian approaches provide one answer to Tversky and Kahneman's logic problems, but a statistician using a “frequentist” approach would arrive at a different answer.62

A “frequentist” rejects the Bayesian view of probability as a subjective measure of belief, and instead uses the relative frequency of repeatable events to explain probability. See Gigerenzer 1991; and Hammond 1996.

Because the answers are equally valid, what a Bayesian views as a judgmental mistake needing explanation, a frequentist might view as an accurate judgment. One cannot use statistics or probability theory to provide a rational baseline for problems that lack unique logical or statistical solutions.

When logic yields multiple answers to the same problem, analysts must look elsewhere, such as to folk psychology, to assess rationality. Assessing the rationality of a decision-making process demands that three questions be addressed: What did the actor want, believe, and do? After establishing an actor's desires, beliefs, and actions, analysts can address how well the actor adhered to the rules of inference.63

Did the actor consider alternative explanations or hunt for disconfirming evidence? Analysts might also ask how well grounded were the actor's beliefs, or how sensible were the actor's desires. Whereas social psychologists can establish rationality without reference to the mind, in most cases political psychologists establish rationality only after determining an actor's beliefs and desires. In other words, political psychologists typically rely on folk psychology to understand rationality.

Cognitive biases and emotion can be sources of mistakes, and understanding how cognitive biases and emotion subvert rationality is important. Yet the biases and heuristics research design presupposes unique solutions to statistical problems. In the absence of unique solutions, both rationalists and political psychologists must first establish an actor's desires and beliefs before judging behavior as rational. Only with the help of cognitive dissonance theory can one acknowledge that rationality depends on mental phenomena while simultaneously disparaging mental phenomena as a source of irrationality. Because rationality depends on psychology, psychology must do more than explain mistakes.

Psychology and Rational Baselines

The belief that psychology explains mistakes implies that psychological explanations require rational baselines. Analysts must know what is rational before they can know what is not rational. Rational models provide standards that, ceteris paribus, explain how one should reason and what constitutes a rational judgment. For example, because of misconceptions about chance, people believe that a coin toss is more likely to result in the pattern H-T-H-T-H-T than in H-H-H-T-T-T.64

This belief is a mistake. A coin does not have a memory and probability theory tells one what to expect. If it turns out that a particular coin is more likely to land H-T-H-T-H-T, then the problem is with the coin or the toss, but not with the laws of probability. Similarly, rationalists argue that analysts can understand the influence of psychology as a cause of war only after establishing an “ideal” explanation of how rational actors should behave.65 When a rational process fails to yield an accurate judgment, the problem is with the actor or the environment, but not with the baseline itself. I argue, to the contrary, that rational models of problems more complex than coin tosses will sometimes be a source of mistakes. In some cases, analysts can turn to psychology to prevent mistakes that rational baselines cause.
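The coin toss is one of the rare problems with a unique, psychology-free answer; a short sketch (illustrative, not from the article) makes the arithmetic explicit:

```python
# Any specific sequence of six fair tosses has probability (1/2)**6,
# so H-T-H-T-H-T is exactly as likely as H-H-H-T-T-T, whatever
# intuitions about "randomness" suggest.
from itertools import product

sequences = list(product("HT", repeat=6))
p = 1 / len(sequences)  # each of the 2**6 = 64 sequences is equally likely

print(p)                  # 0.015625
print(p == (1 / 2) ** 6)  # True: alternation confers no advantage
```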

For example, rational reputation models advise decision makers that reputations form predictably and systematically, and are worth defending.66

Common sense, formal models, and logic tell rationalists how reputations form and thus how rational actors should behave. What is striking about the rational reputation literature is that it should even exist. Dictionaries define a reputation as a judgment about someone's character, yet character or disposition is psychology's home turf. Riker understood that it makes “no sense in terms of rational choice theory to attempt to attribute consistent character traits to actors.”67 Driving the cause of war and peace back into the personality of decision makers smacks of “mentalism,” which rationalists and behaviorists gave up long ago in the name of science. Behaviorists would say that food, not an internal state of hunger, causes a pigeon to peck a bar. Or, brittle glass breaks because it is hit by a stone, not because of an inherent quality of brittleness.68

Moore 1999, 64–65. This behaviorist explanation works only if one takes the characteristics of glass for granted; a rock dropped on a sheet of steel has a different effect. Jervis, personal communication.

Rationalists praised behaviorists for liberating psychology from the mind and creating a true science.69

For example, Riker 1962, 7.

In the spirit of the behaviorists, rationalists improved the concept of reputation by making it rational, which meant eliminating the psychology. Once purged of all things mental, rationalists can use reputation to address everything from market failure to collective action and commitment problems.70

See respectively, Keohane 1984; Ostrom 1998; and Fearon 1995.

Insisting that a rational baseline precede a psychological explanation makes sense if that baseline first, is free of psychology, and second, provides the best means to a desired end. Rational reputation arguments fall short on both counts.

First, eliminating psychology from a psychological concept also eliminates its explanatory power. For example, some rationalists define reputation as a judgment about someone's situation.71

See Sartori 2002, 136; and Downs and Jones 2002, 101.

In this case, a reputation is a function not of mental properties but of the costs and benefits of a situation. If a “mindless” cost-benefit calculus explains behavior, then the situation, not a reputation, explains behavior. “Situational reputation” is an oxymoron. Reputation's explanatory power depends on explaining either why an actor behaves similarly in different situations or why different actors respond differently to the same situation. When situational incentives govern—such as when people flee a burning building—reputation is irrelevant. When situational incentives do not govern, or when an actor behaves contrary to compelling situational incentives—such as when someone stays inside a burning building—then an actor's beliefs, desires, subjective calculations of costs and benefits, or reputation may be relevant.72 Draining psychology from a psychological concept guts its explanatory power.

Second, an analyst's reliance on unacknowledged psychological assumptions can cause mistakes. Just as behaviorists used terms such as “drive” to replace ordinary terms such as “desire,” rationalists use elaborate definitions of reputation to evade psychology. For example, a reputation becomes a belief about the probability that another is a particular “type.” A “type” is a player's private information about its expected utility for particular outcomes. Its expected utility is its payoff for a particular outcome, and observers infer probable payoffs from behavior.73

Morrow 1994, 55, 219, 242.

In order for a revealed reputation argument to explain or predict behavior, payoffs must be a function, at least in part, of an actor's disposition: beliefs, desires, character, and all things internal, mental, and unobservable. If an actor's payoffs are a function of the environment, then one is stuck with the oxymoron of “situational reputation.” Substituting “type” for disposition allows rationalists to rely on psychological assumptions (such as the existence of a type that always keeps or always breaks promises) while avoiding a psychological term.74

Ibid., 281.

This substitution matters if the unacknowledged psychological assumption causes empirical inaccuracies.

Psychologists once thought they could type personality the way one types blood, but this view gave way to mounting evidence that a person's behavior correlates modestly from situation to situation. In the 1960s, psychologists suggested that subjective understandings of situations explained responses to those situations.75

Mischel 1999, 413–16.

Actors behave characteristically in particular situations as long as the actor perceives the situation as similar. It is the actor, not the analyst, who decides which situations, or which features of a situation, are similar to earlier situations.76

Ibid., 429. See also Jervis 1976. For a review of personality psychology, see Funder 2001.

How one responds to different stimuli (or to different situations) depends on how one construes those stimuli. Because rationalists deny their reliance on psychology, they do not address their crucial assumption that disposition is diagnostic of future behavior. Although assumptions can be wrong but useful, rational reputation models fail repeated empirical tests.77 Inaccurate predictions should encourage rationalists to revisit their psychological assumptions.

The two observations above—that psychological concepts drained of psychology lose explanatory punch and that neglect of psychological assumptions can result in mistakes—reinforce a final point: no reason exists to privilege rationalist over psychological explanations. A correspondence approach and a coherence approach to rationality can yield competing means to the same desired end, and no principled reason exists to prefer one over the other. For example, Mercer argues that because reputations for resolve form rarely, rational actors should not fight wars over reputation.78

Mercer uses attribution theory to understand when people will use dispositional explanations to account for someone's behavior. The only way to know whether reputations should form, or to understand when reputations do form, is by understanding how people think. In the case of reputation formation, or more generally in problems of strategy (such as finance or nuclear deterrence), knowing how people tend to reason and tend to behave is crucial to making accurate judgments.79

For psychological approaches to finance, see Shleifer 2000; for deterrence, see Jervis 1984, and Lebow 1987.

Predicting reputations is different from predicting the distribution of heads or tails in a coin flip. Coins are mindless; people (generally) are not. The point is not that rational reputation arguments must be wrong, but that psychological explanations might be right. Observing that Mercer's reputation argument might be correct means recognizing that, in principle, psychological explanations can correct mistakes that rational models cause.

Even on grounds of parsimony and generalizability, no reason exists to privilege rationalist explanations over psychological ones.80

Rabin points out that psychological approaches can be “simpler, more tractable, and more useful (less post hoc) explanations than existing models…. [T]here is no inherent negative correlation between psychological realism on the one hand, and taste for tractability, formalism, parsimony, and simplification on the other hand.”81

Rabin 2002, 673–74.

For example, loss aversion and mental accounting may offer a simpler and superior alternative to expected utility theory, and a simple attribution theory may beat complex rational reputation models.82

For the logical but absurd consequences of adhering to expected utility theory, see Rabin 2000. For further discussion and an alternative, see Rabin and Thaler 2001; and Camerer and Thaler 2003.

Parsimony and generalizability are attributes of theories, not of disciplines. The belief that rationalist models necessarily provide the best means to a given end, that they are free of psychology, and so should precede psychological explanations, is wrong. Coherence models can be powerful and revealing and helpful, but so too can correspondence models, and the use of one does not depend on the existence of the other.

Emotion and Accurate Judgments

The belief that psychology explains mistakes implies that psychology cannot explain accurate judgments. Scholars typically view cognitive biases and emotion as undermining rationality, which is why rationality is thought to be free of psychology and why psychology is thought to require a rational baseline. By eliminating the mind, analysts can determine what constitutes rational behavior. This view is mistaken: rational models typically rely on psychological assumptions; rational baselines need not precede psychological explanations; and, as I argue in this section, nonnormative (or nonrationalist) decision-making processes can provide the best means to a given end. I use emotion to demonstrate the value of a correspondence approach to rationality.

Political scientists generally treat emotion as distinct from, and ideally subordinate to, rationality.83

For exceptions, see McDermott forthcoming; Marcus 2003; and Crawford 2000.

As a character in Skinner's utopia put it, “We all know that emotions are useless and bad for our peace of mind and our blood pressure.”84 Political scientists typically emphasize the primacy of cognition and allow emotion to play a role only when needed to explain irrational choice. Scholars view emotion as a cost or benefit to be modeled, a tool to be manipulated to send signals or impart credibility, or as an inevitable but unfortunate aspect of human decision making.85

See respectively, Becker 1976 and Ostrom 1998; Frank 1988 and Hirshleifer 1993; and Elster 1989 and 1999.

Frank, an economist, argues that people have emotional predispositions that commit them to behavior that, while not in their immediate self-interest, may be in their long-term interest: emotions are “merely incentives to behave in a particular way.”86 Frank's observation that emotion can be adaptive by providing people with cues to help navigate their world is insightful. However, recent work in neuroscience suggests that emotion is not merely a tool of rationality but instead is necessary to rationality. This perspective encourages a new look at international relations concepts. I discuss how emotion makes “trust,” “identity,” and solutions to collective action problems possible.

Developments in neuroscience—such as brain imaging techniques that allow scientists to examine emotion in the brain—have contributed to the return of emotion to psychology and the introduction of neuroscience to economics.87

See Glimcher 2003; and Wall Street Journal, 15 November 2002, B1.

After decades of research and exposure to patients such as “Elliot,” the neuroscientist Damasio rejected the traditional view that separate neural mechanisms operate for emotion and for reason.88 After Elliot had a brain tumor removed, he underwent a radical change in personality and was no longer able to hold a job, but could not collect disability benefits because health professionals declared him to be of sound mind and body. Damasio was asked to determine what, if anything, was wrong with Elliot. Damasio found that Elliot was a smart and coherent young man who retained memory, language, and other cognitive skills, but lost his ability to “feel” or to have emotion. Elliot was “the coolest, least emotional, intelligent human being one might imagine, and yet his practical reason was so impaired that it produced … a succession of mistakes, a perpetual violation of what would be considered socially appropriate and personally advantageous.”89

Ibid., xi.

Damasio had other emotionally flat patients with similar problems. For example, one patient, when asked to decide between two dates for his next appointment, was stuck in a “tiresome cost-benefit analysis, an endless outlining and fruitless comparison of options and possible consequences.”90

Ibid., 193.

The patient spun out a mental decision tree for nearly thirty minutes until he was told that the second of two dates was better.

Damasio's research on the brain is not a parlor trick. People without emotion may know they should be ethical, and may know they should be influenced by norms, and may know that they should not make disastrous financial decisions, but this knowledge is abstract and inert and does not weigh on their decisions. They do not care about themselves or about others, and they neither try to avoid making mistakes nor are they capable of learning from their mistakes. They “know” but cannot “feel” and, consequently, they make mistakes.91

Ibid., 45.

Emotion is necessary to rationality and intrinsic to choice. Emotion precedes choice (by ranking one's preferences), emotion influences choice (because it directs one's attention and is the source of action), and emotion follows choice (which determines how one feels about one's choice and influences one's preferences). Yet theories of rational decision making imagine a process free of emotion, reintroducing it only to explain mistakes. Emotion can, of course, lead to mistakes, but so can cognition, and no one suggests that cognition is only a source of mistakes.

If rationality depends on emotion, then correspondence models of rationality may have an advantage over coherence models. In this case, idealized (and emotionless) models of decision making may be a source of mistakes. For example, the rational choice of defection in a PD game, or in a collective action problem, is normatively self-defeating (because it reliably fails to maximize expected utility) and positively wrong (because cooperation is common). Since the mid-1970s, experimentation on PD and strategically equivalent games (such as commons dilemmas) has invariably found what one researcher calls “rampant cooperation.”92

People who irrationally cooperate in PD commonly do better than those who rationally defect. In a variety of games of strategic interaction, the rational choice is typically the worst choice.93

See Colman's discussion of PD, and of Matching, Centipede, and Ultimatum games. Colman 2003.
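A sketch with conventional illustrative payoffs (not taken from the article) shows why defection is “rational” yet self-defeating:

```python
# A standard prisoners' dilemma payoff table (illustrative numbers only):
# mutual defection is the unique equilibrium of the one-shot game, yet it
# leaves both players worse off than mutual cooperation.
# payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
payoffs = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # cooperator is exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

# Defection strictly dominates: whatever the other player does,
# defecting pays more...
assert payoffs[("D", "C")][0] > payoffs[("C", "C")][0]  # 5 > 3
assert payoffs[("D", "D")][0] > payoffs[("C", "D")][0]  # 1 > 0

# ...yet mutual defection yields less than mutual cooperation, which is
# why "rational" play is self-defeating and cooperators can do better.
assert payoffs[("D", "D")][0] < payoffs[("C", "C")][0]  # 1 < 3
print("dominant strategy: defect; mutual outcome: worse than cooperation")
```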

Although neurologists and neuroeconomists suggest that humans may be hardwired to cooperate, I focus on how emotion solves problems of strategic interaction.94

For the neuroscience of cooperation, see Rilling et al. 2002. For a review of evolutionary models of cooperation and reputation, see Fehr and Fischbacher 2003. For neuroscience and political psychology, see Cacioppo and Visser 2003. For the use of neuroscience to examine decision making and war termination, see Rosen 2004.

Collective action becomes a problem when the benefits of noncooperation are greater than the benefits of cooperation with the group, unless no one else cooperates with the group, in which case the costs of noncooperation exceed the costs of cooperation.95

If incentives encourage people to be selfish, but collective selfishness undermines desired outcomes or important societal values, how do people solve these problems? Changing the payoffs is one solution, but this abolishes rather than solves the problem.96 Taylor notes “external” and “internal” solutions to collective action problems.97 External solutions include reliance on political entrepreneurs or on property rights, whereas internal solutions rely on, for example, a sense of community. Like Taylor, I emphasize community or identity as an internal solution to collective action problems.

Trust is important to solving collective action problems, and I suggest that emotion is the basis of trust. As noted by philosophers, trust is a feeling of optimism in another's goodwill and competence.98

Political scientists suggest that trust means giving the other the benefit of the doubt, or is a feeling that the other has benevolent intentions.99

See Putnam 2000, 136; and Larson 1997, 20.

A social psychologist finds that trust requires feelings of “warmth and affection.”100 I suggest that trust is an emotional belief. Emotional beliefs are generalizations about internal, enduring properties of an object that involve certainty beyond evidence.101 Trust requires certainty beyond observable evidence and reliance instead on how one feels about someone. For example, how people feel influences their interpretations of another's behavior. People give the benefit of the doubt to those they trust, and doubt anything beneficial done by those they distrust. While using internally generated feelings as evidence for trust departs from standard models of rationality, people who are incapable of using their feelings are irrational, and those who ignore their feelings and base cooperation on observable facts cannot trust.102

For feelings as evidence, see Clore and Gasper 2000, 25.

Rationalists do to trust what they do to reputation.103

The belief that psychology explains mistakes, and thus the fear of “going mental,” leads rationalists to drain psychology from psychological concepts. As the philosopher Becker noted, rational accounts of trust appear to eliminate that which they claim to describe.104 Rationalists drain the psychology from trust by turning it into a consequence of incentives. Emphasizing incentives as the basis for trust eliminates both the need for trust and the opportunity to trust. If trust depends on external evidence, transparency, iteration, or incentives, then trust adds nothing to the explanation.105 Equally importantly, emphasizing incentives as the basis for trust eliminates the opportunity for trust to form. Trust is a feeling toward someone, not toward an inanimate object or toward a situation. I trust my friends but rely on my car.106 If observers attribute cooperation to the environment rather than the person, then trust cannot—and need not—develop. My argument is not that incentives are unimportant—though Chomsky's observation that incentives depend on beliefs and desires merits repeating—but that incentive-based behavior is not a substitute for trust-based behavior.

I argue that identity produces emotion that creates trust, which then solves collective action problems. Social identity theory (SIT) dominates psychological approaches to group—not individual—behavior, and explains emotion's role in identity.107

For a review of SIT, see Brown 2000.

Tajfel, a psychologist, sought to understand the minimal conditions necessary to trigger intergroup competition.108 Surprisingly, merely placing someone in an imaginary group is sufficient to trigger out-group discrimination. The experimental results led Tajfel and Turner to create SIT: when I identify with a group, the group becomes a part of me and, as a result, I seek to view my group (and myself) as different and better than other groups.109

Tajfel and Turner 1986. Putting the group in the individual avoids the problem of aggregating preferences (which in most cases cannot be done rationally). See Arrow 1963.
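The aggregation problem this footnote invokes is Arrow's impossibility theorem. A standard compact statement (my gloss, not the article's own formulation) runs as follows:

```latex
% Arrow (1963), standard statement; not the article's own formulation.
\textbf{Theorem (Arrow).} With three or more alternatives, no social
welfare function
\[
  F : (\succsim_1, \ldots, \succsim_n) \longmapsto \succsim
\]
mapping individual preference orderings onto a single social ordering
can satisfy all four of: unrestricted domain; weak Pareto
($x \succ_i y$ for every $i$ implies $x \succ y$); independence of
irrelevant alternatives; and non-dictatorship.
```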

Although most recent portrayals of SIT marginalize or drop emotion, Tajfel recognized the importance of emotion to identity.110

For emotion in identity, see Gries 2004. For neglect of emotion in SIT, see Mercer 1995.

Tajfel defined social identity as “that part of an individual's self-concept which derives from his knowledge of his membership of a social group (or groups) together with the value and emotional significance attached to that membership.”111

Tajfel 1978, 63 (emphasis in the original).

A strong feeling of group identity leads to sharing, cooperation, perceived mutuality of interests, and willingness to sacrifice personal interests for group interests. It is only because people invest groups with emotional significance that they care about them; if people do not care, they neither cooperate nor compete.

The emotion in identity provides the basis for trust. The psychologist Brewer observed that group members “uniformly evaluate members of their own group as high relative to those from other groups on characteristics such as trustworthiness, honesty, and loyalty. This consistency in the content of intergroup perceptions has led to the speculation that, in addition to serving functions related to positive personal identity, in-group bias also serves to solve a pervasive social dilemma—that of the problem of interpersonal trust.”112

Brewer 1981, 351–52 (emphasis in the original).

Solving the problem of interpersonal trust solves collective action problems. Because identification with a group produces “positive emotions such as admiration, sympathy and trust,” individuals will forgo their short-term interests and cooperate to solve a common problem.113 As Tyler and his colleagues have discussed, the evidence linking identity and trust to solving collective action problems is extensive.114 Trusting individuals cooperate (by restricting their consumption of a shared resource) even when they have information that others are not restricting their own consumption.115 Others have shown that identity is crucial to reducing competition within the group, and that identification with a group is linked to a willingness to conserve communal resources.116 Even when it is in the interest of cooperators to leave their group, they do not.117

Orbell et al. 1988. For emotion's influence on out-group discrimination in minimal group settings, see DeSteno et al. 2004.
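A toy simulation can illustrate the pattern the studies cited above report (all parameters are my own illustrative assumptions, not drawn from those studies): agents who restrain their draws on a shared resource, regardless of what others take, sustain the pool, while payoff-maximizers who take whatever the pool allows exhaust it.

```python
# A toy commons sketching the findings cited above. All parameters
# are illustrative assumptions, not drawn from the studies themselves.
# Trusting identifiers restrain their draws even when others do not;
# payoff-maximizers take whatever the pool allows.

def run(trusting_share, rounds=20, n=10, pool=100.0, capacity=100.0):
    trusting = int(n * trusting_share)
    for _ in range(rounds):
        draws = [1.0 if i < trusting else min(3.0, pool / n) for i in range(n)]
        pool = max(0.0, pool - sum(draws))
        pool = min(capacity, pool * 1.15)  # regrowth, up to carrying capacity
    return pool

print(round(run(trusting_share=1.0), 1))  # restrained community: pool sustained
print(round(run(trusting_share=0.0), 1))  # payoff-maximizers: pool collapses
```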

The emotion in identity makes trust available to solve collective action problems.

Because identity demands discrimination, relying on identity and trust to solve collective action problems is a double-edged sword. Tajfel singled out the esteem emotions as the driving force behind competition. It is the feeling of self-esteem, or pride, that makes people in groups view their group as different from, and better than, other groups. In general, the more positively one feels about one's group, the more negatively one feels about rival groups.118

In-group trust does not require out-group distrust—which is a feeling of pessimism about another's goodwill and competence—but it does require one to distinguish between trusting one's group and not trusting an out-group.119

For distrust, see Jones 1996. For trust and out-groups, see Brewer 1999; and Brewer and Brown 1998, 575.

Whether discrimination is a consequence of animus or an unintended consequence of preference for one's group, the result for the out-group is similar. After slaughtering 12,000 heathen in a day, Joshua gave thanks to God by carving the commandment, "Thou shalt not kill." Joshua was not being hypocritical, for the prohibition applied only to the in-group, just as Moses preached to "love thy neighbor," as long as the neighbor was a member of the tribe.120 Preferential treatment of one's own group is a source of peace and cooperation within the group, and often of distrust and hostility between groups.121

Emotion drives in-group cooperation and out-group discrimination. If this perspective is correct, then an emphasis on material incentives, abstract cognitive beliefs, or incomplete information is mistaken.122

Stuart Kaufman reaches similar conclusions in his study of ethnic conflict. Kaufman 2001.

The emotion in identity explains why group members may trust each other and why they may distrust out-group members. For example, should analysts view alliances between states as instrumental, or should they sometimes view them as reflecting a security community of shared values, beliefs, and trust? Both realists and constructivists eliminate emotion from their assessments. Constructivists, for example, view Deutsch's “we feeling” (which provides the basis for a security community) as indicating purely cognitive beliefs free of affect.123

See Adler 1997, 255, 263; and Wendt 1999, 319.

Because political scientists typically view emotion as irrational, it is not surprising that constructivists, no less than rationalists, are eager to purge it from their explanations.124 Instead of running from emotion, recognizing emotion's role in trust and in identity may help analysts better understand how alliances might work and how security communities might form. Emotion in identity—or “we feeling”—may be key to explaining when security communities form and, at least according to realists, why they form so rarely.

A correspondence approach to rationality ought to be better than a coherence approach when the explanation for a phenomenon rests on psychological concepts such as trust, identity, or reputation. For example, “justice” depends on emotion. As Solomon argued, “The idea that justice requires emotional detachment, a kind of purity suited ultimately only to angels, ideal observers, and the original founders of society has blinded us to the fact that justice arises from and requires such feelings as resentment, jealousy, and envy as well as empathy and compassion.”125

If emotion subverts rationality, then any conception of justice that depends on emotion must capture irrational behavior. Rationalists drain emotion from justice and assume self-interested actors that have preferences over outcomes rather than preferences over strategies. Rational people care only about the realization of their preferences and are indifferent to how an outcome is obtained; they care about their final payoffs but not the payoffs of others.126 Yet preferences over process appear to be just as important. How a decision is made, what the other's motivation appears to be, and whether one is treated with respect, consideration, and fairness often matter more than the outcome.127

See Tyler and Degoey 1996; Rabin 2002; Camerer and Thaler 2003; and Kier 2004. See also, for experimental work on the Ultimatum game, Colman 2003; and Henrich et al. 2001.
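The Ultimatum game results cited above make the contrast concrete. In a minimal sketch (the rejection threshold is an assumed value, for illustration only), a responder who cares only about final payoffs accepts any positive offer, whereas a responder with preferences over process rejects offers that feel insulting, even at a cost to herself.

```python
# Ultimatum game sketch. The proposer splits a pie; the responder
# accepts (both get the split) or rejects (both get nothing).
# The "fairness threshold" below is an illustrative assumption.

PIE = 10

def outcome_responder(offer):
    """Cares only about final payoffs: any positive offer beats zero."""
    return "accept" if offer > 0 else "reject"

def process_responder(offer, fairness_threshold=0.3):
    """Cares how the outcome is reached: rejects insulting offers
    even at a cost to herself."""
    return "accept" if offer >= fairness_threshold * PIE else "reject"

offer = 1  # proposer keeps 9, offers 1
print(outcome_responder(offer))  # "accept" -- the standard rational baseline
print(process_responder(offer))  # "reject" -- the pattern found in experiments
```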

Although rationalists can use nonstandard hedonic values in their utility functions, doing so risks creating unfalsifiable arguments and, even worse, acknowledging a reliance on psychology. While political psychologists recognize that justice depends on emotion, this dependence leads them to view a concern for justice as a deviation from rationality.128 It is as if a rational person must be greedy but indifferent to justice.

Recognizing emotion's central role in justice has policy implications. U.S. forces in Iraq have accidentally killed thousands of Iraqi innocents. Their family members demand justice. A rationalist solution might emphasize monetary compensation, because rational people are self-interested and have preferences over outcomes. A psychological solution would attend to process: expressions of regret, recognition that a grave injustice occurred, and changes in procedure to make accidental deaths less likely. Addressing injustice in Iraq depends on whether Iraqis are driven more by greed—better known as self-interest—or, for example, by anger.129

Philosophers in the early seventeenth century began to view greed as so powerful that, when harnessed with reason, it might tame the other passions. Thus began the semantic shift from the emotion “greed” into emotionless (and virtuous) “interest.” Hirschman 1977.

The mistake is viewing greed as rational and anger as irrational. Both are emotions, and one is no less rational than the other. Draining emotion from justice leads analysts to misunderstand the concept and, perhaps, to pursue self-defeating policies in Iraq.

Elster suggests that because emotion undermines rationality, the only problems emotion solves are those it causes in the first place.130

Emotion can cause problems (such as out-group discrimination), but emotion can also solve problems that rational models cause (such as the rational strategy of defection in PD). The concepts “identity” and “trust” have causal power only when analysts recognize their dependence on emotion. Identification without feeling implies a cold, neutral, bloodless observation; it inspires no action. Trust without emotion implies an expectation of trustworthiness based on incentives; trust adds nothing if incentives explain cooperation. Emotion makes identity, trust, and solutions to collective action problems possible. Political scientists should view emotion not only as a problem to be overcome, but also as a solution that can help individuals and groups overcome problems. Damasio has shown that rationality depends on emotion, and I have discussed how emotion may solve collective action problems and may contribute to understanding alliance formation and justice. Psychology does more than explain mistakes. It also helps prevent them.
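Continuing the earlier Prisoner's Dilemma sketch (same illustrative payoffs, with a stylized assumption that in-group trust produces cooperation), the internal solution leaves the payoffs untouched: a trusting pair each earns 3 rather than the 1 that mutual defection yields, so the dilemma is solved rather than abolished.

```python
# The internal solution, sketched: payoffs are unchanged, yet in-group
# trust sustains cooperation. The trust-implies-cooperation rule is a
# stylized assumption for illustration.

pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(row_trusts, col_trusts):
    """Trusting in-group members cooperate; others play the
    dominant strategy, defection."""
    moves = ("C" if row_trusts else "D", "C" if col_trusts else "D")
    return pd[moves]

print(play(True, True))    # (3, 3): trust sustains cooperation
print(play(False, False))  # (1, 1): the rational baseline, mutual defection
```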

Conclusion

Apprehending rationality is hard, and it is made harder by the belief that psychology explains only mistakes. This belief results in three myths: that rational explanations are free of psychology, that psychological explanations demand a rational baseline, and that psychology cannot explain accurate judgments. To the contrary, rational models usually rest on psychological assumptions, a correspondence model does not depend on the existence of a coherence model, and psychology can explain accurate judgments. For example, psychological economists use loss aversion and mental accounting to prevent mistakes that expected utility theory causes, neuroscientists use emotion to explain rational behavior, political psychologists use attribution theory to explain reputation formation, and I use emotion in trust and identity to solve collective action problems. The point is not that correspondence models should replace coherence models, but that no single approach has a lock on understanding rationality. In some important contexts (such as in strategic choice) or when using certain concepts (such as reputation, trust, identity, or justice), a correspondence approach to rationality may beat a coherence approach.

Rejecting the belief that psychology explains only mistakes should help rationalists overcome their fear of “going mental” and thus encourage them to pay attention to the psychological assumptions that often drive their explanations. Rejecting the belief should also encourage political psychologists to stop conceding rationality to the rationalists. Political psychology is—or at least should be—as much about accurate judgments as inaccurate ones. Finally, rejecting these beliefs makes it possible for rationalists and political psychologists to view emotion and cognition as contributing to rational behavior rather than only undermining it.

References

Abelson, Robert P. 1994. A Personal Perspective on Social Cognition. In Social Cognition: Impact on Social Psychology, edited by Patricia G. Devine, David L. Hamilton, and Thomas M. Ostrom, 15–37. New York: Academic Press.
Achen, Christopher H., and Duncan Snidal. 1989. Rational Deterrence Theory and Comparative Case Studies. World Politics 41 (2):143–69.
Adler, Emanuel. 1997. Imagined (Security) Communities: Cognitive Regions in International Relations. Millennium 26 (2):249–77.
Alhadeff, David A. 1982. Microeconomics and Human Behavior: Toward a New Synthesis of Economics and Psychology. Berkeley: University of California Press.
Alt, James E., and Kenneth A. Shepsle. 1990. Editors' Introduction. In Perspectives on Positive Political Economy, edited by James E. Alt and Kenneth A. Shepsle, 1–5. Cambridge: Cambridge University Press.
Aronson, Elliot, Timothy D. Wilson, and Robin M. Akert. 1994. Social Psychology: The Heart and the Mind. New York: Harper Collins.
Arrow, Kenneth J. 1963. Social Choice and Individual Values. New York: John Wiley and Sons.
Becker, Gary S. 1976. The Economic Approach to Human Behavior. Chicago: University of Chicago Press.
Becker, Lawrence C. 1996. Trust as Noncognitive Security about Motives. Ethics 107 (1):43–61.
Brewer, Marilynn B. 1979. In-Group Bias in the Minimal Intergroup Situation: A Cognitive-Motivational Analysis. Psychological Bulletin 86 (2):307–24.
Brewer, Marilynn B. 1981. Ethnocentrism and Its Role in Interpersonal Trust. In Scientific Inquiry and the Social Sciences, edited by Marilynn B. Brewer and Barry E. Collins, 345–60. San Francisco: Jossey-Bass.
Brewer, Marilynn B. 1997. The Social Psychology of Intergroup Relations: Can Research Inform Practice? Journal of Social Issues 53 (1):197–211.
Brewer, Marilynn B. 1999. The Psychology of Prejudice: Ingroup Love or Outgroup Hate? Journal of Social Issues 55 (3):429–44.
Brewer, Marilynn B., and Rupert J. Brown. 1998. Intergroup Relations. In Handbook of Social Psychology, 4th ed., Vol. 2, edited by Daniel T. Gilbert, Susan T. Fiske, and Gardner Lindzey, 554–94. Boston: McGraw-Hill.
Brewer, Marilynn B., and Roderick M. Kramer. 1986. Choice Behavior in Social Dilemmas: Effects of Social Identity, Group Size, and Decision Framing. Journal of Personality and Social Psychology 50 (3):543–49.
Brown, Rupert. 2000. Social Identity Theory: Past Achievements, Current Problems and Future Challenges. European Journal of Social Psychology 30 (6):745–78.
Bruner, Jerome S., and Cecile C. Goodman. 1947. Value and Need as Organizing Factors in Perception. Journal of Abnormal and Social Psychology 42 (1):33–44.
Bueno de Mesquita, Bruce, and Rose McDermott. 2004. Crossing No Man's Land: Cooperation from the Trenches. Political Psychology 25 (2):271–87.
Cacioppo, John T., and Penny S. Visser, eds. 2003. Neuroscientific Contributions to Political Psychology. Political Psychology 24 (4):647–768.
Camerer, Colin, and Richard H. Thaler. 2003. In Honor of Matthew Rabin: Winner of the John Bates Clark Medal. Journal of Economic Perspectives 17 (3):159–76.
Chomsky, Noam. 1959. Review of B. F. Skinner, Verbal Behavior. Language 35 (1):26–58.
Clore, Gerald L., and Karen Gasper. 2000. Feeling Is Believing: Some Affective Influences on Belief. In Emotions and Beliefs: How Feelings Influence Thoughts, edited by Nico H. Frijda, Antony S. R. Manstead, and Sacha Bem, 10–44. Cambridge: Cambridge University Press.
Colman, Andrew M. 2003. Cooperation, Psychological Game Theory, and Limitations of Rationality in Social Interaction. Behavioral and Brain Sciences 26 (2):139–53.
Crawford, Neta C. 2000. The Passion of World Politics: Propositions on Emotion and Emotional Relationships. International Security 24 (4):116–56.
Dahl, Robert A. 1969. The Behavioral Approach in Political Science: Epitaph for a Monument to a Successful Protest. In Behavioralism in Political Science, edited by Heinz Eulau, 68–92. New York: Atherton Press.
Damasio, Antonio R. 1994. Descartes' Error: Emotion, Reason, and the Human Brain. New York: Putnam.
Davis, James W. 2000. Threats and Promises: The Pursuit of International Influence. Baltimore, Md.: Johns Hopkins University Press.
Dawes, Robyn M. 1980. Social Dilemmas. Annual Review of Psychology 31:169–93.
Dawes, Robyn M. 1998. Behavioral Decision Making and Judgment. In The Handbook of Social Psychology, 4th ed., Vol. 1, edited by Daniel T. Gilbert, Susan T. Fiske, and Gardner Lindzey, 497–548. Boston: McGraw-Hill.
Dawes, Robyn M., and Richard H. Thaler. 1988. Anomalies: Cooperation. Journal of Economic Perspectives 2 (3):187–97.
DeSteno, David, Nilanjana Dasgupta, Monica Y. Bartlett, and Aida Cajdric. 2004. Prejudice from Thin Air: The Effect of Emotion on Automatic Intergroup Attitudes. Psychological Science 15 (5):319–24.
Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper & Row.
Downs, George W., and Michael A. Jones. 2002. Reputation, Compliance, and International Law. Journal of Legal Studies 31 (1):S95–S114.
Egan, Frances. 1995. Folk Psychology and Cognitive Architecture. Philosophy of Science 62 (2):179–96.
Elster, Jon. 1986. Introduction. In Rational Choice, edited by Jon Elster, 1–33. New York: New York University Press.
Elster, Jon. 1989. Nuts and Bolts for the Social Sciences. Cambridge: Cambridge University Press.
Elster, Jon. 1999. Alchemies of the Mind: Rationality and the Emotions. Cambridge: Cambridge University Press.
Fearon, James D. 1995. Rationalist Explanations for War. International Organization 49 (3):379–414.
Fehr, Ernst, and Urs Fischbacher. 2003. The Nature of Human Altruism. Nature 425 (23):785–91.
Finnemore, Martha. 2003. The Purpose of Intervention: Changing Beliefs About the Use of Force. Ithaca, N.Y.: Cornell University Press.
Finnemore, Martha, and Kathryn Sikkink. 1998. International Norm Dynamics and Political Change. International Organization 52 (4):887–917.
Frank, Robert H. 1988. Passions Within Reason: The Strategic Role of the Emotions. New York: Norton.
Frieden, Jeffry A. 1999. Actors and Preferences in International Relations. In Strategic Choice and International Relations, edited by David A. Lake and Robert Powell, 39–76. Princeton, N.J.: Princeton University Press.
Friedman, Milton. 1953. The Methodology of Positive Economics. In Essays in Positive Economics, 3–43. Chicago: University of Chicago Press.
Frijda, Nico H., Antony S. R. Manstead, and Sacha Bem. 2000. The Influence of Emotions on Beliefs. In Emotions and Beliefs: How Feelings Influence Thoughts, edited by Nico H. Frijda, Antony S. R. Manstead, and Sacha Bem, 1–9. Cambridge: Cambridge University Press.
Frijda, Nico H., and Batja Mesquita. 2000. Beliefs Through Emotions. In Emotions and Beliefs: How Feelings Influence Thoughts, edited by Nico H. Frijda, Antony S. R. Manstead, and Sacha Bem, 45–77. Cambridge: Cambridge University Press.
Funder, David C. 1987. Errors and Mistakes: Evaluating the Accuracy of Social Judgment. Psychological Bulletin 101 (1):75–90.
Funder, David C. 1992. Everything You Know Is Wrong. Contemporary Psychology 37 (4):319–20.
Funder, David C. 2001. Personality. Annual Review of Psychology 52:197–221.
George, Alexander L., and Juliette L. George. 1964. Woodrow Wilson and Colonel House: A Personality Study. New York: Dover.
Gigerenzer, Gerd. 1991. From Tools to Theories: A Heuristic of Discovery in Cognitive Psychology. Psychological Review 98 (2):254–67.
Gigerenzer, Gerd, Peter M. Todd, and the ABC Research Group. 1999. Simple Heuristics that Make Us Smart. New York: Oxford University Press.
Glimcher, Paul W. 2003. Decisions, Uncertainty, and the Brain: The Science of Neuroeconomics. Cambridge, Mass.: MIT Press.
Goemans, Hein E. 2000. War and Punishment: The Cause of War Termination and the First World War. Princeton, N.J.: Princeton University Press.
Goldgeier, James M. 1994. Leadership Style and Soviet Foreign Policy: Stalin, Khrushchev, Brezhnev, Gorbachev. Baltimore, Md.: Johns Hopkins University Press.
Goldgeier, James M., and Philip E. Tetlock. 2001. Psychology and International Relations Theory. Annual Review of Political Science 4:67–92.
Granovetter, Mark. 1985. Economic Action and Social Structures: The Problem of Embeddedness. American Journal of Sociology 91 (3):481–510.
Gries, Peter Hays. 2004. China's New Nationalism: Pride, Politics, and Diplomacy. Berkeley: University of California Press.
Hammond, Kenneth R. 1996. Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. New York: Oxford University Press.
Hardin, Russell. 1995. One for All: The Logic of Group Conflict. Princeton, N.J.: Princeton University Press.
Harsanyi, John C. 1986. Advances in Understanding Rational Behavior. In Rational Choice, edited by Jon Elster, 82–107. New York: New York University Press.
Hartung, John. 1995. Love Thy Neighbour: The Evolution of In-Group Morality. Skeptic 3 (4):86–99.
Hayes, Steven C., Kelly G. Wilson, and Elizabeth V. Gifford. 1999. Consciousness and Private Events. In The Philosophical Legacy of Behaviorism, edited by Bruce A. Thyer, 153–87. Boston: Kluwer Academic.
Heider, Fritz. 1958. The Psychology of Interpersonal Relations. London: Lawrence Erlbaum.
Henrich, Joseph, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, Herbert Gintis, and Richard McElreath. 2001. In Search of Homo Economicus: Behavioral Experiments in 15 Small-Scale Societies. American Economic Review 91 (2):73–78.
Hirschman, Albert O. 1977. The Passions and the Interests: Political Arguments for Capitalism Before Its Triumph. Princeton, N.J.: Princeton University Press.
Hirshleifer, Jack. 1993. The Affections and the Passions: Their Economic Logic. Rationality and Society 5 (2):185–202.
Hogg, Michael A., and Dominic Abrams. 1988. Social Identification: A Social Psychology of Intergroup Relations and Group Processes. New York: Routledge and Kegan Paul.
Hopf, Ted. 1994. Peripheral Visions: Deterrence Theory and American Foreign Policy in the Third World, 1965–1990. Ann Arbor: University of Michigan Press.
Houghton, David P. 2001. U.S. Foreign Policy and the Iran Hostage Crisis. Cambridge: Cambridge University Press.
Hunt, Morton M. 1993. The Story of Psychology. New York: Doubleday.
Innis, Nancy K. 1999. Edward C. Tolman's Purposive Behaviorism. In Handbook of Behaviorism, edited by William O'Donohue and Richard Kitchener, 97–117. New York: Academic Press.
Janis, Irving L. 1982. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Boston: Houghton Mifflin.
Jervis, Robert. [1970] 1989. The Logic of Images in International Relations. Reprint, with a new preface. New York: Columbia University Press.
Jervis, Robert. 1976. Perception and Misperception in International Politics. Princeton, N.J.: Princeton University Press.
Jervis, Robert. 1984. The Illogic of American Nuclear Strategy. Ithaca, N.Y.: Cornell University Press.
Jervis, Robert. 1989. Rational Deterrence: Theory and Evidence. World Politics 41 (2):183–207.
Jervis, Robert. 2002. Signaling and Perception: Drawing Inferences and Projecting Images. In Political Psychology, edited by Kristen Renwick Monroe, 293–312. London: Lawrence Erlbaum.
Jones, Karen. 1996. Trust as an Affective Attitude. Ethics 107 (1):4–25.
Kahler, Miles. 1998. Rationality in International Relations. International Organization 52 (4):919–41.
Kahneman, Daniel. 2000. New Challenges to the Rationality Assumption. In Choices, Values, and Frames, edited by Daniel Kahneman and Amos Tversky, 758–74. New York: Cambridge University Press.
Kahneman, Daniel, Paul Slovic, and Amos Tversky, eds. 1982. Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Kahneman, Daniel, and Amos Tversky. 1996. On the Reality of Cognitive Illusions: A Reply to Gigerenzer's Critique. Psychological Review 103 (3):582–91.
Kahneman, Daniel, and Amos Tversky, eds. 2000. Choices, Values, and Frames. Cambridge: Cambridge University Press.
Kaufman, Stuart J. 2001. Modern Hatreds: The Symbolic Politics of Ethnic War. Ithaca, N.Y.: Cornell University Press.
Keohane, Robert O. 1984. After Hegemony: Cooperation and Discord in the World Political Economy. Princeton, N.J.: Princeton University Press.
Khong, Yuen F. 1992. Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965. Princeton, N.J.: Princeton University Press.
Kier, Elizabeth. 2004. Homefront Victory: Mobilizing Labor for Total War. Unpublished manuscript, University of Washington, Seattle.
Kramer, Roderick M., Marilynn B. Brewer, and Benjamin A. Hanna. 1996. Collective Trust and Collective Action: The Decision to Trust as a Social Decision. In Trust in Organizations: Frontiers of Theory and Research, edited by Roderick M. Kramer and Tom R. Tyler, 357–89. London: Sage.
Krueger, Joachim I. 1998. The Bet on Bias: A Foregone Conclusion? Psycoloquy 9 (46). 〈http://cogsci.soton.ac.uk/cgi/psyc/newpsy?9.46〉. Accessed 15 September 2004.
Krueger, Joachim I., and David C. Funder. Forthcoming. Towards a Balanced Social Psychology: Causes, Consequences and Cures for the Problem-Seeking Approach to Social Behavior and Cognition. Behavioral and Brain Sciences.
Kydd, Andrew. 2000. Trust, Reassurance, and Cooperation. International Organization 54 (2):325–57.
Larson, Deborah Welch. 1985. Origins of Containment: A Psychological Explanation. Princeton, N.J.: Princeton University Press.
Larson, Deborah Welch. 1997. Anatomy of Mistrust: U.S.–Soviet Relations During the Cold War. Ithaca, N.Y.: Cornell University Press.
Lebow, Richard Ned. 1981. Between Peace and War: The Nature of International Crisis. Baltimore, Md.: Johns Hopkins University Press.
Lebow, Richard Ned. 1987. Nuclear Crisis Management: A Dangerous Illusion. Ithaca, N.Y.: Cornell University Press.
Lebow, Richard Ned, and Janice Gross Stein. 1987. Beyond Deterrence. Journal of Social Issues 43 (4):5–72.
Lewin, Shira B. 1996. Economics and Psychology: Lessons for Our Own Day from the Early Twentieth Century. Journal of Economic Literature 34 (3):1293–1323.
Marcus, George E. 2003. The Psychology of Emotion and Politics. In Oxford Handbook of Political Psychology, edited by David O. Sears, Leonie Huddy, and Robert Jervis, 182–221. Oxford: Oxford University Press.
McDermott, Rose. 1998. Risk-Taking in International Politics: Prospect Theory in American Foreign Policy. Ann Arbor: University of Michigan Press.
McDermott, Rose. Forthcoming. The Feeling of Rationality: The Meaning of Neuroscientific Advances for Political Science. Perspectives on Politics.
Mercer, Jonathan. 1995. Anarchy and Identity. International Organization 49 (2):229–52.
Mercer, Jonathan. 1996. Reputation and International Politics. Ithaca, N.Y.: Cornell University Press.
Mercer, Jonathan. Forthcoming. Prospect Theory and Political Science. Annual Review of Political Science 8.
Merkl, Peter H. 1969. "Behavioristic" Tendencies in American Political Science. In Behavioralism in Political Science, edited by Heinz Eulau, 141–52. New York: Atherton Press.
Messick, David M., Henk Wilke, Marilynn B. Brewer, Roderick M. Kramer, Paul E. Zemke, and Layton Lui. 1983. Individual Adaptations and Structural Change as Solutions to Social Dilemmas. Journal of Personality and Social Psychology 44 (2):294–309.
Mintz, Alex, Nehemia Geva, Steven B. Redd, and Amy Carnes. 1997. The Effect of Dynamic and Static Choice Sets on Political Decision Making: An Analysis Using the Decision Board Platform. American Political Science Review 91 (3):553–66.
Mischel, Walter. 1999. Introduction to Personality. 6th ed. Fort Worth, Tex.: Harcourt Brace.
Moore, Jay. 1995. Some Historical and Conceptual Relations among Logical Positivism, Behaviorism, and Cognitive Psychology. In Modern Perspectives on B. F. Skinner and Contemporary Behaviorism, edited by James T. Todd and Edward K. Morris, 51–74. Westport, Conn.: Greenwood Press.
Moore, Jay. 1999. The Basic Principles of Behaviorism. In The Philosophical Legacy of Behaviorism, edited by Bruce A. Thyer, 41–68. Boston: Kluwer Academic.
Morrow, James D. 1994. Game Theory for Political Scientists. Princeton, N.J.: Princeton University Press.
Oatley, Keith. 2000. The Sentiments and Beliefs of Distributed Cognition. In Emotions and Beliefs: How Feelings Influence Thoughts, edited by Nico H. Frijda, Antony S. R. Manstead, and Sacha Bem, 78–107. Cambridge: Cambridge University Press.
Orbell, John M., Alphons J. C. van de Kragt, and Robyn M. Dawes. 1988. Explaining Discussion-Induced Cooperation. Journal of Personality and Social Psychology 54 (5):811–19.
Ostrom, Elinor. 1998. A Behavioral Approach to the Rational Choice Theory of Collective Action. American Political Science Review 92 (1):1–22.
Pinker, Steven. 2002. The Blank Slate: The Modern Denial of Human Nature. New York: Viking.
Post, Jerrold M. 2003. Saddam Hussein of Iraq: A Political Psychology Profile. In The Psychological Assessment of Political Leaders, With Profiles of Saddam Hussein and Bill Clinton, edited by Jerrold M. Post, 335–66. Ann Arbor: University of Michigan Press.
Press, Daryl G. Forthcoming. Calculating Credibility. Ithaca, N.Y.: Cornell University Press.
Putnam, Robert D. 2000. Bowling Alone: The Collapse and Revival of American Community. New York: Simon and Schuster.
Rabin, Matthew. 1998. Psychology and Economics. Journal of Economic Literature 36 (1):11–46.
Rabin, Matthew. 2000. Risk Aversion and Expected-Utility Theory: A Calibration Theorem. Econometrica 68 (5):1281–92.
Rabin, Matthew. 2002. A Perspective on Psychology and Economics. European Economic Review 46 (4–5):657–85.
Rabin, Matthew, and Richard H. Thaler. 2001. Anomalies: Risk Aversion. Journal of Economic Perspectives 15 (1):219–32.
Reynolds, Katherine J., John C. Turner, and S. Alexander Haslam. 2000. When Are We Better than Them and They Worse than Us? A Closer Look at Social Discrimination in Positive and Negative Domains. Journal of Personality and Social Psychology 78 (1):64–80.
Ridley, Matt. 1997. The Origins of Virtue: Human Instincts and the Evolution of Cooperation. New York: Viking.
Riker, William H. 1962. The Theory of Political Coalitions. New Haven, Conn.: Yale University Press.
Riker, William H. 1990. Political Science and Rational Choice. In Perspectives on Positive Political Economy, edited by James E. Alt and Kenneth A. Shepsle, 163–81. Cambridge: Cambridge University Press.
Riker, William H. 1995. The Political Psychology of Rational Choice Theory. Political Psychology 16 (1):23–44.
Rilling, James K., David A. Gutman, Thorsten R. Zeh, Giuseppe Pagnoni, Gregory S. Berns, and Clinton D. Kilts. 2002. A Neural Basis for Social Cooperation. Neuron 35 (2):395–405.
Robinson, Daniel N. 1995. An Intellectual History of Psychology. 3d ed. Madison: University of Wisconsin Press.
Rosen, Stephen P. 2004. War and Human Nature. Princeton, N.J.: Princeton University Press.
Rosenberg, Alexander. 1988. Philosophy of Social Science. Boulder, Colo.: Westview Press.
Ross, Lee. 1977. The Intuitive Psychologist and His Shortcomings: Distortions in the Attribution Process. In Advances in Experimental Social Psychology, Vol. 10, edited by Leonard Berkowitz, 173–220. New York: Academic Press.
Ross, Lee. 1990. Recognizing the Role of Construal Processes. In The Legacy of Solomon Asch: Essays in Cognition and Social Psychology, edited by Irvin Rock, 77–96. Hillsdale, N.J.: Lawrence Erlbaum.
Ross, Lee, and Richard E. Nisbett. 1991. The Person and the Situation: Perspectives of Social Psychology. Philadelphia, Pa.: Temple University Press.
Samuelson, Paul A. 1938. A Note on the Pure Theory of Consumer's Behaviour. Economica 5 (17):61–71.
Sartori, Anne E. 2002. The Might of the Pen: A Reputation Theory of Communication in International Disputes. International Organization 56 (1):121–49.
Satz, Debra, and John Ferejohn. 1994. Rational Choice and Social Theory. The Journal of Philosophy 91 (2):71–87.
Schelling, Thomas C. 1966. Arms and Influence. New Haven, Conn.: Yale University Press.
Schnaitter, Roger. 1999. Some Criticisms of Behaviorism. In The Philosophical Legacy of Behaviorism, edited by Bruce A. Thyer, 209–49. Boston: Kluwer Academic.
Sears, David O., Leonie Huddy, and Robert Jervis. 2003. The Psychologies Underlying Political Psychology. In Oxford Handbook of Political Psychology, edited by David O. Sears, Leonie Huddy, and Robert Jervis, 3–16. Oxford: Oxford University Press.
Sen, Amartya. 1973. Behavior and the Concept of Preference. Economica 40 (159):241–59.
Shiller, Robert J. 2000. Irrational Exuberance. Princeton, N.J.: Princeton University Press.
Shleifer, Andrei. 2000. Inefficient Markets: An Introduction to Behavioral Finance. Oxford: Oxford University Press.
Simon, Herbert A. 1982. Models of Bounded Rationality. Cambridge, Mass.: MIT Press.
Skinner, B. F. 1956. A Case History in Scientific Method. American Psychologist 11 (5):221–33.
Skinner, B. F. 1972. Beyond Freedom and Dignity. New York: Bantam Books.
Skinner, B. F. 1974. About Behaviorism. New York: Knopf.
Skinner, B. F. 1976. Walden Two. Englewood Cliffs, N.J.: Prentice Hall.
Snyder, Glenn H. 1997. Alliance Politics. Ithaca, N.Y.: Cornell University Press.
Solomon, Robert C. 1990. A Passion for Justice: Emotions and the Origins of the Social Contract. Reading, Mass.: Addison-Wesley.
Stein, Arthur A. 1999. The Limits of Strategic Choice: Constrained Rationality and Incomplete Explanation. In Strategic Choice and International Relations, edited by David A. Lake and Robert Powell, 197–228. Princeton, N.J.: Princeton University Press.
Steinberg, Blema S. 1996. Shame and Humiliation: Presidential Decision Making on Vietnam. Pittsburgh, Pa.: University of Pittsburgh Press.
Tajfel, Henri. 1970. Experiments in Intergroup Discrimination. Scientific American 223 (5):96–102.
Tajfel, Henri. 1978. Social Categorization, Social Identity and Social Comparison. In Differentiation Between Social Groups: Studies in the Social Psychology of Intergroup Relations, edited by Henri Tajfel, 61–76. London: Academic Press.
Tajfel, Henri, and John C. Turner. 1986. The Social Identity Theory of Intergroup Behavior. In Psychology of Intergroup Relations, 2d ed., edited by William G. Austin and Stephen Worchel, 7–24. Chicago: Nelson-Hall.
Taylor, Michael. 1987. The Possibility of Cooperation. Cambridge: Cambridge University Press.
Tirole, Jean. 2002. Rational Irrationality: Some Economics of Self-Management. European Economic Review 46 (4–5):633–55.
Todd, Peter M., and Gerd Gigerenzer. 2000. Précis of Simple Heuristics That Make Us Smart. Behavioral and Brain Sciences 23 (5):727–41; open peer commentary, 742–67.
Tversky, Amos, and Daniel Kahneman. 1974. Judgment Under Uncertainty: Heuristics and Biases. Science 185 (4157):1124–31.
Tyler, Tom R. 1998. Trust and Democratic Governance. In Trust and Governance, edited by Valerie Braithwaite and Margaret Levi, 269–94. New York: Russell Sage.
Tyler, Tom R., and Robyn M. Dawes. 1993. Fairness in Groups: Comparing the Self-Interest and Social Identity Perspectives. In Psychological Perspectives on Justice: Theory and Applications, edited by Barbara A. Mellers and Jonathan Baron, 87–108. Cambridge: Cambridge University Press.
Tyler, Tom R., and Peter Degoey. 1996. Trust in Organizational Authorities: The Influences of Motive Attributions on Willingness to Accept Decisions. In Trust in Organizations: Frontiers of Theory and Research, edited by Roderick M. Kramer and Tom R. Tyler, 331–56. Thousand Oaks, Calif.: Sage Publications.
Tyler, Tom R., and Roderick M. Kramer. 1996. Whither Trust? In Trust in Organizations: Frontiers of Theory and Research, edited by Roderick M. Kramer and Tom R. Tyler, 1–15. Thousand Oaks, Calif.: Sage Publications.
Uslaner, Eric M. 1998. Social Capital, Television, and the 'Mean World': Trust, Optimism, and Civic Participation. Political Psychology 19 (3):441–67.
Verba, Sidney. 1961. Assumptions of Rationality and Non-Rationality in Models of the International System. World Politics 14 (1):93–117.
Watanabe, Shigeru, Junko Sakamoto, and Masumi Wakita. 1995. Pigeons' Discrimination of Paintings by Monet and Picasso. Journal of the Experimental Analysis of Behavior 63 (2):165–74.
Watson, John B. 1913. Psychology as the Behaviorist Views It. Psychological Review 20 (2):158–77.
Welch, David A. 1993. Justice and the Genesis of War. Cambridge: Cambridge University Press.
Wendt, Alexander. 1999. Social Theory of International Politics. Cambridge: Cambridge University Press.
Williamson, Oliver E. 1994. Transaction Cost Economics and Organizational Theory. In The Handbook of Economic Sociology, edited by Neil J. Smelser and Richard Swedberg, 77–107. Princeton, N.J.: Princeton University Press.
Wilson, David S. 2002. Darwin's Cathedral: Evolution, Religion, and the Nature of Society. Chicago: University of Chicago Press.