Rationality and Psychology in International Politics
Published online by Cambridge University Press: 15 February 2005
Abstract
The ubiquitous yet inaccurate belief in international relations scholarship that cognitive biases and emotion cause only mistakes distorts the field's understanding of the relationship between rationality and psychology in three ways. If psychology explains only mistakes (or deviations from rationality), then (1) rationality must be free of psychology; (2) psychological explanations require rational baselines; and (3) psychology cannot explain accurate judgments. This view of the relationship between rationality and psychology is coherent and logical, but wrong. Although undermining one of these three beliefs is sufficient to undermine the others, I address each belief—or myth—in turn. The point is not that psychological models should replace rational models, but that no single approach has a lock on understanding rationality. In some important contexts (such as in strategic choice) or when using certain concepts (such as trust, identity, justice, or reputation), an explicitly psychological approach to rationality may beat a rationalist one.

I thank Deborah Avant, James Caporaso, James Davis, Bryan Jones, Margaret Levi, Peter Liberman, Lisa Martin, Susan Peterson, Jason Scheideman, Jack Snyder, Michael Taylor, two anonymous reviewers, and especially Robert Jervis and Elizabeth Kier for their thoughtful comments and critiques. Jason Scheideman also helped with research assistance.
Research Article
© 2005 The IO Foundation and Cambridge University Press
Rational choice theorists and political psychologists agree that psychology explains only deviations from rationality. The primary task of political scientists, according to Downs, is to study “rational political behavior, not psychology or the psychology of political behavior.”1
Downs 1957, 9–10.
Harsanyi 1986, 83.
See Verba 1961; and Kahler 1998.
See Fearon 1995; and Goemans 2000.
See Lebow 1981 and 1987; McDermott 1998; and Jervis [1970] 1989, 1976, and 1989.
The ubiquitous yet inaccurate belief in international relations scholarship that cognitive biases and emotion cause only mistakes distorts the field's understanding of the relationship between rationality and psychology in three ways. First, the belief that psychology explains mistakes (or deviations from rationality) has led to the belief that rationality itself must be free of psychology. Second, the belief that psychology explains mistakes has led international relations theorists to believe that psychological explanations need rational baselines. Finally, the belief that psychology explains mistakes has led theorists to believe that psychology cannot explain accurate judgments. Each belief depends on the others. For example, rationality must be free of psychology if psychology explains only deviations from rationality; analysts can know what constitutes a deviation from rationality only after establishing a rational baseline; and they can insist on a rational baseline to assess the role of psychology only if that baseline is itself free of psychology. This view of the relationship between rationality and psychology is coherent and logical, but wrong. Although undermining one of these three beliefs is sufficient to undermine the others, I address each belief—or myth—in turn.
Rationality does not depend on psychology. This myth allows rationalists to imagine rational behavior independent of the mind, which is consistent with viewing psychology as explaining mistakes. After all, one cannot rely on mental phenomena as a key to rationality if mental phenomena are a source of mistakes. If rational models rely on psychology, then evidently psychology does more than explain mistakes. The first section of this article examines how political scientists came to believe that rationality is free of psychology. This belief, commonly held by rational choice theorists, can best be understood as part of an intellectual tradition in psychology and in economics that sought to eliminate all mental phenomena from explanations of human behavior. More puzzling is why political psychologists accept as their task explaining why people slip off rational baselines. I discuss a different intellectual tradition in psychology to account for political psychologists' belief that psychology explains only mistakes.
Psychology requires a rational baseline. As Elster noted, rational choice is above all else a normative theory.7
Elster 1986, 1.
Psychology cannot explain accurate judgments. Although recognizing that the first two beliefs are myths means that psychology does more than explain mistakes, I address the “psychology of mistakes” myth directly by focusing on emotion. The last section discusses how emotion is necessary both to rationality and to basic international relations concepts such as trust, identity, and justice. For example, I use the emotion in trust to solve collective action problems. Rethinking the relationship between rationality and psychology should help rationalists overcome what one economist describes as an “aggressive uncuriosity” about psychology,8
Rabin 1998, 41.
Rationality Depends on Psychology
The belief that psychology explains mistakes implies that rationality is free of psychology: because rational models cannot be built on an irrational (or psychological) foundation, each must be distinct from the other. I argue that rational models are not distinct from, but rather depend on, psychological assumptions. I use Elster's definition of rationality as using the best means to achieve a given end.9
Elster 1989, 24.
To avoid the terminological confusion that may result from the observation that psychological processes can be rational and that rational (choice) processes can be irrational, I adhere to the conventional view that models, processes, and baselines are rational to the extent that they conform to a rational choice (or “coherence”) standard of rationality. When necessary, I substitute “rationalist” for “rational choice” to distinguish it from “rational.”
For exceptions, see Sears, Huddy, and Jervis 2003, 10–12; and Bueno de Mesquita and McDermott 2004. I do not address the relationship between constructivism, psychology, and rationality. The constructivist focuses on the emergence and role of ideational structures and typically relies on the “logic of appropriateness” instead of the “logic of consequence” (or instrumental rationality) to explain behavior. For a discussion of both logics, as well as of strategic social construction, see Finnemore and Sikkink 1998. For a review of psychology and constructivism, see Goldgeier and Tetlock 2001.
Competing Approaches to Rationality
Rationalists and empiricists debated epistemology in the seventeenth, eighteenth, and nineteenth centuries. Rationalists were committed to deductive rigor, to logic, to mathematical proofs; empiricists were committed to direct experience. As summarized by Robinson, a psychologist and historian of psychological ideas: “If the empiricist traditionally has been content to discover what is, the rationalist considers what must be. The empiricist's evidence has always been the data of experience; the rationalist's, the necessary proofs following from axioms and propositions.”13
Robinson 1995, 199.
Rationalists view coherence as necessary to rational judgment. The coherence view of rationality rests on well-established norms about how one should make decisions, where a good decision is one that rests on logic and adheres to the principles of statistical inference. An incoherent judgment is one that, for example, allows the framing of information to influence judgment, incorrectly assesses probability, or violates the axioms of utility theory. Rational judgments must be free of internal contradictions.14
Dawes 1998, 497.
Colman 2003, 142. Stein discusses the tension between normative and positive explanations. Stein 1999.
A correspondence approach to rationality, which dominates in psychology, is both positive and nonnormative. It is positive in that it concerns itself above all with actual decisions. It is nonnormative in that it is agnostic about how one should reason: it does not advance rules or procedures but seeks to understand how people actually reason. Hammond, a psychologist, suggests “correspondence theorists are interested in the way the mind works in relation to the way the world works, while coherence theorists are interested in the way the mind works in relation to the way it ought to work.”16
Hammond 1996, 106. See also Colman 2003, 142.
See Hammond 1996; Funder 1987; Krueger 1998; Gigerenzer et al. 1999; and Rosen 2004.
Whereas rationalists emphasize how a rational process, ceteris paribus, produces accurate judgments, psychologists emphasize how nonnormative processes can result in accurate judgments. Thus the key difference between rationalists and psychologists is not over what they explain, but over how they explain it. Rationalists rely on deduction, statistics, and probability theory, whereas psychologists rely on induction and description of how the mind actually works. Neither focuses primarily on mistakes, though each emphasizes a different way of obtaining the same desired outcome.
Rationalists and Psychology
Behaviorists and neoclassical economists attempted to eliminate the “mind” from causal explanations of human behavior. It was not that mental phenomena caused mistakes, but that it was impossible to know what role mental phenomena played. Behaviorists and economists rejected the metaphysical in favor of the observable and material. They aimed to overthrow “folk psychology” and replace it with science. Although behaviorists did not address rationality directly, their attempt to substitute observable stimuli for mental phenomena is central to contemporary rational choice theory. Methodological reasons also drove economists to eliminate mental phenomena from their explanations. Their apparent success, coupled with a coherence epistemology, led them to believe that psychology is useful only to explain deviations from rationality. These two traditions explain rational choice theorists' belief that psychology explains mistakes.
Eliminating the mind from explanations of behavior required overthrowing folk psychology. The folk psychology of human action, also known as intuitive psychology, is the commonsense belief that one's desires and beliefs explain one's actions.18
See Rosenberg 1988; Egan 1995; and Pinker 2002.
Behaviorism emerged in the early twentieth century as a response to the problematic and unscientific nature of folk psychology.19
In contrast, behavioralism emerged in the 1950s from the efforts of political scientists to apply scientific methods to a range of issues, including cognition and affect. See Merkl 1969; and Dahl 1969.
Watson 1913, 158.
Skinner 1972, 12–13.
Skinner 1974, 104.
See Hayes et al. 1999, 163; and Moore 1995, 81.
Despite behaviorism's thirty-year dominance in psychology, it failed to liberate psychology from the mind.24
For behaviorism's dominance, see Abelson 1994; and Hunt 1993.
Hunt 1993, 294–95.
Edward Tolman developed what he called “purposive behaviorism” and demonstrated that rats really do think. Innis 1999.
Skinner 1956, 230.
Aronson et al. 1994, 21–23.
Behaviorists thought they had eliminated the mind from their explanations, for they focused on what they imagined to be law-like relationships between stimulus and response. Animals respond to incentives, such as corn pellets or cash rewards, and this allows analysts to explain and predict behavior without reference to mental processes. However, as Chomsky observed, analysts cannot identify a stimulus without first identifying a response, in which case a stimulus is not a property of the environment but of the individual's beliefs and desires. This observation makes clear, as Chomsky notes, “that the talk of ‘stimulus control’ simply disguises a complete retreat to mentalistic psychology.”30
Chomsky 1959, 32.
Even if researchers control the environment and provide only one stimulus, how subjects respond to that stimulus depends not on its physical attributes but on the subjective understanding (or construal) of the stimulus.31
For construal, see Aronson et al. 1994, 16. For the importance of subjective understanding, see Mischel 1999; Ross and Nisbett 1991; Ross 1990; and Jervis 1976.
As with their colleagues in psychology, economists coveted scientific progress, which meant developing a causal theory of human behavior. Although marginal utility theory began by expressing in mathematical terms Bentham's view that the source of all human behavior is avoiding pain and seeking pleasure, economists believed that measuring hedonic states—my pleasure in consumption is two and a half times your pleasure—was an illusion.32
See Kahneman 2000; Lewin 1996, 1297–1308; and Rosenberg 1988, 66–68. Psychologists now study and measure hedonic notions of utility. See Mercer forthcoming.
Inferring preferences from behavior resulted in empirical and theoretical problems. Empirically, economists have found so little support for the axiom of revealed preference that it is “empirically useless.”34
Lewin 1996, 1316. See also Sen 1973, 243.
Rosenberg 1988, 72.
Lewin 1996, 1318.
See Rabin 2002, 678–79; and Rosenberg 1988, 75.
See Shiller 2000; and Tirole 2002. For an exception, see Rabin and Thaler 2001.
The rational actor approach of microeconomics grounds rational choice theory.40
Like behaviorists and neoclassical economists, Satz and Ferejohn accept the existence of psychological states but believe these states are unnecessary to explain human action; rational choice theory has a “remote connection to psychology.”41 If Skinner is correct that “thinking is behaving,” then as Riker argues, analysts can infer preferences from behavior: “Choice reveals preference because one can infer backwards from an outcome and a consistent process to what the goals must have been to get to that outcome.”42
See Skinner 1974, 104; and Riker 1990, 174.
Sen 1973, 243–44.
Rosenberg 1988, 73–74.
Ibid.
Because rationalists reject the mind as causing behavior, the actor's environment carries the explanatory burden. Hardin advises that analysts should study the incentives that cause ethnic conflict, not the psychology of the participants.46
Hardin 1995, 8–9.
Achen and Snidal 1989, 164.
Operant conditioning provides many of the same basic predictions as microeconomics; see Alhadeff 1982.
Rational deterrence rests on the manipulation of incentives, which are properties of individuals, not environments. Thus what is rational for me to do depends on what inferences I believe you will make from my actions, and this cannot be specified in the abstract. How you respond to my threats or promises depends on your image of me and of yourself.51
Treating beliefs as exogenous can be sensible but does not solve the problem of inference: What will you infer from my behavior? Relying on Bayesian updating works well when the new information is objective and subject to only one interpretation: I will recognize a poker chip as red even if I was expecting a green chip, and will update my belief about the probable distribution of red and green chips in the bag. However, in most cases that interest students of international politics, new information is ambiguous, subject to multiple interpretations, and interpreted through one's beliefs, expectations, and desires. As Jervis has emphasized, people do not simply revise their preexisting beliefs in light of new evidence; they use their preexisting beliefs to interpret that evidence.52
Jervis 2002, 307.
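The mechanics of the unambiguous case can be made concrete. The sketch below, with hypothetical numbers not drawn from the article, applies Bayes' rule to the poker-chip example: two candidate bags with known red/green proportions, one objectively observed chip, and a mechanical revision of belief.

```python
# A minimal sketch of Bayesian updating in the unambiguous case described in
# the text: a chip's color is objective, so updating is mechanical.
# The priors and bag compositions are hypothetical illustrations.

def bayes_update(prior_a, p_red_a, p_red_b, observed_red):
    """Posterior probability that the chip came from bag A after one draw."""
    likelihood = p_red_a if observed_red else 1 - p_red_a
    p_red = prior_a * p_red_a + (1 - prior_a) * p_red_b
    evidence = p_red if observed_red else 1 - p_red
    return prior_a * likelihood / evidence

# Two hypotheses: bag A is 70% red, bag B is 30% red; prior belief 0.5 each.
posterior = bayes_update(prior_a=0.5, p_red_a=0.7, p_red_b=0.3, observed_red=True)
print(round(posterior, 3))  # belief in bag A rises from 0.5 to 0.7
```

The point of the contrast in the text is that no such mechanical rule is available when the "observation" itself (a troop movement, a concession) must first be interpreted through prior beliefs.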
The problems with folk psychology led behaviorists, neoclassical economists, and rational choice theorists to rely on logic, deduction, and observable behavior. Without denying the existence of the mind, scholars in these traditions view the mind as exogenous to the study of rationality. Yet incentives depend on beliefs and desires, revealed preferences cannot be causal, and making preferences (or desires) exogenous still requires knowledge of beliefs and action to assess rationality. Analysts cannot hope to exclude the mind from rationality if their explanations of rational behavior depend on the mind. Eliminating the mind from one's explanations would be an enormous accomplishment; inserting the mind by assumption is not. Of course, not all approaches to rationality depend on folk psychology. Often statistics or probability theory provides correct (or rational) answers to problems. As I discuss in the next section, this approach to rationality dominates much of the social psychology literature on judgment and helps to explain why political psychologists, like rationalists, believe that psychology explains mistakes.
Political Psychology and Mistakes
Whereas neoclassical economics still dominates economics departments, the 1960s' cognitive revolution overthrew behaviorism in psychology and reintroduced the mind as key to explaining not behavior, but mistakes. Bruner introduced cognition to psychology with his research on how motivation, such as expectations or desires, influences perceptions.53
This research led to the “New Look” in social psychology, which flourished in the 1950s. Bruner and Goodman 1947.
Tversky and Kahneman's “biases and heuristics” approach to judgment demonstrates that people systematically deviate from rational standards.56
See, for example, Kahneman, Slovic, and Tversky 1982; and Kahneman and Tversky 2000.
Tversky and Kahneman 1974, 1130.
Funder 1992. For other critiques, see Gigerenzer 1991; Gigerenzer et al. 1999; Krueger 1998; Funder 1987; and Krueger and Funder forthcoming. See the response by Kahneman and Tversky 1996; and the open peer commentary to Todd and Gigerenzer 2000.
The biases and heuristics research program encouraged political psychologists to focus on procedural errors that led to mistaken judgments. However, extending this approach to study decision making in international politics results in two problems. The first problem is identifying correctly the procedural error that led to the mistaken judgment. Analysts can attribute bad decisions to the need for cognitive consistency, improper assimilation of new data to old beliefs, the desire to avoid value trade-offs, groupthink, idiosyncratic schemas, motivated or emotional bias, reliance on heuristics because of cognitive limitations, incorrect use of analogies, the framing of information, feelings of shame and humiliation, or a miserable childhood.59
For cognitive consistency and assimilation, see Jervis 1976; tradeoffs, see Lebow 1981; groupthink, see Janis 1982; idiosyncratic schemas, see Goldgeier 1994, and Larson 1985; motivated biases, see Lebow and Stein 1987; heuristics, see Mintz et al. 1997, and Simon 1982; analogies, see Houghton 2001, and Khong 1992; frames, see McDermott 1998; humiliation, see Steinberg 1996; and childhood, see George and George 1964, and Post 2003.
The second problem is identifying the mistaken judgment. Was President George W. Bush mistaken to believe that Iraqi President Saddam Hussein posed an imminent threat to the United States in spring 2003? If Bush was correct, then analysts do not need to bother with heuristics (unless he was right for the wrong reasons).60
Rosen argues that Kennedy adhered to a nonnormative decision-making process during the Cuban Missile Crisis. Rosen 2004.
Jervis 1976, 7.
A “frequentist” rejects the Bayesian view of probability as a subjective measure of belief, and instead uses the relative frequency of repeatable events to explain probability. See Gigerenzer 1991; and Hammond 1996.
When logic yields multiple answers to the same problem, analysts must look elsewhere, such as to folk psychology, to assess rationality. Assessing the rationality of a decision-making process demands that three questions be addressed: What did the actor want, believe, and do? After establishing an actor's desires, beliefs, and actions, analysts can address how well the actor adhered to the rules of inference.63
Jervis 1976, 119.
Cognitive biases and emotion can be sources of mistakes, and understanding how cognitive biases and emotion subvert rationality is important. Yet the biases and heuristics research design presupposes unique solutions to statistical problems. In the absence of unique solutions, both rationalists and political psychologists must first establish an actor's desires and beliefs before judging behavior as rational. Only with the help of cognitive dissonance theory can one acknowledge that rationality depends on mental phenomena while simultaneously disparaging mental phenomena as a source of irrationality. Because rationality depends on psychology, psychology must do more than explain mistakes.
Psychology and Rational Baselines
The belief that psychology explains mistakes implies that psychological explanations require rational baselines. Analysts must know what is rational before they can know what is not rational. Rational models provide standards that, ceteris paribus, explain how one should reason and what constitutes a rational judgment. For example, because of misconceptions about chance, people believe that a coin toss is more likely to result in the pattern H-T-H-T-H-T than in H-H-H-T-T-T.64
This belief is a mistake. A coin does not have a memory, and probability theory tells one what to expect. If it turns out that a particular coin is more likely to land H-T-H-T-H-T, then the problem is with the coin or the toss, not with the laws of probability. Similarly, rationalists argue that analysts can understand the influence of psychology as a cause of war only after establishing an “ideal” explanation of how rational actors should behave.65
See Fearon 1995; and Goemans 2000.
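The arithmetic behind the coin-toss baseline is simple enough to verify directly: for a fair coin, any specific six-toss sequence has probability (1/2)^6 = 1/64, so the "random-looking" alternating pattern is exactly as likely as the streaky one. A short sketch:

```python
# Each exact sequence of independent fair tosses has probability (1/2)**n,
# so H-T-H-T-H-T and H-H-H-T-T-T are equally likely, despite the intuition
# that the alternating pattern "looks" more random.
from fractions import Fraction

def sequence_probability(seq, p_heads=Fraction(1, 2)):
    """Probability of observing this exact toss sequence, e.g. 'HTHTHT'."""
    prob = Fraction(1)
    for toss in seq:
        prob *= p_heads if toss == "H" else 1 - p_heads
    return prob

alternating = sequence_probability("HTHTHT")
streaky = sequence_probability("HHHTTT")
print(alternating, streaky)  # both 1/64
```

(The intuition that alternation is likelier confuses the probability of an exact sequence with the probability of its aggregate pattern: there are many more sequences with three heads scattered among six tosses than with three heads in a row.)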
For example, rational reputation models advise decision makers that reputations form predictably and systematically, and are worth defending.66
Common sense, formal models, and logic tell rationalists how reputations form and thus how rational actors should behave. What is striking about the rational reputation literature is that it should even exist. Dictionaries define a reputation as a judgment about someone's character, yet character or disposition is psychology's home turf. Riker understood that it makes “no sense in terms of rational choice theory to attempt to attribute consistent character traits to actors.”67
Riker 1995, 37.
Moore 1999, 64–65. This behaviorist explanation works only if one takes the characteristics of glass for granted; a rock dropped on a sheet of steel has a different effect. Jervis, personal communication.
For example, Riker 1962, 7.
See respectively, Keohane 1984; Ostrom 1998; and Fearon 1995.
First, eliminating psychology from a psychological concept also eliminates its explanatory power. For example, some rationalists define reputation as a judgment about someone's situation.71
See Sartori 2002, 136; and Downs and Jones 2002, 101.
Second, an analyst's reliance on unacknowledged psychological assumptions can cause mistakes. Just as behaviorists used terms such as “drive” to replace ordinary terms such as “desire,” rationalists use elaborate definitions of reputation to evade psychology. For example, a reputation becomes a belief about the probability that another is a particular “type.” A “type” is a player's private information about its expected utility for particular outcomes. Its expected utility is its payoff for a particular outcome, and observers infer probable payoffs from behavior.73
Morrow 1994, 55, 219, 242.
Ibid., 281.
Psychologists once thought they could type personality the way one types blood, but this view gave way to mounting evidence that a person's behavior correlates modestly from situation to situation. In the 1960s, psychologists suggested that subjective understandings of situations explained responses to those situations.75
Mischel 1999, 413–16.
Ibid., 429. See also Jervis 1976. For a review of personality psychology, see Funder 2001.
See Press forthcoming; Snyder 1997; Mercer 1996; and Hopf 1994.
The two observations above—that psychological concepts drained of psychology lose explanatory punch and that neglect of psychological assumptions can result in mistakes—reinforce a final point: no reason exists to privilege rationalist over psychological explanations. A correspondence approach and a coherence approach to rationality can yield competing means to the same desired end, and no principled reason exists to prefer one over the other. For example, Mercer argues that because reputations for resolve form rarely, rational actors should not fight wars over reputation.78
Mercer 1996. Also see Press forthcoming; Snyder 1997; and Hopf 1994.
For psychological approaches to finance, see Shleifer 2000; for deterrence, see Jervis 1984, and Lebow 1987.
Even on grounds of parsimony and generalizability, no reason exists to privilege rationalist explanations over psychological ones.80
See Verba 1961; and Kahler 1998.
Rabin 2002, 673–74.
For the logical but absurd consequences of adhering to expected utility theory, see Rabin 2000. For further discussion and an alternative, see Rabin and Thaler 2001; and Camerer and Thaler 2003.
Emotion and Accurate Judgments
The belief that psychology explains mistakes implies that psychology cannot explain accurate judgments. Scholars typically view cognitive biases and emotion as undermining rationality, which is why rationality is thought to be free of psychology and why psychology is thought to require a rational baseline. By eliminating the mind, analysts can determine what constitutes rational behavior. This view is mistaken: rational models typically rely on psychological assumptions; rational baselines need not precede psychological explanations; and, as I argue in this section, nonnormative (or nonrationalist) decision-making processes can provide the best means to a given end. I use emotion to demonstrate the value of a correspondence approach to rationality.
Political scientists generally treat emotion as distinct from, and ideally subordinate to, rationality.83
For exceptions, see McDermott forthcoming; Marcus 2003; and Crawford 2000.
Skinner 1976, 92.
See respectively, Becker 1976 and Ostrom 1998; Frank 1988 and Hirshleifer 1993; and Elster 1989 and 1999.
Frank 1988, 6.
Developments in neuroscience—such as brain imaging techniques that allow scientists to examine emotion in the brain—have contributed to the return of emotion to psychology and the introduction of neuroscience to economics.87
See Glimcher 2003; and Wall Street Journal, 15 November 2002, B1.
Ibid., xi.
Ibid., 193.
Damasio's research on the brain is not a parlor trick. People without emotion may know they should be ethical, and may know they should be influenced by norms, and may know that they should not make disastrous financial decisions, but this knowledge is abstract and inert and does not weigh on their decisions. They do not care about themselves or about others, and they neither try to avoid making mistakes nor are they capable of learning from their mistakes. They “know” but cannot “feel” and, consequently, they make mistakes.91
Ibid., 45.
If rationality depends on emotion, then correspondence models of rationality may have an advantage over coherence models. In this case, idealized (and emotionless) models of decision making may be a source of mistakes. For example, the rational choice of defection in a prisoner's dilemma (PD) game, or in a collective action problem, is normatively self-defeating (because it reliably fails to maximize expected utility) and positively wrong (because cooperation is common). Since the mid-1970s, experimentation on PD and strategically equivalent games (such as commons dilemmas) has invariably found what one researcher calls “rampant cooperation.”92
Colman 2003, 147.
See Colman's discussion of PD, and of Matching, Centipede, and Ultimatum games. Colman 2003.
For the neuroscience of cooperation, see Rilling et al. 2002. For a review of evolutionary models of cooperation and reputation, see Fehr and Fischbacher 2003. For neuroscience and political psychology, see Cacioppo and Visser 2003. For the use of neuroscience to examine decision making and war termination, see Rosen 2004.
Collective action becomes a problem when each individual gains more from noncooperation than from cooperating with the group, unless no one else cooperates with the group, in which case the costs of noncooperation exceed the costs of cooperation.95
If incentives encourage people to be selfish, but collective selfishness undermines desired outcomes or important societal values, how do people solve these problems? Changing the payoffs is one solution, but this abolishes rather than solves the problem.96
See Dawes 1980; and Taylor 1987.
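The incentive structure just described can be sketched with the textbook PD payoff ordering (temptation > reward > punishment > sucker's payoff); the specific numbers below are standard illustrations, not values from the article.

```python
# The two-player prisoner's dilemma incentive structure, with conventional
# illustrative payoffs satisfying T=5 > R=3 > P=1 > S=0.
PAYOFF = {  # (my move, other's move) -> my payoff
    ("D", "C"): 5,  # temptation: defect while the other cooperates
    ("C", "C"): 3,  # reward for mutual cooperation
    ("D", "D"): 1,  # punishment for mutual defection
    ("C", "D"): 0,  # sucker's payoff: cooperate while the other defects
}

# Defection strictly dominates: whatever the other player does, I earn more
# by defecting ...
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]  # if the other cooperates
assert PAYOFF[("D", "D")] > PAYOFF[("C", "D")]  # if the other defects
# ... yet mutual defection leaves both players worse off than mutual
# cooperation, which is the collective action problem.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```

Raising the reward for mutual cooperation above the temptation payoff would make cooperation dominant, which illustrates the text's point that changing the payoffs abolishes the dilemma rather than solving it.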
Trust is important to solving collective action problems, and I suggest that emotion is the basis of trust. As noted by philosophers, trust is a feeling of optimism in another's goodwill and competence.98
See Jones 1996; Becker 1996. See also Tyler 1998; Tyler and Degoey 1996; and Uslaner 1998.
See Putnam 2000, 136; and Larson 1997, 20.
Oatley 2000, 90.
For feelings as evidence, see Clore and Gasper 2000, 25.
Rationalists do to trust what they do to reputation.103
See Fearon 1995; and Kydd 2000.
Becker 1996, 47.
See Williamson 1994; and Granovetter 1985.
Jones 1996, 14.
I argue that identity produces emotion that creates trust, which then solves collective action problems. Social identity theory (SIT) dominates psychological approaches to group—not individual—behavior, and explains emotion's role in identity.107
For a review of SIT, see Brown 2000.
Tajfel and Turner 1986. Putting the group in the individual avoids the problem of aggregating preferences (which in most cases cannot be done rationally). See Arrow 1963.
For emotion in identity, see Gries 2004. For neglect of emotion in SIT, see Mercer 1995.
Tajfel 1978, 63 (emphasis in the original).
The emotion in identity provides the basis for trust. The psychologist Brewer observed that group members “uniformly evaluate members of their own group as high relative to those from other groups on characteristics such as trustworthiness, honesty, and loyalty. This consistency in the content of intergroup perceptions has led to the speculation that, in addition to serving functions related to positive personal identity, in-group bias also serves to solve a pervasive social dilemma—that of the problem of interpersonal trust.”112
Brewer 1981, 351–52 (emphasis in the original).
Brewer 1999, 438.
See Messick et al. 1983; and Tyler and Dawes 1993.
Orbell et al. 1988. For emotion's influence on out-group discrimination in minimal group settings, see DeSteno et al. 2004.
Because identity demands discrimination, relying on identity and trust to solve collective action problems is a double-edged sword. Tajfel singled out the esteem emotions as the driving force behind competition. It is the feeling of self-esteem, or pride, that makes people in groups view their group as different from, and better than, other groups. In general, the more positively one feels about one's group, the more negatively one feels about rival groups.118
In-group trust does not require out-group distrust—which is a feeling of pessimism about another's goodwill and competence—but it does require one to distinguish between trusting one's group and not trusting an out-group.119
For distrust, see Jones 1996. For trust and out-groups, see Brewer 1999; and Brewer and Brown 1998, 575.
See Hartung 1995; Ridley 1997; and Wilson 2002.
Brewer 1997, 201.
Emotion drives in-group cooperation and out-group discrimination. If this perspective is correct, then an emphasis on material incentives, abstract cognitive beliefs, or incomplete information is mistaken.122
Stuart Kaufman reaches similar conclusions in his study of ethnic conflict. Kaufman 2001.
See Adler 1997, 255, 263; and Wendt 1999, 319.
Exceptions include Kier 2004; Finnemore 2003; Crawford 2000; and Finnemore and Sikkink 1998.
A correspondence approach to rationality ought to be better than a coherence approach when the explanation for a phenomenon rests on psychological concepts such as trust, identity, or reputation. For example, “justice” depends on emotion. As Solomon argued, “The idea that justice requires emotional detachment, a kind of purity suited ultimately only to angels, ideal observers, and the original founders of society has blinded us to the fact that justice arises from and requires such feelings as resentment, jealousy, and envy as well as empathy and compassion.”125
Solomon 1990, 34.
See Rabin 2002; and Frieden 1999.
See Tyler and Degoey 1996; Rabin 2002; Camerer and Thaler 2003; and Kier 2004. See also, for experimental work on the Ultimatum game, Colman 2003; and Henrich et al. 2001.
See Welch 1993.
Recognizing emotion's central role in justice has policy implications. U.S. forces in Iraq have accidentally killed thousands of Iraqi innocents, and the victims' family members demand justice. A rationalist solution might emphasize monetary compensation, because rational people are self-interested and have preferences over outcomes. A psychological solution would attend to process: expressions of regret, recognition that a grave injustice occurred, and changes in procedure to make accidental deaths less likely. Addressing injustice in Iraq depends on whether Iraqis are driven more by greed—better known as self-interest—or, for example, by anger.129
Philosophers in the early seventeenth century began to view greed as so powerful that, when harnessed with reason, it might tame the other passions. Thus began the semantic shift from the emotion “greed” into emotionless (and virtuous) “interest.” Hirschman 1977.
Elster suggests that because emotion undermines rationality, the only problems emotion solves are those it causes in the first place.130
Elster 1999, 291.
Conclusion
Apprehending rationality is hard, and it is made harder by the belief that psychology explains only mistakes. This belief results in three myths: that rational explanations are free of psychology, that psychological explanations demand a rational baseline, and that psychology cannot explain accurate judgments. To the contrary, rational models usually rest on psychological assumptions, a correspondence model does not depend on the existence of a coherence model, and psychology can explain accurate judgments. For example, psychological economists use loss aversion and mental accounting to prevent mistakes that expected utility theory causes, neuroscientists use emotion to explain rational behavior, political psychologists use attribution theory to explain reputation formation, and I use emotion in trust and identity to solve collective action problems. The point is not that correspondence models should replace coherence models, but that no single approach has a lock on understanding rationality. In some important contexts (such as in strategic choice) or when using certain concepts (such as reputation, trust, identity, or justice), a correspondence approach to rationality may beat a coherence approach.
Rejecting the belief that psychology explains only mistakes should help rationalists overcome their fear of “going mental” and thus encourage them to pay attention to the psychological assumptions that often drive their explanations. Rejecting the belief should also encourage political psychologists to stop conceding rationality to the rationalists. Political psychology is—or at least should be—as much about accurate judgments as inaccurate ones. Finally, rejecting these beliefs makes it possible for rationalists and political psychologists to view emotion and cognition as contributing to rational behavior rather than only undermining it.