R1. Dual representations are unnecessary
We begin our rejoinder with Pinker's criticisms, because they cut to the heart of our proposal. Pinker's first point is that self-deception must involve dual representations, with truth and falsehood simultaneously stored. We disagree. An individual can self-deceive by failing to encode unwanted information in the first place. The research of Ditto and his colleagues (e.g., Ditto & Lopez 1992) provides the clearest example of this effect. By stopping their information search when the early returns were welcome (i.e., when the salivary test results suggested good health), participants in Ditto's experiments prevented themselves from ever encoding unwanted information that might have come along later. Thus, these individuals self-deceived without any representation of the truth.
Whereas Pinker proposes that the findings of Epley and Whitchurch (2008) cannot be taken as self-deception unless people can be shown to have accurate knowledge of their own appearance, we argue that this is unnecessary, even at an unconscious level. We further suspect that people's knowledge of their own biases is typically limited, and thus people might often be blissfully unaware – at any level – of the impact of their biased processing on their self-views. It should be noted, however, that individuals who engage in biased encoding might occasionally have access to the possibility that unwanted information exists and that they avoided it. That is, they may have some representation of their own biased information gathering and its potential effect on the information they have in storage.
Smith agrees with Pinker that self-deception involves dual representations, but to Smith, such dual representation seems impossible. Smith then characterizes our mental dualism approach as constructing an imaginary internal homunculus, which he tells us has “numerous problems… not the least of which is its utter implausibility.” Needless to say, constructing an internal homunculus was not part of our program, and the processes he deems impossible are, in our view, everyday events.
In his thoughts on deflationism, Smith confuses us with Mele (2001); he reintroduces a poorly defined distinction between wishful thinking and self-deception; and he ends by accusing us of believing that self-deception is both intentional and unintentional. In case anyone else managed to miss the point, let us state it clearly – we certainly believe that self-deception is intentional, in the sense that the organism itself intends to produce the bias, although the intention could be entirely unconscious. That is, humans have been favored by natural selection to self-deceive for its facilitative effect on the deception of others.
Bandura appears to agree with our argument that avoiding the truth is a form of self-deception and thus that the truth need not be harbored even in the unconscious mind. But he argues that the deceiving self must be aware of what the deceived self believes in order to concoct the self-deception. As the literature on biased processing shows, however, people can avoid unwanted information through a variety of biases that need only reflect the deceiving self's goals. As is the case with the orchid, natural selection does not require that these goals be available to conscious awareness. Thus, Bandura's dual representation of goals is also unnecessary for self-deception.
Finally, Harnad is also concerned about representation, but he appears to believe that our proposal includes the notion of self-deceivers as unconscious Darwinian robots. We are not sure what gave Harnad this idea. If he is arguing that a theory of self-deception would benefit from a better understanding of consciousness, we agree.
R2. Happiness, confidence, optimism, and guilt are interpersonal
Pinker's second point is that we have diluted the force of Trivers' (1976) original theorizing about self-deception by “applying it to phenomena such as happiness, optimism, confidence … in which the loop is strictly internal, with no outside party to be deceived.” We disagree with his characterization of happiness, optimism, and confidence as strictly internal. Instead, we regard all three of these states as being of great interpersonal importance, given that they signal positive qualities about the individual to others. Consider, for example, the effects of confidence on mating decisions. If you are considering entering into a relationship with a 25-year-old woman with low self-confidence, you may well reason (consciously or unconsciously) that she has known herself for 25 years and you have only known her for two weeks, so perhaps she is aware of important deficiencies that you have yet to discover. Similarly, an opponent's self-confidence is an important predictor of the outcome of an aggressive confrontation, and thus overconfidence can confer a benefit to the degree that it induces self-doubt in the other or causes the other to retreat.
This possibility relates to Suddendorf's analysis of the time course of the evolution of self-deception. Although he provides a compelling description of when self-deception to facilitate lying might have evolved, his analysis is limited to forms of self-deception that support deliberate and conscious efforts to manipulate the mental states of others. As noted earlier in this Response, self-deception should also underlie more general efforts to portray the self as better than it is, with overconfidence being the classic example. Because the benefits of overconfidence do not appear to be unique to human confrontations, coalition building, and mating efforts, it seems highly likely that self-deception is much older than our human lineage.
Despite these important disagreements with Pinker, it is worth returning to his central point about the importance of external parties. His eloquent dismissal of the logic of self-deception for purely internal purposes refutes the arguments proposed by Egan and McKay, Mijović-Prelec, & Prelec (McKay et al.) that self-deception could have evolved for its adaptive value in the absence of any interpersonal effects. But that does not mean that their suggestions cannot be resurrected and restated in interpersonal terms. For example, McKay et al. argue that self-deception is self-signaling intended to convince the self of its own morality and thereby eliminate the guilt associated with deception of others. We agree that this is an interpersonal benefit of self-deception, because inhibition of guilt appears to make people more successful in deceiving others (Karim et al. 2010).
Egan's motivational argument is similarly resurrected by Suddendorf, with the suggestion that foresight biases might have evolved because people often need to convince others to cooperate with their goals, and one way to persuade others is to self-deceive about the emotional impact of the outcome. This is an excellent suggestion that applies the interpersonal logic of Trivers' (1976) original idea regarding self-deception to the problem of self-motivation. If true, then affective forecasting errors should be greater when people must convince others to collaborate to pursue their goal than when the goal is a solitary pursuit. Finally, in a related vein, Mercier suggests that confirmation biases emerge not from self-deception but from the argumentative function of reasoning. But if reasoning evolved in part to serve an argumentative and thus persuasive function, we are brought back to our original proposal that people bias their information processing to more successfully convince others of a self-favorable view of the world.
R3. Self-deception is bounded by plausibility
We agree with McKay et al. that people will attempt to self-deceive but sometimes fail. We also agree that such failures will not stop people from trying again, but failures should guide their future efforts. In this sense, we are in complete agreement with Frey & Voland's proposal that self-deceptions are negotiated with the world; those that are challenged and shown to be false are no longer believed by the self, and those that are accepted by others continue to be accepted by the self. This idea resonates with Brooks & Swann's identity negotiation process, and it is a likely route by which self-deception might achieve the proper dosage that we alluded to in the target article. But it is important to note that by engaging in a modicum of self-deception during identity negotiation, people are likely to enhance their role in the relationship and the benefits they gain from it. The end result of such a negotiation is that people self-deceive to the degree that others believe.
Frey & Voland go on to argue for a model of negotiated self-deception that appears to be consistent with our proposal (see also Buss). Lu & Chang extend their arguments by suggesting that self-deception should be sensitive to probabilities and costs of detection. Thus, people might self-deceive more when they think they have a more difficult interpersonal deception to achieve.
If self-deception is sensitive to interpersonal opportunities and pitfalls, we are brought to Gangestad's important point that this too is a co-evolutionary struggle, as people should be selected to resist the self-deception of others. Gangestad then asks whether self-deception might be most successful among those who have the most positive qualities and thus are the most believable when they self-enhance. In psychology we tend to think of individuals with secure high self-esteem as those who are comfortable with themselves, warts and all (Kohut 1977). But Gangestad's suggestion raises the possibility that secure high self-esteem might actually be a mixture of capability and self-deception. This perspective suggests that implicit self-esteem might correlate with self-enhancement in the Epley–Whitchurch paradigm because those who receive enough positive feedback to enable high implicit self-esteem may be best placed to convince others that they are better looking than they really are. That is, other aspects of their personality or capabilities (or perhaps even their physical attractiveness) might cause Epley and Whitchurch's self-enhancers to be highly regarded, and because they are highly regarded, other individuals are less likely to challenge their behavior when they act as if they are even more attractive than they are.
Frey & Voland attack this interpersonal story in their claim that costly signaling theory weakens the case for self-deceptive self-enhancement. Although we agree that costly signaling theory explains why perceivers place a premium on signals that are difficult to fake (e.g., honest signals of male quality such as physical size or symmetry), it does not follow from costly signaling theory that perceivers ignore signals that can sometimes be faked. Nor does it follow that people and other animals do not try to fake such signals when possible. One can thus infer that signals that can be faked – and are thereby viewed by receivers with a more jaundiced eye – will be more readily believed by others if they are also believed by the self. In this manner, self-deception can be accommodated within costly signaling theory (see also Gangestad).
If self-deception is negotiated with others, then one important outcome of this negotiation is that self-deceptions are likely to be plausible. This plausibility constraint has a number of implications, the first of which concerns Brooks & Swann's point that although the benefits we ascribe to confidence may be accurate, that does not mean that they also apply to overconfidence. Although Brooks & Swann are certainly correct – in the sense that overconfidence has attendant costs that do not accompany confidence – it is also the case that so long as it is not dosed too liberally, overconfidence should be difficult to discriminate from confidence and thus should give people an advantage in addition to their justified confidence.
Plausibility constraints also address Brooks & Swann's second argument that self-enhancement plays only a modest role in social interaction, for which they point to a meta-analysis suggesting that self-verification overrides self-enhancement. As a plausibility perspective makes apparent, this finding is not an argument against self-deception, but rather is consistent with the idea that the benefits of self-deception are dose-dependent. Reality is vitally important, and people ignore it at their peril. Thus, self-deception will be most effective when it represents small and believable deviations from reality. A well-tuned self-deceptive organism would likely be one that biases reality by, say, 20% in the favored direction (see Epley & Whitchurch 2008). Thus, self-verification strivings (i.e., an accuracy orientation) would account for 80% of the variance in people's understanding of the world and should thereby be at least as strong if not much stronger than self-enhancement strivings when strength is interpreted as variance accounted for. But if strength is interpreted as desire to know favorable information versus desire to know reality, then this desire should fluctuate as a function of current needs and opportunities.
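To make this dose logic concrete, consider a toy simulation (our illustration, under assumed parameters – not a model taken from the target article or from any commentary): if each self-view is one's true standing plus a small bias that always points in the favored direction, accuracy dominates the variance in self-views even though every single self-view is inflated. The exact split depends on how liberally the bias is dosed.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 100_000

# Assumed toy model: self-view = true trait standing + a small,
# strictly favorable bias. Parameter values are illustrative only.
truth = rng.normal(loc=0.0, scale=1.0, size=n)        # actual standing
enhancement = np.abs(rng.normal(0.0, 0.5, size=n))    # always-positive bias
self_view = truth + enhancement

# Truth and enhancement are independent here, so the variance of
# self-views decomposes into an "accuracy" share and an "enhancement" share.
total = self_view.var()
print(f"variance from reality:     {truth.var() / total:.2f}")
print(f"variance from enhancement: {enhancement.var() / total:.2f}")
# Every self-view is inflated, yet reality carries most of the variance.
```

On this reading, meta-analytic evidence that self-verification outweighs self-enhancement is just what a well-dosed self-deceiver should produce.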
Plausibility is also relevant to the suggestion made by Brooks & Swann, Bandura, and Dunning that infrequent self-deception and consequent deception of others may well be useful, but that excessive deception is likely to result in discovery and rejection. Discovery and rejection seem highly likely if people rely too regularly on deception as an interpersonal strategy; the threat of rejection from group members is one of the evolutionary pressures for telling the truth.
Dunning then conflates excessive boasting with self-deceptive self-enhancement, again overlooking plausibility constraints. This leads Dunning to conclude that self-deception might be more useful when we regularly interact with lots of strangers. Although this argument may hold for those who rely excessively on deception and self-enhancement, for those who practice deception in moderation, self-deception ought to have facilitated deception and self-enhancement even in (perhaps especially in) long-term, stable small groups. Thus, the fact that people evolved in small interdependent bands may be all the more reason for individuals to self-deceive on those occasions when they choose to or need to deceive others.
We see further perils of ignoring plausibility in the tacit assumption shared by some of our commentators that all deception must be accompanied by self-deception. For example, Vrij argues that the self-deceptive biases we describe in our proposal account for only a small portion of the vast number of lies that people tell in their everyday lives. Although that may well be true, there is an enormous ascertainment bias: We are much more aware of our conscious deceptions than we are of our unconscious deceptions. More importantly, the existence of self-deception as an interpersonal strategy does not preclude deliberate deception. Plausibility constraints ensure that not all deception is capable of benefiting from self-deception.
In this manner, plausibility constraints also inform future research. For example, Buss describes a series of interesting hypotheses about self-deception within families and between the sexes regarding one's feelings. Deceptions about internal qualities such as feelings are difficult to disprove and thus seem likely candidates for self-deception. In contrast, the personal ad study that Buss describes, with men overestimating their height and women underestimating their weight, seems like a less plausible candidate for self-deception. Although men and women might believe their own exaggerations to some degree, it seems highly unlikely that they are self-deceived about the full extent of their claims, given the ready availability of contradictory evidence. These sorts of exaggerations are likely to be rapidly dismissed when people begin to negotiate an actual relationship, and thus initial deception of others in the absence of self-deception is likely to be more common in cases such as these.
Humphrey argues that an important cost to self-deception is loss of insight into the deceit of others. He suggests that when we deceive, we learn that others do the same, and if we self-deceive, we lose that learning opportunity. We agree that this is a cost but one that is limited by the percentage of our deception accompanied by self-deception. Plausibility constraints ensure that most of us tell plenty of deliberate lies on which we can later reflect. Humphrey goes on to note that Machiavellianism might be considered the flip-side of self-deception. This is an interesting suggestion and raises the testable hypothesis that the more one adopts either of these strategies, the less likely one is to adopt the other. This too is relevant to plausibility, given that different abilities and proclivities will make people differentially successful when using these potentially competing strategies.
Mercier notes that because lies are often detected by a lack of internal consistency, self-deception would facilitate lie detection rather than deception of others, insofar as self-deceivers could no longer maintain internal consistency. But it is easy to imagine how plausibility constraints produce the necessary consistency in self-deception. Indeed, the lies that we tell others that are based on biased information processing may be just as internally consistent as the lies we tell knowingly, maybe even more so, because we do not need to worry about mixing truth and lies when we self-deceive. For example, if I bias my information gathering in the manner described by Ditto and Lopez (1992), then all the information at my disposal is internally consistent and supportive of the conclusion that I do not have a potential pancreatic disorder. Unbiased information gathering would have only put me in the potentially awkward position of learning that I do have the potential for the disorder, and then being forced to cover up that information.
Although the need to believe one's own self-deceptions increases the likelihood that they are internally consistent, it does not follow that we are our own fiercest critics, as Fridland suggests. That is, our ability to detect deception in others does not necessarily translate into detection of deception in ourselves. Furthermore, it does not follow that if we are better at detecting lies in close others, then we should be best in detecting them in ourselves. The flaw in Fridland's reasoning is that the motivation to detect deception is opposite in self versus other deception; we are strongly motivated to detect the latter but not the former. Indeed, the world is replete with individuals who are intolerant of behaviors in others that they excuse in themselves (Batson et al. 1997; Valdesolo & DeSteno 2008). By Fridland's logic, such hypocrisy should be impossible. Likewise, we do not see the self as simply the endpoint of a continuum of familiarity from strangers to close friends to the self – rather, the self is qualitatively different.
R4. Cultural differences are only skin deep
Heine disagrees with our claim that self-enhancement is pan-cultural, cites his meta-analyses that support his position that there is no self-enhancement in East Asian cultures, and dismisses the evidence and meta-analyses that are inconsistent with his perspective. Most of these arguments are a rehash of his debate with Sedikides, Brown, and others, and because these authors have already addressed the details of Heine's claims, we refer interested readers to their thorough rebuttals rather than devote the necessary ink here (e.g., see Brown 2010; Sedikides & Alicke, in press; Sedikides & Gregg 2008). It is important to note, however, that Heine has missed the bigger picture regarding the purpose of self-enhancement. From an evolutionary perspective, the critical issue is not that self-enhancement is intended to make the self feel better about the self, but rather that it is intended to convince others that the self is better than it really is. Self-enhancement is important because better selves receive more benefits than worse selves. Better selves are leaders, sexual partners, and winners, and worse selves are followers, loners, and losers, with all the social and material consequences of such statuses. Because East Asians also win friends, mates, and conflicts by being better than their rivals, we expect that they will gain an edge in coalition building, mating, and fighting by self-enhancing (just as Westerners do). But because different cultures have different rules about what it means to be moral and efficacious, as well as different rules about how best to communicate that information, it also follows that there should be cultural variation in self-enhancement.
By virtue of their collectivism, cultures in East Asia place a premium on harmony and fitting in with others, and thus modesty is an important virtue. As a consequence, East Asians are far less likely than individualist Westerners to claim to be great or to act in ways that suggest they believe they are great, given that immodesty itself provides direct evidence in East Asia that they are not so great after all. The importance of modesty in collectivist cultures raises the possibility that East Asians may self-enhance by appearing to self-denigrate – by exaggerating their modesty. That is, humble claims by East Asians could be made in service of self-enhancement. Consistent with this possibility, Cai et al. (2011) found that among Chinese participants, dispositional modesty was negatively correlated with explicit self-esteem but positively correlated with implicit self-esteem (measured via the IAT [Implicit Association Test] and preference for one's own name). In contrast, among North American participants, dispositional modesty was negatively correlated with explicit self-esteem and uncorrelated with implicit self-esteem. Indeed, when Cai et al. (2011) instructed Chinese and North American participants to rate themselves in either a modest or immodest fashion, they found that immodest self-ratings reduced implicit self-esteem and modest self-ratings raised implicit self-esteem among Chinese participants but had no effect on North American participants. These results provide evidence that modesty can itself be self-enhancing, and just as important, that modesty demands will by necessity minimize explicit self-enhancement in East Asian cultures.
Nevertheless, these results do not provide clear evidence that East Asians believe they are better than they are (as we claim they should), because implicit measures are not well suited for demonstrating such a possibility. The Epley–Whitchurch (2008) paradigm is well suited for demonstrating that people believe they are better than they are. We expect that the Epley–Whitchurch paradigm would reveal self-enhancement just as clearly in East Asia as it does in the West. Furthermore, particularly strong evidence for our argument regarding the impact of modesty on claims versus beliefs would emerge if Easterners chose their actual or even an uglified self when asked to find their self in an array of their own real and morphed faces (Epley & Whitchurch 2008, Study 1) but nevertheless found their attractive self more quickly in an array of other people's faces (Epley & Whitchurch 2008, Study 2). Such a pattern of results would speak to the value of “claiming down” while “believing up” – or the co-occurrence of modesty and self-enhancement in collectivist cultures. Despite the allure of such a finding, it may be the case that the Epley–Whitchurch paradigm is too subtle for most people to use to demonstrate modesty, and thus East Asians may show self-enhancement in both explicit choices and reaction times. Heine would presumably predict that self-enhancement would not emerge in East Asia with either of these measures.
In a similar vein, Egan suggests that if self-deception serves deception of others, then people should be particularly likely to deceive themselves about moral issues, because getting others to believe one is more moral would cause others to “lower their guard.” In apparent contrast to this prediction, Egan notes that Balcetis et al. (2008) found that collectivists self-enhance less in moral domains than do individualists. Unfortunately, Balcetis et al.'s study does not address Egan's point. Balcetis et al. asked people to make judgments regarding moral behaviors in which they might engage, and they found that collectivists were less likely than individualists to overestimate their tendency to engage in moral behaviors. But what does this finding mean with regard to self-deception? Are collectivists simply self-enhancing less on overt measures, as has been found many times in many domains (see Heine)? Or are collectivists more attuned to their own interpersonal behavior, and thereby more aware of their actual likelihood of engaging in a variety of interpersonal moral acts? It is unclear from the Balcetis et al. studies whether collectivism is truly associated with self-deceptive self-enhancement in moral domains, nor is it clear what such a finding would mean with regard to the evolution of self-deception.
R5. There is a central self and it's a big one
Guided by data from cognitive and social psychology that indicate that self-knowledge is too vast to load into working memory, Kenrick & White suggest that content domains determine which subselves are activated and guide information processing. This view leads Kenrick & White to redefine self-deception as selectivity. Although we agree that only certain aspects of the self are activated at any one time, we disagree with their proposed implication of this issue for self-deception. As Kenrick & White note, selectivity involves gathering, attending to, and remembering only that which is important to the active subself (or working self-concept; Markus & Wurf 1987). But people do not simply gather, label, and remember that which is selectively important to the active subself. Rather, people also gather, label, and remember information that is biased in favor of their current goals. Good news and bad news are equally important to the active subself, but self-deception selectively targets the good news – the better to persuade others that the state of the world is consistent with the goals of the self-deceiver.
In response to a similar set of concerns, Kurzban attempts to sidestep the problem of self-deception by deleting the concept of the “self” in favor of multiple, independent mental modules. In our opinion this is an error, as data from social psychology and cognitive neuroscience suggest otherwise. For example, the finding that brain regions involved in executive control can inhibit the activity of other brain regions (e.g., Anderson et al. 2004) suggests that there is a central self and that this central self has (limited) access to and control of information processing.
Kurzban also wants to avoid metaphors such as level of consciousness, and he argues instead that modularity provides an easy solution to the problem of self-deception. On the surface these arguments seem compelling, given that self-deception in a modular system seems simple, almost inevitable. Problems emerge, however, when we take the modularity metaphor seriously. If we accept the idea that the mind is composed of isolated modules, we are led to a question similar to that raised by Bandura: Which module directs behavior when two (or more) modules are equally relevant? Without a central self that controls, activates, and inhibits other systems, modules would continually be captured by external events in a manner that would disrupt sustained goal pursuit.
Perhaps more importantly, if a modular system shows functional specificity and information encapsulation (Kurzban & Aktipis 2006), why do systems that should logically be distinct leak into each other? For example, why does hand washing reduce cognitive dissonance (Lee & Schwarz 2010); why does touching a hard object make people more rigid in negotiations (Ackerman et al. 2010); why does holding a warm cup make someone else seem friendlier (Williams & Bargh 2008); and why do people who were excluded feel physically cold (Zhong & Leonardelli 2008)? Embodiment research demonstrates that modules, if they exist, are neither functionally distinct nor autonomous. Furthermore, the notion of levels of consciousness might be less metaphorical than the notion of modularity, given that research dating back to Ebbinghaus (1885) has shown that some information can be consciously recollected, other information can be recognized as accurate even though it was not recollected, other information leaves a residue in consciousness even if the information itself is unconscious (e.g., the sense of familiarity – Jacoby 1991), and still other information is entirely unconscious but nevertheless drives behavior in a manner that is also outside of conscious awareness (Kolers 1976). Thus, the concept of modules allows us to escape neither the concept of the self nor levels of consciousness, leaving us to conclude that modularity does not provide an easy solution to the problem of self-deception after all.
Nevertheless, the existence of subselves can lead to competing goals at different levels of consciousness. This possibility leads Huang & Bargh to argue that unconscious pursuit of goals that are inconsistent with conscious desires is a form of self-deception. We agree that these dual systems of goal pursuit enable self-deception, and some of the examples they cite support such an interpretation. We disagree, however, that such deviations from conscious behavioral control are necessarily varieties of self-deception. Rather, these inconsistencies are evidence of competing goals that may or may not involve self-deception. For example, if a person previously held a certain attitude and then was persuaded to the opposite position, the original attitude might remain in an unconscious form and might continue to influence some types of goal-directed behavior (Jarvis 1998). This sort of slippage in the mental system facilitates self-deception, but it can emerge for non-motivational reasons as well, such as habit.
Although there is now substantial evidence for such dissociations between conscious and unconscious processes, Bandura argues that interconnections between brain structures suggest that mental dualisms of this sort are unlikely. It does not follow from rich neural interconnections, however, that people have conscious access to information that is processed outside of awareness. Conscious access is clearly limited, although as noted earlier, there is commerce between the conscious and unconscious mind. Bandura goes on to question how the contradicted mind can produce coherent action. The Son Hing et al. (2008) experiment provides an answer to this question: This study shows that competing conscious and unconscious attitudes influence behavior under predictable circumstances (see also Hofmann et al. 2009). It should be kept in mind, however, that attitudinal ambivalence can also be reflected in consciousness, and thus people often knowingly behave in self-contradictory ways.
R6. Self-deception has costs
Funder raises two important issues. First, he asks why people would ever self-deceive in a downward direction. Although most people self-enhance, some people self-diminish. If we accept the possibility that these individuals are as likely as self-enhancers to believe their self-diminishing claims, the question arises whether self-diminishment is favored by natural selection and, if so, why.
We believe that there are two major sources of “deceiving down.” On the one hand, it is sometimes directly adaptive. In herring gulls and various other seabirds, offspring actively diminish their apparent size and degree of aggressiveness at fledging so they will be permitted to remain near their parents, thereby consuming more parental investment. In many species of fish, frogs, and insects, males diminish apparent size, color, and aggressiveness to resemble females and steal paternity over eggs (see Trivers 1985). These findings indicate that deceiving down can be a viable strategy in other species, and thus likely in humans as well, which should lead to self-deceptive self-diminishment. An important next question is to identify the domains in which different types of people gain by self-diminishing.
On the other hand, people may also be socialized or otherwise taught that they are less capable, moral, or worthy than they really are. If acceptance of this negative message leads to self-diminishment biases, then perhaps such individuals' self-views represent an imposed variety of self-deception, whereby the individual self-deceives in a direction that benefits someone else, such as a parent, spouse, or dominant group (Trivers 2009). If so, this would appear to be an important cost to self-deception that may be borne by a substantial percentage of the population.
Related to this issue, Funder's second point is that we make only passing reference to the costs of self-deception and focus almost exclusively on the associated gains. We plead guilty to this charge, because our goal was to establish why self-deception might have evolved, and that goal required attention to possible benefits. The costs of self-deception – particularly those regarding loss of information – seem apparent and real, and thus it is important to establish the benefits that select for self-deception in the first place. Of course, any mature evolutionary theory of the subject must be based on a cost/benefit analysis, and we acknowledge many of the costs that have been suggested.
For example, Mercier suggests that if we do not know that we lied, then we cannot apologize and make amends (see also Khalil). This is true, and it is a cost of self-deception. However, this cost must be weighed against the gain achieved by appearing less culpable when lying. Ignorance is typically a lesser crime than duplicity, but an apology may sometimes offset or even outweigh that benefit.
A different type of cost is raised by Johansson, Hall, & Gärdenfors (Johansson et al.), who suggest that people might accidentally convince themselves in their self-deceptive efforts to convince others, and thus they might not retain any representation of the truth. In support of this possibility, Johansson et al. discuss choice blindness, or the frequent inability of people to notice that their preferred option was switched after they made a choice. In their experiments on choice blindness, Johansson et al. found that people justified their non-chosen outcome just as vociferously as their chosen one, which suggests that they do not know why they chose as they did. Most intriguingly, Johansson et al. also describe evidence that these justified non-choices later become preferred options, suggesting that people have convinced themselves of their new preferences.
We agree that this is a possible cost and see it as another example of why simultaneous representation of truth and falsehood is not necessary for self-deception. Nevertheless, there may be some types of self-deception where the truth is lost and other types where the truth is retained and can later be accessed when the deceptive deed is done (Lu & Chang). This is a provocative possibility, and there is evidence to suggest that such a system might operate at least some of the time. For example, Zaragoza and colleagues (e.g., McCloskey & Zaragoza 1985) have found that suggestive probes will cause people to recall false information in line with the suggested information. When individuals are later given a recognition task, however, they remain capable of recognizing the accurate information with which they were originally presented. Thus, the possibility remains that memory might be distorted in favor of a self-deceptive goal and then revert back to reality when the deception ends, which would minimize the costs outlined by Johansson et al.
Frankish describes a cost similar to that of Johansson et al. in his argument that people often accept a claim as true for the purpose of exploring it or taking a certain perspective, despite knowing that it is false. Frankish proposes that this system has some slippage, as people can come to believe what they originally only accepted, perhaps by observing their own behavior. Acceptance appears to be another avenue by which people could end up self-deceiving in their efforts to deceive others. Particularly if it proves to be the case that people are more likely to adopt an accepting role when faced with congenial arguments, Frankish's suggestion would seem to provide another route for self-deception in service of offensive goals.
The notion of costs takes a different turn in the arguments put forward by Khalil, who tells us that in claiming that self-deception has evolved, we are implicitly assuming that it is “optimal.” Unfortunately, he does not tell us what he means by this term or why we should be tagged with believing it. Apparently our argument that self-deception may be a tactic in deceiving others commits us to the notion that self-deception is optimal. We know of no possible meaning of “optimal” for which this is true, but we are not trained in economics. In biology and psychology, we do not believe that evolution generates optimality very often (if at all). Nor do we believe that self-deception is optimal, in the sense of not being able to be improved upon. Khalil argues that if he can show that self-deception is suboptimal, then our theory collapses, but he offers no logic to support this claim. Economics apparently produces theorems claiming that suboptimal solutions will be swept aside by economic forces, but we doubt the validity of such theorems even within economics, much less evolutionary biology and psychology.
We do agree that Adam Smith had some very insightful things to say about self-deception, that open confession and plea for forgiveness may often be preferable to self-deception, and that self-deception can sometimes be costly. But unlike Khalil, we believe that selection can favor individuals who fight their tendency to self-deceive in some circumstances while at the same time practicing it in others.
R7. It remains unclear how well people detect deception
Vrij's major point is that people are in fact poor lie detectors, and he claims that this conclusion is supported by the various experiments that he cites. We examine the details of this claim in the following paragraphs, but let us note at the outset that every one of the studies Vrij cites (as well as those cited by Dunning on this point) suffers from the same methodological limitations we discuss in our target article. Thus, none of the additional data that are raised address the criticisms made in the target article.
First, Vrij argues that experiments advantage detectors rather than deceivers because detectors know they might be lied to. Although this is true, most situations that motivate lying in everyday life also raise awareness that one might be lied to (as Humphrey notes). Second, Vrij argues that intimacy increases perceptions of truthfulness. Although this appears to be true much of the time, it does not follow that intimacy decreases accuracy in detection of deception. Rather, intimacy could lead to a stronger perceived likelihood of truthfulness (beta in signal detection terms) and simultaneously better discriminability (d′), as the sketch following this paragraph illustrates. The lack of evidence for better lie detection in friends over strangers described by Vrij speaks more to the methods used than to the answer to this question, because these studies suffer from the same limitations as the ones we reviewed. Third, the Hartwig et al. (2006) study raised by Vrij is interesting, but it does not speak to the question that concerns us: whether cross-examination helps people detect deception. Again, the lies told in Hartwig et al. were trivial, and thus cross-examination will not necessarily increase accuracy. Fourth, Vrij's own research on enhancing cognitive load is also interesting, but it is easy to see how his work supports our position that cross-examination helps detectors, because cross-examination also enhances cognitive load on the deceiver. The fact that Vrij can demonstrate effects of cognitive load with trivial lies provides further evidence that lie detection remains an important threat to liars. Fifth, Vrij raises the fact that Park et al.'s (2002) research shows that detection of lies in daily life relies almost entirely on third parties and physical evidence, and not on nonverbal behavior. As Vrij notes, however, 14% of the lies reported in this research were detected via confessions. Importantly, these were solicited confessions based on confrontations about suspicious behavior and the like. Additionally, the methodology in this study was based on recall, which is not only faulty but also influences the type of lies people may choose to disclose to the experimenters. Last, participants in the Park et al. study were simply asked about a recent lie they discovered, and thus as with most previous research, many of the lies they detected were likely to be trivial. Selection pressure should be heaviest on important lies, and thus we reiterate our claim that research is necessary to examine the detection of important lies in vivo.
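To unpack the signal detection distinction invoked in the second point above, here is a minimal sketch using the standard equal-variance Gaussian formulas; the hit and false-alarm rates are hypothetical numbers supplied purely for illustration, not data from Vrij's or any other study. The point is that intimacy could simultaneously shift the response criterion toward “truth” judgments and improve discriminability.

```python
from math import exp
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)

def sdt(hit: float, fa: float) -> tuple[float, float, float]:
    """Equal-variance Gaussian signal detection: return (d', c, beta).

    Here a "hit" is correctly judging a lie to be a lie, and a "false
    alarm" is judging a truthful statement to be a lie.
    """
    d_prime = z(hit) - z(fa)       # discriminability
    c = -(z(hit) + z(fa)) / 2      # criterion; c > 0 = reluctant to say "lie"
    beta = exp(d_prime * c)        # likelihood-ratio form of the criterion
    return d_prime, c, beta

# Hypothetical rates, assumed for illustration only:
print(sdt(0.60, 0.40))  # strangers: d' ~ 0.51, neutral criterion (c ~ 0)
print(sdt(0.70, 0.20))  # intimates: d' ~ 1.37 AND c > 0 -- better
                        # discrimination coexisting with a truthfulness bias
```

In these terms, evidence that intimates are inclined to judge their partners truthful speaks to the criterion (beta), not to discriminability (d′); the two parameters can move independently, so a truthfulness bias is fully compatible with improved detection.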
Finally, Dunning raises a related possibility that people might do a better job deceiving if they do not care about the truth. This argument is plausible for deceptions that have few if any consequences for the deceiver (and we agree that “bullshitting” is a case in point), but the more important the deception is, the more people are forced to care about the truth because its discovery will cause them harm. Thus, this possibility seems unlikely for important deceptions.
R8. Self-deceivers are liars, whether they know it or not
Fridland and Vrij both argue that self-deceivers are not lying (although Fridland's claim is stronger, in that she states that they are not even deceiving). This is true in the strictest sense of what it means to lie, but untrue once we understand that deception of others is the motivation for self-deception. For example, imagine I want to convince you that your spouse was not with my best friend while you were out of town. Imagine further that I have an acquaintance who mentions that he saw your spouse at 3:00 p.m. in the hair salon and at midnight in a bar. If I choose not to ask my acquaintance whom your spouse was with, or if I only ask my acquaintance whom she was with in the hair salon and avoid asking the more probative question of whom she was with in the bar, then I am lying when I later tell you that to the best of my knowledge she was not with my friend. Strictly speaking, what I am telling you is true. But the lie occurred when I initially gathered information in a biased manner that served my goal of convincing you of my friend's innocence regardless of what the truth might be.
R9. Motivation can drive cognition
Kramer & Bressan make the interesting suggestion that belief in God might be an unintended consequence of efficient cognitive processes rather than evidence of self-deceptive processes that impose order and a sense of control on the world. Kramer & Bressan suggest that people who have stronger schemas are more likely to have their attention captured by schema-violating events, with the end result that they attribute supernatural importance to what are in essence coincidences. But what if motivation drives cognition? What if people who have a stronger than average motivation to see order in their universe (perhaps because they feel insufficiently resourceful to cope in the absence of order and control) are more likely to establish strong schemas when they receive any support for those schemas from the environment? Such strong schemas then provide them with the order that they desire.
From Kramer & Bressan's data we do not know why some people have strong schemas and others do not – we only know that there are individual differences. If motivation influences the establishment of strong schemas, then Kramer & Bressan's data could be construed as support for our self-deception argument, because the people who crave order in the universe are more likely to see schema violations as meaningful events. The same argument holds for their description of the Amodio et al. (2007) study in which strong schemas were associated with the desire for strong government. Indeed, liberals have greater tolerance of ambiguity than conservatives (Jost et al. 2007), which again suggests that motivation might drive schema strength rather than the other way around. In contrast to Kramer & Bressan's claim, schema strength is not evidence of “efficient memory and attentional processes”; rather, it is the flexible use of schemas in information processing (i.e., assimilation and accommodation) that enables efficient memory and attention. Kramer & Bressan conclude by noting that illusions and conspiracy beliefs could also result from reduced motivation to accurately analyze the situation among those who are unlikely to gain control. In contrast to this possibility, people with a low sense of control typically engage in effortful (but often unsuccessful) struggles to regain control and comprehension (e.g., Bransford & Johnson 1973; Weary et al. 2010).
In contrast to the “accidental cognitive by-product” arguments of Kramer & Bressan, Gorelik & Shackelford suggest that religion and nationalism might be considered an imposed variety of self-deception. Their proposal extends the work of Kay et al. (2008) and Norris and Inglehart (2004) by considering various ways that self-deception might intertwine with religious and nationalistic practice.
Mercier makes a more general version of Kramer & Bressan's argument by claiming that we need not invoke self-deception to explain self-enhancement; rather, it could be the result of an error management process that favors less costly errors. However, to support this argument he is forced to dismiss the cognitive load and self-affirmation findings, which he does by claiming that these manipulations reduce the tendency to engage in reasoning. Although cognitive load certainly disrupts reasoning, self-affirmation does not. Rather, self-affirmation reduces some types of reasoning (i.e., self-image maintenance) while increasing other types (e.g., consideration of unwelcome information; for a review, see Sherman & Cohen 2006). Indeed, the different mechanisms but similar consequences of self-affirmation and cognitive load are one reason they are so powerful in combination – one is motivational, and the other cognitive.
R10. Mental illness is associated with too little and too much self-deception
Preti & Miotto suggest that people with mental illness self-deceive less than mentally healthy people. We would agree that this is often the case (see Taylor & Brown 1988). Preti & Miotto conclude by noting that mirror systems might enable self-deceivers to better detect deception in others, at least when that deception is accompanied by self-deception. This is an intriguing prediction that also sets boundary conditions on Humphrey's proposal that self-deception might inhibit detection of deception in others.
Troisi argues that somatoform disorders are often a form of self-deception, and he describes an experiment by Merckelbach et al. (2010) in which people were asked to explain why they had endorsed a symptom that they had not in fact endorsed. Over half of the participants did not notice the switch and showed evidence that they now believed they had the symptom. Similarly, people who had earlier feigned a symptom showed evidence that they now believed they had it. These findings seem similar to the choice blindness studies of Johansson et al. in which people also unintentionally deceived themselves. Such somatoform disorders could be regarded as a cost to a system that allows self-deception – if the mind evolved to allow people to be able to convince themselves of desired outcomes, then it might also allow people to convince themselves of unintended or unwanted beliefs. As Troisi notes, however, somatoform disorders can be regarded as an interpersonal strategy intended to elicit sympathy and care.
R11. Conclusions
Our goal has been to show that a comprehensive theory of self-deception can be built on evolutionary theory and social psychology. Our commentators have raised areas where the data need to be strengthened, have noted alternative hypotheses, and have disagreed with us about the central tenets of our theory. They have also suggested refinements in logic and interpretation of the evidence and have made novel predictions. None of the empirical or conceptual challenges strikes us as fatal to the theory, and so it remains for future research to assess the merit of our ideas and how best to extend them. Call it self-deception if you will, but we think that our enterprise has passed the first test.