Making Sense of Self-Deception: Distinguishing Self-Deception from Delusion, Moral Licensing, Cognitive Dissonance and Other Self-Distortions*

Published online by Cambridge University Press:  18 September 2017

Abstract

There has been no systematic study in the literature of how self-deception differs from other kinds of self-distortion. For example, the term ‘cognitive dissonance’ has been used in some cases as a rag-bag term for all kinds of self-distortion. To address this, a narrow definition is given: self-deception involves injecting a given set of facts with an erroneous fact to make an ex ante suboptimal decision seem as if it were ex ante optimal. Given this narrow definition, this paper delineates self-deception from deception as well as from other kinds of self-distortions such as delusion, moral licensing, cognitive dissonance, manipulation, and introspective illusion.

Research Article
Copyright © The Royal Institute of Philosophy 2017

Introduction

The philosophical, psychological, business, management, and economics literature has, in different degrees and styles, addressed the problem of self-deception. The contribution of this paper resides in being the first attempt to distinguish – in a systematic way – self-deception from other kinds of motivated irrationality that stem from what is called here ‘self-distortion’. In many cases, the literature confuses self-deception with other kinds of self-distortion – often as a result of lumping them under the umbrella term ‘cognitive dissonance’. There is a need for academic self-discipline manifest in clearly defined terms that sharply distinguish the precise phenomena of interest.

This paper clarifies the phenomenon of ‘self-deception’ and demarcates it from other kinds of self-distortions that are often also called ‘self-deception’. Despite the benefits of clear terminology, this paper is not so much about terminology as it is about necessary conceptual distinctions. After all, one can still use the term ‘self-deception’ in the broadest sense, which enables it to denote different kinds of self-distortion. Rather, the question is about distinguishing different phenomena of self-distortion and how one ought to be self-disciplined when using the terms that denote each separate phenomenon. Thus, one must make sure not to call the different kinds of self-distortion by the same name unless one makes it clear that such a name is a rag-bag term. In this paper, the term ‘self-distortion’ is a rag-bag term intended to capture different phenomena with a broad family resemblance, viz. how one hides or manipulates certain facts or beliefs in order to distort the view of one's conscious self.

This paper commences with the obvious question: Why would rational agents undertake self-deception? Why would they block certain information from the conscious self – especially when such information is free and such blocking involves mental tricks and mental gymnastics?

To illustrate the idea, let us start with the story of Adam and Eve in the Garden of Eden – in its simplest rendering and without digressing into Biblical scholarship. Once Adam and Eve had eaten from the tree of knowledge, they should have known that God – the omniscient Being – was aware of their sinful (suboptimal) act. Nonetheless, when God confronted Adam, he made a ‘serious’ effort to convince God otherwise, i.e. that the act was not sinful but rather justified (optimal) given the information. Adam advanced the following excuse: Eve told him it was fine to eat from the tree of knowledge. Adam truly believed that this excuse justified his act. Adam also knew that God was aware of the act. Thus, Adam was telling God about his intention – he thought his act was based on an optimal belief: Eve's statement allowed him to do it. In stating this, Adam is ultimately communicating with himself: He is telling himself that he did not commit a suboptimal (sinful) act.

This story should suggest that self-deception is an act whose audience, in the final analysis, is the conscious self. Spectators are not essential. If Adam was trying to deceive God, he should have known better: He should have known that God already knew that Adam obeyed Eve. Adam was rather trying to do something else: Adam was trying to convince himself, all the while assuming that succeeding on this front would also convince God that he performed the optimal act. So, the ultimate target of Adam's excuse is himself. Adam's excuse is no different from the justifications that we inject to make ourselves feel better, i.e. avoid self-blame when we make suboptimal choices such as under-saving for a year, neglecting a particular diet for an evening, or failing to exercise for a whole month.

How could self-deception be possible if the target is primarily the conscious self? Why would agents want to fool themselves – given it is an arduous and costly task? This is an important question if we want to understand the motives of leaders, whose actions have tremendous implications for the prosperity and development of their organisations, ranging from government agencies to business firms.Footnote 1 Such implications are not investigated here. The task, first, is to make sense of self-deception.

As detailed elsewhere, self-deception involves a doubly suboptimal act.Footnote 2 First, the agent undertakes an ex ante suboptimal decision, as was the case when Adam ate from the tree of knowledge. Second, the agent, in order to avoid the pain of remorse or self-blame, fabricates a ‘fact’ or invents an ad hoc interpretation of existing facts in order to make the ex ante suboptimal choice look as if it were optimal. In the case of Adam and Eve, Adam fabricated the fact that he was following the order of Eve – as if Eve's suggestion trumps God's. So, we can define self-deception succinctly:

Self-Deception: For an agent to commit self-deception, the agent must be, first, ex ante cognizant, at a deep level, that a decision is suboptimal. Second, the agent must invent a ‘fact’ or an ad hoc reconstruction of facts to make the decision look as if it were optimal.

While self-deception necessarily entails the agent's deep belief that the act is suboptimal, not all suboptimal acts necessarily lead one to invent a cover-up. The agent may admit the suboptimal act, i.e. not try to fool the self.

In particular, the act of self-deception involves an agent who

(1) knows ex ante at some deep level that decision X is suboptimal (e.g. buying an expensive watch is thoughtless and hence suboptimal); but, nonetheless,

(2) executes decision X (e.g. buys the watch); and

(3) invents, at a conscious level, an ad hoc reconstruction of the facts to make decision X seem optimal (e.g. ‘the watch is not expensive’).
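To make the three steps concrete, here is a minimal sketch in Python; the class, its two belief stores, and the watch example's details are illustrative assumptions introduced here, not part of the paper's formal apparatus:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    deep_beliefs: dict = field(default_factory=dict)       # sub-conscious assessment
    conscious_beliefs: dict = field(default_factory=dict)  # what the conscious self 'sees'

    def assess(self, decision: str, optimal: bool) -> None:
        # Step 1: the agent knows ex ante, at a deep level, whether the decision is optimal.
        self.deep_beliefs[decision] = optimal

    def execute(self, decision: str) -> None:
        # Step 2: the agent executes the decision nonetheless.
        print(f'executed: {decision}')

    def cover_up(self, decision: str, fib: str) -> None:
        # Step 3: an ad hoc 'fact' makes the decision look optimal to the
        # conscious self; the deep assessment is left untouched.
        self.conscious_beliefs[decision] = fib

agent = Agent()
agent.assess('buy the expensive watch', optimal=False)  # known, deep down, to be suboptimal
agent.execute('buy the expensive watch')
agent.cover_up('buy the expensive watch', 'the watch is not expensive')

# Self-deception: the deep and the conscious assessments now diverge.
print(agent.deep_beliefs, agent.conscious_beliefs)
```

The point of the sketch is only that the fib is written into the conscious store while the deep assessment persists – the doubly suboptimal structure discussed next.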

This framework leads us immediately to an ironic conclusion: While self-deception is doubly suboptimal, we need the theory of rational choice to identify the suboptimal act (step #2) and the suboptimal fabrication of facts (step #3).

The doubly suboptimal nature of self-deception sets self-deception apart from other kinds of self-distortion. This paper identifies six other kinds of self-distortion:

(i) Deception: Self-deception, contrary to work in evolutionary psychology, is not the same as a roundabout technique – usually hidden from the self – for deceiving others;

(ii) Delusion: Self-deception is deceiving one's self about a clear-cut fact, while delusion is deluding one's self about an aspiration or a goal. The aspiration or goal is never, by definition, a clear-cut fact. It is usually fuzzy, motivated by visionary drives, imaginative, and hence cannot be easily questioned as is the case with clear-cut facts;

(iii) Manipulation: Self-deception involves manufacturing the past with new data upon which decisions are made. Manipulation, on the other hand, does not involve the manufacturing of new data. It rather uses the same data, but shaped with a new, ad hoc framework or ethical standard that replaces the pertinent one;

(iv) Moral Licensing: Self-deception is concerned with facts, and hence should not be conflated with moral licensing, which is about one's moral standing or moral capital;

(v) Cognitive Dissonance: Self-deception should not be confused with cognitive dissonance reduction, where cognitive dissonance reduction is understood as ‘bridging’ the painful gap between material utility and ethical utility;

(vi) Introspection Illusion: While self-deception is about covering up a suboptimal act, introspection illusion (as best illustrated in ‘hindsight bias’) occurs when no suboptimal act has been committed. In introspection illusion, the agent simply wants to look as if he knew-it-all-along.

This paper proceeds to show precisely how these six other kinds of self-distortion differ from self-deception.

1. Self-Deception vs. Deception

A major thrust of evolutionary psychology is the modelling of what appears to be irrational as, ultimately, optimal once placed in an evolutionary story.Footnote 3 The theory of self-deception advanced by William von Hippel and Robert Trivers provides a good illustration of the evolutionary psychology approach.Footnote 4, Footnote 5, Footnote 6

They maintain that humans resort to self-deception to make it easier for them to deceive others, i.e. to avoid the embarrassing biological symptoms that would betray them. Hence, for von Hippel and Trivers, self-deception ultimately arises from interpersonal deception: Agents deceive themselves in order to suppress the biological cues that may betray their intention to deceive others. According to von Hippel and Trivers, self-deception is optimal in two senses: First, when agents deceive themselves, they are able to avoid the cognitive costs of consciously mediated deception of others. Second, by deceiving themselves agents can avoid harsh retribution by claiming that they did not intentionally deceive others.

If this were the case, the person could intentionally practise saying – in front of a mirror – that he or she is not a thief, while consciously knowing that he or she is a thief. In this manner, one can control the biological symptoms so that when one tells others of one's innocence, one can deceive them. There is no need to deceive the self in order to circumvent the biological symptoms associated with lying.

At a deeper level, why should anyone incur biological symptoms such as sweating when one is deceiving others? If there is a state of anarchy and war, it is legitimate to be a deceiver. After all, it is legitimate to try to rob, exploit, or even eat each other. If biological symptoms arise from lying, the state cannot be a state of anarchy or war. The state must rather be one of cooperation, reciprocity, and mutual respect of rights. If so, the lying or deception amounts to an aberration from the implicit or explicit social contract that assures cooperation. And such aberration, given that the social contract expresses the optimal institution, amounts to suboptimality, i.e. acting contrary to one's non-myopic interest. So, the deception of others, the rock upon which von Hippel and Trivers’ theory rests, cannot be optimal when biological symptoms are involved. If such deception is suboptimal, self-deception could never be optimal – which is the thesis of this paper.Footnote 7, Footnote 8

Further, if the ultimate purpose of self-deception is primarily the deception of others, this strategy usually does not work. Spectators are usually adept at detecting self-deception. At the very least, self-deception is more likely to deceive the self than to deceive others. While the self-deception mechanism can – on some occasions – succeed in deceiving others, and hence may allow one to improve one's wellbeing through the deception of others, one is more likely to deceive the self and instead inflict self-harm. If we take, prima facie, that the benefit from deceiving others equals the harm of self-deception, and given the greater likelihood of the failure of the deception of others, one could conclude that the self-deception mechanism is, on balance, a flawed trait. Given these reasonable assumptions, the self-deception trait generates on average more harm than benefit. Hence, the self-deception trait could not, at first approximation, have been selected by the evolutionary Darwinian mechanism.
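The arithmetic behind this conclusion can be made explicit. Writing B for the benefit of successfully deceiving others, H for the harm of self-deception, and p for the probability that the deception of others succeeds (symbols introduced here for illustration), the stated assumptions – B = H, and p < 1/2 since one is more likely to deceive the self than others – yield a negative expected value for the trait:

\[
E[\text{trait}] \;=\; p\,B \;-\; (1-p)\,H \;=\; B\,(2p - 1) \;<\; 0
\quad \text{when } B = H \text{ and } p < \tfrac{1}{2}.
\]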

A further critique is that the theory of von Hippel and Trivers makes it harder to explain self-deception when the injured party is directly the future self. For instance, when agents resort to concocted stories to deceive themselves about their over-eating, over-indulgence, or under-saving, there are usually no spectators present. Specifically, the presence of spectators is not a necessary condition for self-deception – as is glaringly obvious in cases where an agent deceives their future self.

In response, von Hippel and Trivers, or their supporters, may advance the following defence: the future self – the object of deception by the present self – is analytically identical to contemporaneous other agents. As such, the future self can retaliate, through weapons such as guilt and blame, to inflict injury on the present self if the present self has cheated, i.e. acted in a manner that is inconsistent with the Pareto optimal solution. To protect itself from self-blame, the present self inserts fibs – as it does when the injured party is another person.

However, this defence assumes what it tries to refute: it must presuppose that the original act, which is injuring the future self, must be Pareto suboptimal – i.e. it deviates from rationality. Yet von Hippel and Trivers want to argue that the original act is rational and the act to cover it up, self-deception, is a further strategy to increase the likelihood of success of the original act.

Notably, D.S. Neil Van Leeuwen levels a criticism of Trivers’ theory of self-deception that appears similar to the one advanced here.Footnote 9 Van Leeuwen relies on another evolutionary theory, widely known as the spandrel theory of evolution after the spandrel metaphor from architectural design.Footnote 10 According to the spandrel approach, many phenotypic features are suboptimal. Van Leeuwen argues that self-deception is one such suboptimal phenotypic feature: self-deception is a by-product that can be wasteful, as opposed to optimal.

While the spandrel approach presents self-deception as wasteful (suboptimal), it implies that self-deception is inevitable, as is the case with the other phenotypic features modelled as spandrels. The notion of inevitability here means that the feature is the outcome of past history, where the agent has no incentive to change it even if the feature is currently wasteful. One may argue – though the argument cannot be developed here – that even when a feature is wasteful, it does not mean it is suboptimal. Stated briefly, if the agent finds it too expensive to remove a wasteful feature, the toleration of the wasteful feature is optimal.Footnote 11, Footnote 12 Thus, self-deception, if modelled as a spandrel, is ultimately optimal, which is contrary to the view proposed here.

2. Self-Deception vs. Delusion

Self-deception and delusion are essentially treated as equivalent in the literature. For instance, Bortolotti and Mameli recognise some differences between the two, but ultimately consider them as lying along a continuum, as if they belong to the same epistemic structure. Further, Alfred Mele defines self-deception as ‘motivated belief’, i.e. a belief manipulated by one's desire rather than the facts.Footnote 13, Footnote 14 But this starting point makes it difficult to distinguish self-deception from delusion. While delusion and self-deception both involve a mistaken belief, the nature of the mistaken belief in each self-distortion is radically different. When one starts with mistaken belief as the entry-point, as if there were only one kind of belief, it is natural to confuse delusion with self-deception, as Mele does. Consequently, Mele reaches an impasse, as he admits.Footnote 15

To detail, Mele's theory uses the medical cases of delusions, such as the Cotard delusion and the Capgras delusion, as the exemplars of delusion. Such cases are usually caused by head injury or other such circumstances. This confines his discussion to the formation of mistaken beliefs as the entry-point of analysis. By starting the discussion with mistaken beliefs, one is trapped: one cannot easily delineate self-deception from delusion.

If one instead starts with rational choice, it becomes easy to see the difference between self-deception and delusion. To recall, self-deception involves fibs inserted sub-consciously to hide suboptimal actions from the conscious self – where such actions violate commitments, promises, and so on, to respect the rights of others or the rights of the future self. There is a central concept in this definition: the optimal choice (whether a single act or a rule/commitment) is the outcome of the axioms of rational choice and, hence, part of one's budget constraint. That is, self-deception is about self-distortion concerning the ‘facts’ that make up the budget constraint.

In contrast, delusions are about self-distortion motivated by one's ambitions, aspirations, and goals and, hence, mainly concern the preference set. To aspire, to be positively disposed, to hold somewhat unrealistic (suboptimal) expectations of control, and to desire are, as long as these emotions are moderate, rather healthy. Such emotions can be based on self-image or on an assessment of one's character traits and abilities. As such, the aspiration and self-image contribute to one's happiness or ‘subjective well-being’, as Amir Erez et al. show.Footnote 16

Actions based on dreaming and aspiring stem from a commitment other than the one that is clearly within one's budget constraint. This commitment is about the assessment of one's ability – whereas self-ability cannot be a clear-cut fact, as are those facts which are distorted in the cases of self-deception. The commitment, which could become a case of delusion, is about one's belief about one's potential. This belief determines how earnest one should be in the pursuit of a career or a goal. A young male actor might aspire to become the next Tom Hanks or the next Brad Pitt, or a young economist might aspire to become the next Kenneth Arrow or the next Milton Friedman. Such an aspiration can be based on the belief that the achievement of fame or riches is the source of one's happiness.

Such a belief might be feasible, slightly feasible, non-feasible, greatly non-feasible, or a clear case of grandiosity. Even when the belief is utterly grandiose or unrealistic, we cannot judge it as suboptimal, because a belief about one's ability is not a clear-cut fact that can be conclusively proven correct or mistaken. The belief concerning ability is about desiring and aspiring – everyone is entitled to pursue his or her dream and, hence, one cannot disprove, as mistaken, the belief about one's ability. In contrast, we can judge succumbing to temptation as suboptimal insofar as the belief concerns a choice that is clearly not about dreams and aspirations, but rather within one's budget constraint.

To put it differently, while self-deception and delusion share a commonality, they involve a difference:

Commonality: A delusion, similar to self-deception, amounts to covering up the gap between one's reality – say, a 70-year-old actor who has not yet appeared in any successful film – and the desire, aspiration, or dream to become the next Tom Hanks. One who refuses to recognise the gap between current status and desired status, because of the painful feeling of misery, would resort to mendacity or a set of rosy tales to cover up the gap. Such an idea is exemplified in the plays of Tennessee Williams – especially A Streetcar Named Desire – with their common theme of tortured souls who slip into a world of mendacity and delusions as they try to cover up the gap between their current status and the status they desire.Footnote 17

Difference: The gap that underpins the delusion involves a dream or a desire that cannot be modelled as optimal or suboptimal. In contrast, the gap that underpins self-deception involves a commitment, such as the respect of rights of others, which is optimal or within one's budget constraint. So, the difference between self-deception and delusion is simple: In self-deception, there is a decision that can be judged as optimal or not, while in delusion, there is a desire or preference that is orthogonal to the question of optimality or rationality.

In the case of self-deception, the agent must have committed a suboptimal act – which he or she tries to cover up. In the case of delusion, the agent cannot have committed a suboptimal act – or an optimal act for that matter. There is nothing suboptimal about a 70-year-old actor, who has not had a successful film, believing that he has the potential to become the next Tom Hanks – even when we can say that such an aspiration is highly non-feasible.

Despite this difference, there is a similarity between self-deception and delusion: In both cases, whether the starting point is the highly non-feasible self-belief or the suboptimal decision, the agent turns around to manufacture fibs, i.e. appeals to unrelated facts, mis-facts, or sudden new valuations (perspectives) of the same given facts. It is here that we can say that self-delusion is non-rational. But we could not have judged the dream or aspiration itself as such – as we can the suboptimal decision in the case of self-deception.

To illustrate this, consider that there is nothing optimal or suboptimal about Don Quixote's self-belief, viz. his aspiration to dream big and become the greatest knight of his generation. At best, we can judge that the self-belief is greatly non-feasible given Don Quixote's age, resources, and historical circumstance – where the values of chivalry have vanished with the collapse of feudalism in Spain. Further, as long as Don Quixote acknowledges that his self-belief is highly non-feasible, i.e. acknowledges the gap, spectators would judge him as a hopelessly romantic soul. However, once Don Quixote starts to invent facts – taking windmills for giants and imagining great feats of valour – to cover up the gap, he becomes irrational in the sense of being self-delusional. But Don Quixote would not be judged as engaged in self-deception as defined in this paper.

So, delusions and self-deception share the common feature of covering up the facts, which is irrational. But they diverge with regard to whether the objective of the cover up is rational or cannot be judged along this line.

The difference arises from how a self-belief that is highly non-feasible, unlike succumbing to temptation, cannot be modelled as suboptimal. The aspired self-belief or the imagined self-ability cannot be stated in optimal terms for a simple reason: One's ability keeps on developing as one tries to find out what it is. That is, as one acts according to one's image of ability, such as to become a great actor, one's acting ability evolves as a consequence of testing one's belief about that ability. So, the agent can never state what self-ability is in an optimal fashion. The mere act of stating it, in light of evidence, makes the ability develop as the evidence is collected.

So, every person has an image of ability, which cannot be confirmed or disconfirmed. Such self-image acts as a ‘healthy delusion’ insofar as the agent is aware of the gap and that such gap cannot be covered up. That is, the rosy image of one's ability is not delusional as long as one does not pretend that reality is identical to the imagined one. The rosy image is not irrational as long as one recognises that the projected or desired image is a fictional dose of hope.

The distinction between self-deception and delusions can shed light on confusions on the part of some philosophers. For instance, Mele dismisses the irrational aspects of self-deceptionFootnote 18, Footnote 19 – aspects identified by Donald DavidsonFootnote 20, Footnote 21 and David Pears.Footnote 22, Footnote 23 But Mele actually has in mind the construction of a motivational self-image, i.e. what one would aspire to be. As such, Mele describes actions motivated by aspiration, desire, or goal-seeking as an orientation that generates ‘positive misinterpretation’. This is similar to the view of Shelley Taylor, who regards a motivational self-image as healthy for the mind.Footnote 24 But such a self-image, which can be seen as a ‘healthy delusion’, should not be confused with self-deception as defined here. Mele and Taylor are discussing the gap that exists between actual achievement and aspired achievement – which differs from the gap that precedes self-deception. In self-deception, the first decision involves creating a gap between behaviour and what is deemed the optimal behaviour. And aspired achievement cannot, as already noted, be ultimately expressed as an optimum belief.

Likewise, economists – such as Robert OxobyFootnote 25, Footnote 26 and Roland Bénabou and Jean TiroleFootnote 27 – confuse matters as a result of their failure to distinguish self-deception from delusion. For instance, when Oxoby notes the gap, he interprets it as the difference between actual status and desired status. So, Oxoby is discussing delusion. Recent work that characterises cognitive dissonance as relating to the gap between actual status and desired status is – ultimately – about delusional images of the self.Footnote 28 The issue is not about terminology. Rather, it is about distinguishing delusion from self-deception.

For Oxoby, to bridge the gap, people can work hard to raise their status to the desired level. Alternatively, Oxoby argues, people can opt out of the social metric of status entirely and join the ‘underclass’. The underclass consists of people who no longer regard status as a desired goal. They totally ditch work ethics and become engaged in welfare dependence, lethargy, criminality, and substance dependence.

There is a problem with Oxoby's analysis, though. To bridge the gap, the agent can also lower the desired status, the ambition, or the aspiration level. That is, the agent can choose a lower peer group, where there is less pressure to perform and achieve. This highlights a radical difference between self-deception and the choice to join the underclass. In self-deception, the belief cannot be changed, and hence the agent injects wrong information, i.e. resorts to fibs or mis-facts; this gives rise to new output that justifies the behaviour. When one chooses the underclass, the agent does not lower the desired status; rather, the agent simply gives up on all desire by rejecting the work ethic.

Oxoby's model is useful for modelling the underclass, i.e. people who reject aspiration and any metric used by the high and low classes to aspire and achieve. That is, people who opt for the underclass – to put it in terms of this paper – are people who reject even ‘healthy delusions’, i.e. the dreams to which one aspires while aware of the gap between dream and reality. Put differently, people who opt to become part of the underclass simply give up on enduring the usual human condition: living with the normal pain that arises from the gap between one's actual condition and one's aspired condition.

So, Oxoby's model cannot be about the gap occasioned by the first decision, i.e. the decision that precedes the second one concerning self-deception. The difference between the two gaps, as already suggested, ultimately arises from the distinction between two kinds of commitments: obligatory commitment – such as the prohibition against cruelty to animals, prisoners, children, and so on – and goal commitment – such as career ambition and desired status. Given that the focus here is on self-deception, the concern is with the violation of obligatory commitment and how the agent tries to cover it up. The violation of goal commitment, i.e. the choice of underclass where aspiration is rejected, cannot be part of the study of self-deception.

Bénabou and TiroleFootnote 29 and H. Peyton YoungFootnote 30 provide models that are mainly concerned with the question of identity: how does the self perceive its worth in terms of integrity, of not compromising on a sacred issue, or of not being ready to violate a taboo? But the concern with identity must ultimately be about forward-looking calculations, i.e. goal commitment. Even when identity, as these economists note, is grounded in the past, it is ultimately about forward-looking calculations. One who has kept promises in the past, i.e. has integrity, can make new promises with confidence, which others can detect in most cases.

That is, if one had maintained past promises when he or she could have cheated, one can enter new contracts with ease because one can keep promises even when there are opportunities for cheating. New contracts and promises are needed if one wants to pursue a career. As such, the identity deposited from the past feeds into goal commitment. And to pursue goal commitment, the agent needs qualified ‘delusions’ as already noted. While the pursuit of goal commitments is crucial for any agent, and such pursuit requires some self-concepts that are partially ‘delusional’,Footnote 31, Footnote 32 such pursuits cannot be judged as rational or irrational – and hence the delusions, even when serious, are not about self-deception.

3. Self-Deception vs. Manipulation

Manipulation differs slightly from self-deception. To recall, self-deception is about changing the hard facts of the matter, or how they are interpreted, in order to make a suboptimal choice seem optimal to the self. Manipulation rather involves the introduction of an unrelated ethical standard into the justification of the suboptimal decision – while others, including the self, are kept unaware that the new ethical standard is unrelated. For instance, a dispute may arise between friends, or even husband and wife, concerning an agreement or a particular responsibility. One may suddenly appeal to another standard, viz. communal solidarity or love, as the relevant standard. And thus, one would be excused for failing in one's responsibility, while the other has the communal duty to accommodate such failure.Footnote 33

4. Self-Deception vs. Moral Licensing

Social psychologists have recently started to discuss the phenomenon of ‘self-licensing’, ‘moral credentials’, or ‘moral licensing’.Footnote 34, Footnote 35, Footnote 36 Anna Merritt et al. provide a succinct and useful definition of moral licensing:

[M]oral self-licensing … occurs when past moral behavior makes people more likely to do potentially immoral things without worrying about feeling or appearing immoral.Footnote 37

In this definition, there is no criterion for distinguishing the moral from the immoral act.Footnote 38 In any case, assuming we know the difference, moral licensing is the use of one's moral virtue, or what can be called ‘moral capital’, as accumulated from past actions, to rationalise or justify an unrelated action. Such use serves as a self-excuse to commit a suboptimal (unethical) act. For example, one could have volunteered at the local hospital in the previous year, and such an act gives the agent a moral license to embezzle funds or to take bribes.

In the definition above, one first commits a moral act, and then uses it as a cover or as an excuse to commit an immoral or suboptimal act. In self-deception, one first does the immoral or suboptimal act and then tries to cover it up with the insertion of mis-facts.

But a deeper difference between the two – as revealed in the definition above – is that the use of a past moral act as an excuse for transgression amounts to insincerity or acting in bad faith. One can be insincere while fully aware of his or her insincerity. Thus, moral licensing does not necessarily involve self-deception. The transgressor, though, usually covers the insincerity with self-deception. Nonetheless, the two phenomena are distinct.

5. Self-Deception vs. Cognitive Dissonance

5.1 Cognitive Dissonance and Complacency

Akerlof and Dickens motivate the introduction of cognitive dissonance into the study of the behaviour of workers with an anecdote.Footnote 39 Workers in dangerous jobs or workers who handle toxic material are complacent: they work without protective gear. They do so as a result of a fabricated belief that the environment is not dangerous, which they adopt to suppress their true belief, viz. that the environment is dangerous. This fabricated belief amounts to covering or obfuscating a gap between ‘suboptimal behaviour’, i.e. working without protective gear, and ‘true belief’, i.e. that the environment is dangerous. The workers avoid the contradiction by arbitrarily changing their belief about the environment. They adopt a belief – viz. ‘this environment is not dangerous’ – in order to ease the ‘dissonance’ between the belief and the actual choice.

This is consistent with the original meaning of the term ‘cognitive dissonance’ in the psychological literatureFootnote 40 , Footnote 41 – which should not be confused with more recent broad usage discussed above. A good example of the original meaning in the management literature is the concept of ‘emotional dissonance’. As Andrew Morris and Daniel Feldman amply document, the requirement to show emotions according to organisational rules involves exhaustive emotional labour.Footnote 42 Such labour may engender emotional dissonance when the employee has to express organisationally desired emotions when the employee does not really feel them.

Another example of the original meaning of cognitive dissonance, which is little noted, is ‘derogating the victim’ or ‘blaming the victim’.Footnote 43 If people suffer from misfortune, whether caused by nature or by unjust rulers, they must deserve it. Such blaming can be traced to a bias which Lerner calls the ‘just world’ theory.Footnote 44 People ultimately uphold an image of themselves as fair and ready to stand for equity. Suppose that, in the organisation at which they work, they witness the bosses abusing a minority group. The ethical people feel that they have to take a stand. But this may risk their jobs or their prospects of advancement. So, they adopt a fabricated belief that the performance of the minority group is mediocre – and hence that the group is treated fairly. Here, the ‘ethical’ people fabricate a new belief to obfuscate the gap between their complacent (suboptimal) behaviour and their ethical stand.

While the obfuscation of the gap between suboptimal behaviour and true belief (cognitive dissonance) seems similar to self-deception, it is not. Cognitive dissonance is not necessarily about ex post justification, while self-deception is. Further, the fabricated belief in cognitive dissonance – e.g. the environment is not dangerous, or the minority group deserves the unequal treatment – has tremendous consequences. Chiefly, it leads to self-destructive complacency: workers fail to wear protective gear; a corporate culture of racism and inequitable treatment of minorities takes hold. How can one explain cognitive dissonance if it leads to self-destruction?

Akerlof and Dickens provide a model that depends on three premises: (1) people prefer to believe that their work is safe; (2) people have partial control over their beliefs: they can manipulate information in order to support the ‘desired’ fabricated belief;Footnote 45 (3) the belief, once chosen, persists over time. Workers in dangerous jobs select the belief that they are safe in order to avoid the anxiety and worry associated with the evidential ‘true’ belief. So, as long as the marginal benefit exceeds the marginal cost of possible injury, the self-distortion is rational. That is, workers behave rationally when they fool themselves because, in effect, such self-distortion frees them from anxiety and worry.
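The logic of this belief choice can be sketched in a few lines; the functional forms and numbers below are illustrative assumptions, not Akerlof and Dickens' actual specification:

```python
# Stylised sketch of the Akerlof/Dickens logic: the worker 'chooses' a belief.
# All functional forms and numbers are illustrative assumptions.

ANXIETY_COST = 5.0            # disutility of holding the true belief 'the job is dangerous'
INJURY_COST = 20.0            # disutility of an injury
P_INJURY_PRECAUTIONS = 0.10   # injury probability under the true belief (precautions taken)
P_INJURY_COMPLACENT = 0.30    # injury probability under the fabricated belief (no precautions)

def utility(belief: str) -> float:
    if belief == 'dangerous':  # true belief: anxiety is felt, precautions are taken
        return -ANXIETY_COST - P_INJURY_PRECAUTIONS * INJURY_COST
    # fabricated belief: no anxiety, no precautions
    return -P_INJURY_COMPLACENT * INJURY_COST

# Premises (2) and (3): the worker selects the belief with the higher utility,
# and the chosen belief then persists.
chosen = max(['dangerous', 'safe'], key=utility)
print(chosen, utility('dangerous'), utility('safe'))  # -> safe -7.0 -6.0
```

With these numbers the fabricated belief ‘wins’ (−6.0 against −7.0), so the self-distortion comes out as rational – exactly the implication questioned in the next paragraph.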

Such an approach would be plausible if the workers had no other options: they could not quit their jobs and had no access to precautionary measures. But we know that they have access to precautionary measures – and they keep them in their lockers unused. So, the anxiety and worry that the workers try to avoid can be avoided in a different and even more effective way. The problem with Akerlof/Dickens’ model is that it takes the anxiety or worry as a datum, i.e. as a disutility similar to how effort or injury is a disutility. Anxiety or worry, though, cannot be a datum because it is supposed to prompt the person to formulate a rational belief, i.e. one which entails taking the necessary precautions, such as wearing protective facial covers and helmets, in order to minimise injury. So, even if a paternalist government passes a safety regulation to force workers to wear facial covers or helmets, such a regulation would reduce their wellbeing, given the model of Akerlof and Dickens. Of course, Akerlof and Dickens recommend such regulation, but this cannot flow from a model that shows self-distortion to cover the gap (cognitive dissonance) to be rational.

Similar to Akerlof/Dickens’ model, Rabin takes the gap between behaviour and belief as given.Footnote 46 In contrast, as argued here, the gap rather expresses suboptimal choice, i.e. succumbing to temptation. In Rabin's model, the belief is moral – instead of simply being a belief about the safety of the workplace. But this difference is questionable. It ultimately rests on a supposed difference between some ‘ethical utility’ and optimal ‘material utility’. If we see that ethical utility is optimal, then material utility is simply a lure, a temptation to deviate from the optimal (ethical) choice.Footnote 47

Rabin's anecdotal motivating story concerns the gap between wearing furs and the moral belief against cruelty to animals. This gap gives rise to disutility in the sense of uneasiness. As in Akerlof/Dickens’ model, people cover it with the belief that the harvesting of furs does not amount to cruelty to animals. Rabin does not characterise the gap between behaviour and moral belief as irrational, i.e. as arising from the temptation to violate one's optimal (moral) belief. Therefore, it is natural for him to conclude that the self-distortion aimed at reducing the disutility of pain must be rational. Similar to Leon Festinger (1957, 1959), who coined the term ‘cognitive dissonance’, Rabin characterises ‘direct’ and ‘indirect’ effects of the gap. For the direct effect, the greater the disutility arising from the gap, the greater the tendency of people to avoid wearing furs. But he is more interested in the perverse indirect effect: the greater the disutility, the more people behave immorally, because they can resort easily to the self-distortion known as bridging the gap of cognitive dissonance. Such self-distortion may increase immoral activity, which can be further accentuated by social interaction.
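To fix ideas, the direct and indirect effects can be rendered in a simple reduced form; the notation is introduced here for illustration and is not Rabin's own:

\[
U(a,\beta) \;=\; m(a) \;-\; d\cdot g(a,\beta) \;-\; c(\beta),
\]

where m(a) is the material utility of action a (e.g. wearing furs), β the held belief, g(a, β) the dissonance between the action and the belief, d the intensity of the disutility, and c(β) the cost of distorting the belief away from the true one. The direct effect: a higher d makes the immoral action costlier under the true belief. The indirect effect: a higher d also raises the return to adopting a distorted β for which g(a, β) ≈ 0, which can increase immoral behaviour on net.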

Similar to Akerlof/Dickens and Rabin, Konow proposes a utility function reflecting the power of the agent to allocate resources between the self and others.Footnote 48 The utility function includes ‘material utility’ (i.e. the usual utility free from fairness principles) and also a component concerning fairness. Konow's model takes the taste for fairness as a given taste that populates the utility function along with ‘material utility’. The agent would want to increase his or her own ‘material utility’, which comes into conflict with the fairness principle. The agent would have, as in Rabin's model, either to modify behaviour to accommodate the principle or, via bridging the gap (cognitive dissonance), to assert a convenient belief that the behaviour does not violate the fairness principle (a stylised sketch of such a utility function follows the hypotheses below). Konow, similar to the models of Akerlof/Dickens and Rabin, takes the conflict or gap as given and, hence, treats the self-distortion known as cognitive dissonance as covering the gap up. He conducts a dictator game experiment to find out why dictators may not share according to the fairness principle. There are two possible hypotheses:

(1) people have different conceptions or definitions of fairness and, hence, favour material utility differently;

(2) people face different opportunities that afford them wide-ranging possibilities for covering up the cognitive dissonance.

Konow finds greater support for the latter hypothesis. He reasons that the wide dispersion of experimental findings about dictator behaviour in dictator games probably arises from different instructions or minute differences in experimental design. Such differences provide dictators with diverse opportunities to resort to self-distortion to cover the cognitive dissonance.
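As promised above, here is a stylised version of such a utility function – a sketch under assumed functional forms and numbers, not Konow's own specification – showing how a conveniently distorted fairness belief lets a dictator keep more without an apparent violation of the principle:

```python
# Stylised Konow-type utility: material utility minus a fairness penalty that
# shrinks if the dictator distorts the belief about what the fair share is.
# The quadratic form and all numbers are illustrative assumptions.

def dictator_utility(kept: float, believed_fair_keep: float,
                     theta: float = 2.0) -> float:
    material = kept
    # The penalty grows with the gap between what is kept and what the
    # dictator *believes* is the fair amount to keep.
    fairness_penalty = theta * max(0.0, kept - believed_fair_keep) ** 2
    return material - fairness_penalty

# Under the pertinent fairness belief (equal split of a pie of 10), keeping 8
# is costly: 8 - 2*(8-5)^2 = -10.
print(dictator_utility(kept=8.0, believed_fair_keep=5.0))  # -> -10.0

# With a conveniently distorted belief ('keeping 8 is fair'), the penalty
# vanishes: the gap is 'covered' rather than the behaviour changed.
print(dictator_utility(kept=8.0, believed_fair_keep=8.0))  # -> 8.0
```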

The findings of Jason Dana et al.Footnote 49 and Larson and CapraFootnote 50 can be interpreted in support of Konow's thesis. Dana et al. confirm the standard finding that dictators behave equitably when matched in binary fashion with recipients. But when the recipients' payoffs are uncertain, the same dictators deviate from the equitable principle – even though it is costless for dictators to remove the ambiguity and find out the recipients’ payoffs. As Konow and Dana et al. suggest, the ambiguity of payoffs provides moral wriggle room: it allows dictators to insert fibs more easily, i.e. to resort to the self-distortion of covering cognitive dissonance, in order to deviate from the equitable distribution.

However, the line followed by Konow and Dana et al. implies that self-distortion to cover cognitive dissonance is rational: it allows the agent to gain more ‘material utility’ without sacrificing the self-image that the agent is fair. This is a direct outcome of Konow's model that takes the gap as given, i.e. as if the equity principle is unrelated to ‘material utility’ – and hence, their conflict is a datum.

In contrast, this paper argues that self-deception arises from the sub-conscious assessment by the agent that the behaviour is ex ante suboptimal. In case the agent holds a prior belief – ethical principle or otherwise – that prohibits the behaviour, this accentuates even further the agent's assessment that the said behaviour is ex ante suboptimal. So, the literature in economics concerning the gap (cognitive dissonance) between behaviour and belief, similar to the literature in psychology, starts from the wrong entry-point. It starts with the supposition that the behaviour is ex ante optimal. Given this wrong entry-point, and in order to come to grips with the gap, it has to regard the ethical belief as extra-rational – or else the belief cannot be explained endogenously via a rational choice model.

But if the entry-point were that the behaviour is ex ante optimal, why would the agent hold such an ethical belief to start with – to the point of resorting to a fabricated belief to cover up the gap? Instead, the agent would remove or delete the previously held belief in light of the ex ante rational behaviour. This is what Bayesian rational agents do, or proclaim to do, and there is no need for the gymnastics of resorting to a cover-up. That is, if ex ante behaviour is rational, there is no need for a literature on a special phenomenon called ‘cognitive dissonance’.

But if we start with the fact that the phenomenon under discussion, i.e. cognitive dissonance, is about self-distortion, it behooves researchers to choose an alternative entry-point. Namely, researchers should start with the view that the said behaviour is ex ante suboptimal. They should then proceed to explain why people tend to deny that they have succumbed to temptation and construct instead elaborate mental gymnastics to present the behaviour as optimal – which is how this paper defines self-deception.

5.2 Cognitive Dissonance and Ideological Hegemony

Marxists usually face a puzzle: Why does the proletariat class in the capitalist system broadly adopt a belief that is basically the ideology of its oppressor, i.e. the ideology of the capitalist class? Such an ideology is clearly contrary to the ‘true belief’ of the proletariat class. Friedrich Engels gave the first answer:Footnote 51 The proletariat that raises the flag of its enemy must be suffering from ‘false consciousness’: The proletariat class must be obfuscating its true belief when it ‘accepts’ the ideology of the capitalist class.

The reason might be similar to the fabrication of belief to avoid the agony of cognitive dissonance. After all, the proletariat has to work, and its labour supports the capitalist class. So, to avoid the cognitive dissonance between suboptimal behaviour and true belief, the proletariat adopts the ideology of the oppressor – which amounts to what Antonio Gramsci dubs the ‘manufacture of consent’.Footnote 52 The proletariat class consents to its domination not out of true belief, but out of such manufacture or fabrication.

The idea of false consciousness, or the hegemony of the ideology of the powerful, is not limited to Marxist thinkers. The classical liberal thinker Timur KuranFootnote 53 has coined the notion of ‘preference falsification’ to denote a similar phenomenon.Footnote 54 Kuran tries to solve the following puzzle: if people accept the principles of a liberal society with minimal government, why do they support affirmative action and the welfare state, i.e. institutions that promote government intervention? For Kuran, the institutions that underpin the welfare state are obviously imposed on the population; they cannot correspond to the true belief of the population in a classical liberal setting. For Kuran, the population must be involved in suppressing its true preferences.

Yet why, in a democracy, would the population suppress – what he calls ‘falsify’ – its true preferences? The population, even in a democracy, is under social pressure, viz. the pressure to conform arising from the anxiety of appearing racist or politically incorrect. Kuran's mechanism, viz. the hiding of true beliefs, works because people care about their persona or public image.

People, in Kuran's world, can be fully aware of the tension between their overt and covert preferences. So, Kuran's mechanism does not exactly match cognitive dissonance, i.e. the fabrication of beliefs to obfuscate true beliefs.

However, it is a short step from Kuran's preference falsification to Engels’ false consciousness. If people confront Kuran and tell him that they truly support affirmative action and the welfare state, Kuran would have to resort to preference falsification if he intends to keep the liberal principles as the only true principles people should adopt. Here is the short step that Kuran would take: he would have only to add that people find it stressful to behave contrary to their private principle (ethical stand). So, they would cover the gap with the fabrication they adopt, viz. the false belief that affirmative action and the welfare state are preferable to liberal principles.

6. Self-Deception vs. Introspection Illusion

Psychologists have long noted the flexibility of one's subconscious processes. One can easily manipulate subconscious processes to generate desired beliefs. Psychologists have documented cases where people ‘report’ more about their subconscious processes than it is possible for them to have access to. These cases have been discussed in the context of cognitive dissonance,Footnote 55 illusion of introspection,Footnote 56, Footnote 57, Footnote 58 illusion of will,Footnote 59, Footnote 60 choice blindness,Footnote 61 and the manipulation of beliefs to avoid the terror of mortality.Footnote 62

For instance, people report why they prefer picture A to picture B when they have been deceived or manipulated into thinking that they had chosen picture A, when in fact they had earlier chosen picture B. For another instance, take an agent who professes that X will win, and Y will lose, a game. After the game, if Y wins, the agent would tend, in significant cases, to maintain that he or she had actually predicted all along that Y would win. Such introspection illusion seems to be the root of hindsight bias, known as ‘knew-it-all-along’.Footnote 63 People tend to rearrange what they had stated in the past in order to appear as if they had predicted the current state all along.Footnote 64

One should not confuse introspection illusion with cognitive dissonance. While cognitive dissonance is about fabricated beliefs to justify one's action, introspection illusion is about one's distorted memory of past events where such events are conveniently rearranged.

Also, introspection illusion is not the same as self-deception. When the person predicted who would win the game, the person was ex ante rational. The fact that the person, ex post, manipulated his or her memory does not stem from trying to appear or feel ‘rational’. Rather, the agent seems to derive a special utility from the sense of being a good predictor of what the agent realises to be random shocks. It is similar to the pleasure associated with the ‘hot hand fallacy’, i.e. the pleasure of predicting what one knows to be non-predictable.Footnote 65

In short, introspection illusion amounts to the insertion of fibs into the past, similar to self-deception. Yet unlike self-deception, the fabrication of the past is not occasioned by an ex ante suboptimal decision.

7. Conclusion

This paper commenced with the story of Adam and Eve, which illustrates that people tell themselves excuses in order to ease the pain of self-blame. Such stories highlight that self-deception is an act whose ultimate audience is the self rather than spectators. But how could the self ever deceive the self? It does so in two acts: The agent must know that he or she is choosing a suboptimal option – and, in turn, the agent tries to cover it up with a mis-fact or an unwarranted interpretation of the available facts. But to defend such a theory, we must first answer how rational agents could undertake a suboptimal act.

This is possible once we distinguish two senses of rationality – the ‘decision sense’ vs. the ‘command sense’ – where the two senses can be delineated because of weakness of will (temptations). The decision sense amounts to abiding by the axioms of rational choice theory that ensure a unique optimum or multiple optima given the stimulus. The command sense guarantees that the agent, in the face of temptations, actually carries out the action that corresponds to the optimal choice. So, the command sense of rationality presumes the decision sense, but not vice versa.

This distinction allows us to provide the necessary conditions for self-deception. Self-deception is possible only if the agent's choice can possibly deviate, as a result of temptations, from the optimal, as the agent defines the optimal. If the agent succumbs to temptation, the agent may try to cover up the suboptimal act via self-deception. If people admit that their choices were suboptimal, i.e. the choices were ex ante suboptimal, then self-deception is unnecessary. But people who succumb to temptations may additionally deny that they have committed suboptimal actions in order to avoid the pain of self-blame, where the blame undermines image-management or the agent's self-concept.

In this light, this paper clarifies many confusions in the philosophical, psychological, and economics literature. First, self-deception is concerned with clear-cut facts, while delusion involves aspiration – actions motivated by visionary drives and imagination. Second, self-deception entails the manufacture of new data or inputs in decision-making, while manipulation does not involve new data; manipulation rather offers a different interpretation or framework for the same data. Third, self-deception is concerned with facts, and hence should not be conflated with moral licensing, which is about one's moral standing or moral capital. Fourth, self-deception should not be confused with cognitive dissonance reduction, where cognitive dissonance reduction strictly speaking is about ‘bridging’ the painful gap between material utility (actual action) and ethical utility (commitment). Such bridging may involve self-deception, but this need not necessarily be so. Bridging could involve an appeal to ‘emergency’, ‘necessity of survival’, or a manipulation that permits pragmatism. Fifth, self-deception, contrary to work in evolutionary psychology, is not a roundabout technique to deceive others. Sixth, and finally, self-deception differs slightly from introspection illusion such as hindsight bias. Introspection illusion is driven more by the desire to appear to have known-it-all-along, while self-deception is more specific: It is employed only to make an ex ante suboptimal decision appear ex ante optimal.

Footnotes

*

This paper benefited from the comments of Robert Trivers, William von Hippel, Gerd Gigerenzer, Robin E. Pope, Richard Posner, Haiou Zhou, Jonathan Baron, Philip T. Dunwoody, Andreas Ortmann, Steven Gardner, Athena Aktipis, and anonymous referees. It also benefited from the research assistance of Michael Dunstan and Peter Lambert. The usual caveat applies.

References

1 Caldwell, C., ‘Identity, Self-Awareness, and Self-Deception: Ethical Implications for Leaders and Organizations’, Journal of Business Ethics 90(Supplement) (2009), 393406 CrossRefGoogle Scholar.

2 Khalil, E.L., ‘Self-Deception as a Weightless Mask’, Facta Universitatis, Series: Philosophy, Sociology, Psychology, and History 15 (2016), 111 Google Scholar.

3 Cosmides, L. and Tooby, J., ‘Better than Rational: Evolutionary Psychology and the Invisible Hand’, American Economic Review, Papers and Proceedings 84 (1994), 327332 Google Scholar.

4 von Hippel, W. and Trivers, R., ‘The Evolution and Psychology of Self-Deception’, Behavioral and Brain Sciences 34 (2011a), 116 CrossRefGoogle ScholarPubMed.

5 von Hippel, W. and Trivers, R., ‘Reflections on Self-Deception’, Behavioral and Brain Sciences 34 (2011b), 4156 CrossRefGoogle Scholar.

6 Trivers, R., ‘The Elements of a Scientific Theory of Self-Deception’, Annals of the New York Academy of Sciences 907 (2000), 114131 CrossRefGoogle ScholarPubMed.

7 Khalil, E.L., ‘The Weightless Hat: Is Self-Deception Optimal?’, Behavioral and Brain Sciences 34 (2011), 3031 CrossRefGoogle Scholar. (A commentary on W. von Hippel and R. Trivers, ‘The Evolution And Psychology Of Self-Deception’).

8 Khalil, E.L., ‘Self-Deception as a Weightless Mask’, Facta Universitatis, Series: Philosophy, Sociology, Psychology, and History 15 (2016), 111 Google Scholar.

9 Van Leeuwen, D.S.N., ‘The Spandrels of Self-Deception: Prospects for a Biological Theory of a Mental Phenomenon’, Philosophical Psychology 20 (2007), 329348 CrossRefGoogle Scholar.

10 Gould, S.J. and Lewontin, R.C., ‘The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme’, Proceedings of Royal Society, London B, 205 (1979), 581–59CrossRefGoogle Scholar.

11 Marciano, A. and Khalil, E.L., ‘Optimization, Path Dependence and the Law: Can Judges Promote Efficiency?’, International Review of Law and Economics 32 (2012), 7282 CrossRefGoogle Scholar.

12 Khalil, E.L., ‘Lock-in Institutions and Efficiency’, Journal of Economic Behavior and Organization 88 (2013), 2736 CrossRefGoogle Scholar.

13 Mele, A., Self-Deception Unmasked (Princeton: Princeton University Press, 2000)Google Scholar.

14 Mele, A., Irrationality: An Essay on Akrasia, Self-Deception, Self-Control (Oxford: Oxford University Press, 1987)Google Scholar.

15 Mele, A., ‘Self-Deception and Delusions’, European Journal of Analytic Philosophy (EUJAP) , 2 (2006), 109124 Google Scholar.

16 Erez, A., Johnson, D.E., and Judge, T.A., ‘Self-Deception as a Mediator of the Relationship between Dispositions and Subjective Well-Being’, Personality and Individual Differences 19 (1995), 597–61CrossRefGoogle Scholar.

17 Khalil, E.L., ‘Symbolic Products: Prestige, Pride and Identity Goods’, Theory and Decision 49 (2000), 5377 CrossRefGoogle Scholar.

18 Op. cit. note 13.

19 Op. cit. note 14.

20 Davidson, D., ‘Deception and Division’, in Elster, J. (ed.), The Multiple Self (Cambridge: Cambridge University Press, 1986), 79–92.

21 Davidson, D., ‘Who Is Fooled?’, in Dupuy, J.P. (ed.), Self-Deception and Paradoxes of Rationality (Stanford: CSLI Publications, 1998).

22 Pears, D., Motivated Irrationality (Oxford: Clarendon Press, 1984).

23 Pears, D., ‘The Goals and Strategies of Self-Deception’, in Elster, J. (ed.), The Multiple Self (Cambridge: Cambridge University Press, 1986).

24 Taylor, S.E., Positive Illusions: Creative Self-Deception and the Healthy Mind (New York: Basic Books, 1989).

25 Oxoby, R.J., ‘Attitudes and Allocations: Status, Cognitive Dissonance, and the Manipulation of Attitudes’, Journal of Economic Behavior and Organization 52 (2003), 365–385.

26 Oxoby, R.J., ‘Cognitive Dissonance, Status, and the Growth of the Underclass’, Economic Journal 114 (2004), 727–749.

27 Bénabou, R. and Tirole, J., ‘Identity, Morals, and Taboos: Beliefs as Assets’, Quarterly Journal of Economics 126 (2011), 805–855.

28 Aronson, E., The Social Animal (San Francisco: Freeman Press, 1994).

29 Op. cit. note 27.

30 Young, H.P., ‘Self-Knowledge and Self-Deception’, unpublished (2007), University of Oxford.

31 Aktipis, C.A., ‘An Evolutionary Perspective on Consciousness: The Role of Emotion, Theory of Mind and Self-Deception’, The Harvard Brain 7 (2000), 29–34.

32 Kurzban, R. and Aktipis, C.A., ‘Modularity and the Social Mind: Are Psychologists Too Selfish?’, Personality and Social Psychology Review 11 (2007), 131–149.

33 Khalil, E.L. and Marciano, A., ‘A Theory of Tasteful and Distasteful Transactions’, Kyklos (2017), forthcoming.

34 Sachdeva, S., Iliev, R., and Medin, D.L., ‘Sinning Saints and Saintly Sinners: The Paradox of Moral Self-Regulation’, Psychological Science 20 (2009), 523–528.

35 Merritt, A., Effron, D.A., and Monin, B., ‘Moral Self-Licensing: When Being Good Frees Us to Be Bad’, Social and Personality Psychology Compass 4/5 (2010), 344–357.

36 Khan, U. and Dhar, R., ‘Licensing Effect in Consumer Choice’, Journal of Marketing Research 43 (2006), 259–266.

37 Op. cit. note 35, p. 344.

38 The literature on moral licensing, although nascent and limited, usually fails to define what counts as a moral act and what counts as an immoral one. As a result, it tends to conflate moral licensing with a distinct phenomenon, viz. equilibrating the marginal utilities of different acts. People substitute among different actions at the margin in order to maximise an overall objective function. For instance, one may be exceptionally benevolent to others on one day, which crowds out benevolent actions on the following day. Here, the agent is not using moral credentials as a license to act badly. The agent is rather allocating scarce resources between two competing preferences: the well-being of others and the well-being of the self. (A minimal formal sketch of this equilibration is given below.)
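To make this equilibration concrete, here is a minimal formal sketch under assumed functional forms; the utility functions U and V, the allocation variables b and s, and the resource budget T are illustrative and not drawn from the moral-licensing literature. Suppose the agent divides a fixed resource T between benevolence b and self-regarding consumption s:

\[ \max_{b,\,s}\; U(b) + V(s) \quad \text{subject to} \quad b + s = T, \]

which, for increasing and strictly concave U and V, yields the familiar first-order condition

\[ U'(b^{*}) = V'(s^{*}). \]

On this reading, an unusually benevolent day (a large b) lowers the marginal utility U′(b), so the optimum shifts resources towards s on the following day. The subsequent decline in benevolence reflects equilibration at the margin, not a license earned by prior good deeds.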

39 Akerlof, G.A. and Dickens, W.T., ‘The Economic Consequences of Cognitive Dissonance’, American Economic Review 72 (1982), 307–319.

40 Festinger, L., A Theory of Cognitive Dissonance (Stanford, CA: Stanford University Press, 1957).

41 Festinger, L., ‘Some Attitudinal Consequences of Forced Decisions’, Acta Psychologica 15 (1959), 389–390.

42 Morris, J.A. and Feldman, D.C., ‘The Dimensions, Antecedents, and Consequences of Emotional Labor’, Academy of Management Review 21 (1996), 986–1010.

43 Lerner, M.J. and Simmons, C.H., ‘Observer's Reaction to the “Innocent Victim”: Compassion or Rejection?’, Journal of Personality and Social Psychology 4 (1966), 203–210.

44 Lerner, M.J., The Belief in a Just World: A Fundamental Delusion (New York: Plenum Press, 1980).

45 Akerlof, G.A., ‘The Economics of Illusion’, Economics & Politics 1 (1989), 1–15.

46 Rabin, M., ‘Cognitive Dissonance and Social Change’, Journal of Economic Behavior and Organization 23 (1994), 177–194.

47 Khalil, E.L., ‘Temptations as Impulsivity: How Far are Regret and the Allais Paradox from Shoplifting?’, Economic Modelling 51 (2015), 551–559.

48 Konow, J., ‘Fair Shares: Accountability and Cognitive Dissonance in Allocation Decisions’, American Economic Review 90 (2000), 1072–1091.

49 Dana, J., Weber, R., and Kuang, J., ‘Exploiting Moral Wiggle Room: Experiments Demonstrating an Illusory Preference for Fairness’, Economic Theory 33 (2007), 67–80.

50 Larson, T. and Capra, C.M., ‘Exploiting Moral Wiggle Room: Illusory Preference for Fairness? A Comment’, Judgment and Decision Making 4 (2009), 467–474.

51 Engels, F., ‘Letter to Franz Mehring (1893)’, in Marx and Engels Correspondence (New York: International Publishers, 1968).

52 Gramsci, A., Selections from the Prison Notebooks of Antonio Gramsci (New York: International Publishers, 1971).

53 Kuran, T., Private Truths, Public Lies: The Social Consequences of Preference Falsification (Cambridge, MA: Harvard University Press, 1995).

54 Khalil, E.L., ‘Review: Timur Kuran's Private Truths, Public Lies: The Social Consequences of Preference Falsification’, Southern Economic Journal 63 (1996), 269–270.

55 Op. cit. note 28.

56 Nisbett, R.E. and Wilson, T.D., ‘Telling More Than We Can Know: Verbal Reports on Mental Processes’, Psychological Review 84 (1977), 231–259.

57 Wilson, T.D., Strangers to Ourselves: Discovering the Adaptive Unconscious (Cambridge, MA: Belknap Press, 2002).

58 Wilson, T.D. and Dunn, E.W., ‘Self-Knowledge: Its Limits, Value, and Potential for Improvement’, Annual Review of Psychology 55 (2004), 493–518.

59 Wegner, D.M., ‘Self is Magic’, in Baer, J., Kaufman, J.C., and Baumeister, R.F. (eds), Are We Free? Psychology and Free Will (New York: Oxford University Press, 2008).

60 Pronin, E., Wegner, D.M., McCarthy, K., and Rodriguez, S., ‘Everyday Magical Powers: The Role of Apparent Mental Causation in the Overestimation of Personal Influence’, Journal of Personality and Social Psychology 91 (2006), 218–231.

61 Johansson, P., Hall, L., Sikström, S., Tärning, B., and Lind, A., ‘How Something Can Be Said about Telling More Than We Can Know: On Choice Blindness and Introspection’, Consciousness and Cognition 15 (2006), 673–692.

62 Pyszczynski, T., Greenberg, J., and Solomon, S., ‘A Dual-Process Model of Defense Against Conscious and Unconscious Death-Related Thoughts: An Extension of Terror Management Theory’, Psychological Review 106 (1999), 835–845.

63 Fischhoff, B. and Beyth, R., ‘“I Knew It Would Happen”: Remembered Probabilities of Once-Future Things’, Organizational Behavior and Human Performance 13 (1975), 1–16.

64 Roese, N.J. and Vohs, K.D., ‘Hindsight Bias’, Perspectives on Psychological Science 7 (2012), 411–426.

65 Yuan, J., Sun, G.-Z., and Siu, R., ‘The Lure of Illusory Luck: How Much Are People Willing to Pay for Random Shocks?’, Journal of Economic Behavior and Organization 106 (2014), 269–280.