In their target article, von Hippel & Trivers (VH&T) invoke a dual-process framework, arguing that self-deception is facilitated by dissociations between implicit and explicit memories, attitudes, and processes. However, they focus on work in learning and social psychology and say relatively little about dual-process theories of reasoning and judgment. Such theories are, however, highly relevant to VH&T's project, and in this commentary I add some further supporting considerations, drawing on work in this area.
There is now considerable evidence for the existence of two distinct but interacting types of processing in human reasoning and decision making: type 1, which is fast, effortless, automatic, unconscious, inflexible, and contextualized, and type 2, which is slow, effortful, controlled, conscious, flexible, and decontextualized (e.g., Evans 2007; Evans & Over 1996; Kahneman & Frederick 2002; Sloman 1996; Stanovich 1999; 2004; for surveys, see Evans 2008; Frankish & Evans 2009; Frankish 2010). Beyond this core agreement there is much debate concerning the further properties of the two processes, the relations between them, and whether they are associated with distinct neural systems (see, e.g., the papers in Evans & Frankish 2009). Here, however, I wish to focus on a specific proposal about the nature of type 2 processing.
The proposal is that type 2 processing is best thought of as an internalized, self-directed form of public argumentation, and that it is a voluntary activity – something we do rather than something that happens within us (Frankish 2004; 2009; see also Carruthers 2006; 2009a; Dennett 1991). It might involve, for example, constructing arguments in inner speech, using sensory imagery to test hypotheses and run thought experiments, or interrogating oneself in order to stimulate one's memory. On this view, type 2 thinking is motivated; we perform the activities involved because we desire to find a solution to some problem and believe that these activities may deliver one. (Typically, these metacognitive beliefs and desires will be unconscious, implicit ones.) I have argued elsewhere that this view provides an attractive explanation of why type 2 thinking possesses the distinctive features it does (Frankish 2009).
On this view, we also have some control over our conscious mental attitudes. If we can regulate our conscious thinking, then we can decide to treat a proposition as true for the purposes of reasoning and decision making, committing ourselves to taking it as a premise in the arguments we construct and to assuming its truth when we evaluate propositions and courses of action. Such premising commitments constitute a distinct mental attitude, usually called "acceptance" (Bratman 1992; Cohen 1992; Frankish 2004). When backed with high confidence, acceptance may be regarded as a form of belief (Frankish 2004), but it can also be purely pragmatic, as when a lawyer accepts a client's innocence for professional purposes. Such pragmatic acceptance will, however, be functionally similar to belief, and it will guide inference and action, at least in contexts where truth is not of paramount importance to the subject.
If this is right, then it points to further powerful avenues of self-deception, involving biased reasoning, judgment, and acceptance. Our reasoning activities and premising policies may be biased by our self-deceptive goals, in pursuit either of specific ends or of general self-enhancement. We may be motivated to display greater effort and inventiveness in finding arguments for conclusions and decisions we welcome and against ones we dislike. And we may accept or reject propositions as premises based on their attractiveness rather than their evidential support. Of course, if we accept a claim in full awareness that we are doing so for pragmatic reasons, then no self-deception is involved; our attitude is like that of the lawyer. Self-deception enters when we do not consciously admit our aims, and engage in biased reasoning and the other forms of self-deception described by VH&T (biased search, biased interpretation, etc.) in order to support the accepted claim.
I have previously set out this view of self-deception at more length (Frankish 2004, Ch. 8). However, I there assumed that the function of self-deceptive acceptance was primarily defensive. I referred to it as a "shielding strategy" designed to protect one from consciously facing up to an uncomfortable truth. But biased reasoning and acceptance could equally facilitate interpersonal deception, in line with VH&T's view. To accept a proposition as a premise is, in effect, to simulate conscious belief in it, both inwardly, in one's conscious reasoning and decision making, and outwardly, in one's behavior (so far as this is guided by one's conscious thinking). In doing this, one would display the signals of genuine belief to others whom one might wish to deceive. Moreover, these signals of belief might have an influence upon oneself as well, being taken by unconscious belief-forming processes as evidence for the truth of the accepted proposition (a sort of self-generated testimony) and thus fostering implicit belief in it. In this way, a deception that begins at the conscious level may later extend to the unconscious one, thereby eliminating any unconscious signals of deceptive intent.
Biased conscious thinking and acceptance are closely related to the information-processing biases discussed by VH&T, and they should be detectable by similar means. In particular, where they serve the goal of self-enhancement, they should be reduced when prior self-affirmation has taken place (provided, that is, that the deception has not taken root at the unconscious level, too). Experimental manipulations of cognitive load might also be employed to detect self-deceptive bias. There are complications here, however. For although biased conscious thinking will be effortful and demanding of working memory, it will not necessarily be more demanding than the unbiased sort, which, on the view we are considering, is also an effortful, intentional activity. However, when self-deception involves specific deviations from established reasoning strategies and premising policies, it will require additional self-regulatory effort, and in these cases, manipulations of cognitive load should affect it. “Talk-aloud” and “think-aloud” protocols, in which subjects are asked to verbalize and explain their thought processes, should also be useful in helping to identify self-deceptive biases in conscious thinking.