
Two problems with “self-deception”: No “self” and no “deception”

Published online by Cambridge University Press: 03 February 2011

Robert Kurzban
Affiliation:
Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104. kurzban@psych.upenn.edu; http://www.psych.upenn.edu/~kurzban/

Abstract

While the idea that being wrong can be strategically advantageous in the context of social strategy is sound, the idea that there is a “self” to be deceived might not be. The modular view of the mind finesses this difficulty and is useful – perhaps necessary – for discussing the phenomena currently grouped under the term “self-deception.”

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2011

I agree with a key argument in the target article, that the phenomena discussed under the rubric of “self-deception” are best understood as strategic (Kurzban, in press; Kurzban & Aktipis 2006; 2007). For a social species like humans, representations can play roles not just in guiding behavior, but also in manipulating others (Dawkins & Krebs 1978). If, for example, incorrect representations in my head (about, e.g., my own traits) will contribute to generating representations in your head that I am a valuable social partner, then selection can act to bring about mechanisms that generate such incorrect representations, even if these representations are not the best estimate of what is true (Churchland 1987).

This is an important idea because generating true representations has frequently been viewed as the key – indeed the only – job of cognition (Fodor 2000; Pears 1985). True beliefs are obviously useful for guiding adaptive behavior, so the claim that evolved computational mechanisms are designed to be anything other than as accurate as possible requires a powerful argument (McKay & Dennett 2009). Indeed, in the context of mechanisms designed around individual decision-making problems in which nature alone determines one's payoff, mechanisms designed to maximize expected value should be expected because the relentless calculus of decision theory punishes any other design (Kurzban & Christner, in press). However, when manipulation is possible and a false belief can influence others, these social benefits can offset the costs, if any, of false beliefs.
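To make this offsetting logic concrete, here is a minimal sketch in Python. The payoff numbers and function names are invented purely for illustration; they are not drawn from the target article or from any fitted model.

    # Toy expected-value comparison: when does a false belief pay?
    # All payoff numbers are hypothetical illustrations.

    def expected_payoff(belief_accurate: bool,
                        behavioral_cost_of_error: float,
                        social_benefit_of_error: float) -> float:
        """Payoff = baseline, minus any cost of acting on an error,
        plus any social benefit from others consuming the error."""
        baseline = 1.0
        if belief_accurate:
            return baseline
        return baseline - behavioral_cost_of_error + social_benefit_of_error

    # Nature-only decision problem: errors are punished, accuracy wins.
    print(expected_payoff(True, 0.0, 0.0))      # 1.0
    print(expected_payoff(False, 0.25, 0.0))    # 0.75

    # Social context: if the persuasive benefit of a favorable error
    # exceeds its behavioral cost, selection can favor the error.
    print(expected_payoff(False, 0.25, 0.5))    # 1.25

The sketch illustrates only the bookkeeping: once a representation is consumed by others, accuracy is no longer the only term in the payoff.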

Despite my broad agreement with these arguments, I have deep worries about the implicit ontological commitments lurking behind constructions that animate the discussion in the target article, such as “deceiving the self,” “convincing the self,” or “telling the self.” Because I, among others, do not think there is a plausible referent for “the self” used in this way (Dennett 1981; Humphrey & Dennett 1998; Kurzban, in press; Kurzban & Aktipis 2007; Rorty 1985), my concern is that referring to the self is at best mistaken and at worst reifies a Cartesian dualist ontology. That is, when “the self” is being convinced, what, precisely, is doing the convincing, and what, precisely, is being convinced? Talk about whatever it is that is being deceived (or “controlled,” for that matter; Wegner 2005) comes perilously close to dualism, with a homuncular “self” being the thing that is deceived (Kurzban, in press).

So, the first task for self-deception researchers is to purge discussions of the “self” and discuss these issues without using this term. Modularity, the idea that the mind consists of a large number of functionally specialized mechanisms (Tooby & Cosmides 1992) that can be isolated from one another (Barrett 2005; Fodor 1983), does exactly this and grants indispensable clarity. For this reason, modularity ought to play a prominent role in any discussion of the phenomena grouped under the rubric of self-deception. Modularity allows a much more coherent way to talk about self-deception and positive illusions that finesses the ontological difficulty.

Consider the modular construal of two different types of self-deception. In the context of so-called “positive illusions” (Taylor 1989), suppose that representations contained in certain modules – but not others – “leak” into the social world. For such modules, the benefits of being correct – that is, having the most accurate possible representation of what is true in these modules – must be balanced against the benefits of persuasion (sect. 9). If representations that contain information about one's traits and likely future will be consumed by others, then errors in the favorable direction might be advantageous, offsetting the costs of error. For this reason, such representations are best understood not as illusions but as cases in which some very specific subset of modules that have important effects on the social world are designed to be strategically wrong – that is, they generate representations that are not the best estimate of what is true, but of what is valuable in the context of social games, especially persuasion.
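To see what “strategically wrong” could mean quantitatively, here is a toy trade-off in the same spirit: an error in a socially consumed representation carries a behavioral cost that grows with its size and a persuasion benefit that saturates. Both functions and every constant below are invented for illustration; nothing here comes from the target article.

    # Toy trade-off: how large should the favorable error be?
    # The cost and benefit functions are invented for illustration.
    import math

    def accuracy_cost(bias: float) -> float:
        # Cost of guiding behavior with an error grows with its size.
        return bias ** 2

    def persuasion_benefit(bias: float) -> float:
        # Benefit of a favorable error saturates: mild inflation
        # persuades; wild inflation gets discounted by the audience.
        return 0.8 * math.tanh(2.0 * bias)

    def net_value(bias: float) -> float:
        return persuasion_benefit(bias) - accuracy_cost(bias)

    # Grid search over candidate biases: the optimum is positive,
    # not zero - some error is worth its behavioral cost.
    best = max((b / 100 for b in range(101)), key=net_value)
    print(f"optimal bias ~ {best:.2f}")  # ~ 0.42 under these assumptions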

Next, consider cases in which two mutually inconsistent representations coexist within the same head. On the modular view, the presence of mutually inconsistent representations presents no difficulties as a result of informational encapsulation (Barrett & Kurzban 2006). If one modular system guides action, then the most accurate representations possible should be expected to be retained in such systems. If another modular system interacts with the social world, then representations that will be advantageous if consumed by others should be stored there. These representations might, of course, be about the very same thing but differ in their content. As Pinker (1997) put it, “the truth is useful, so it should be registered somewhere in the mind, walled off from the parts that interact with other people” (p. 421). One part of the mind is not “deceiving” another part; these modular systems are simply operating with a certain degree of autonomy.
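A minimal data-structure sketch may make the encapsulation point vivid. The Module class, the module names, and the trait numbers below are all invented; the only claim illustrated is that two private stores can hold inconsistent representations of the same thing without any contradiction being computed anywhere.

    # Minimal sketch of informational encapsulation; names and
    # numbers are invented for illustration.

    class Module:
        def __init__(self, name: str):
            self.name = name
            self._store = {}  # maps key -> value; private to this module

        def represent(self, key: str, value: float) -> None:
            self._store[key] = value

        def lookup(self, key: str) -> float:
            return self._store[key]

    # An action-guiding system keeps the best available estimate...
    action_system = Module("action-guidance")
    action_system.represent("my_driving_skill", 0.5)    # honest percentile

    # ...while a socially "leaky" system stores the favorable version.
    press_secretary = Module("social-broadcast")
    press_secretary.represent("my_driving_skill", 0.9)  # for consumption

    # Same key, same head, inconsistent contents: no lookup ever
    # consults both stores, so no contradiction is ever computed.
    assert (action_system.lookup("my_driving_skill")
            != press_secretary.lookup("my_driving_skill"))

Nothing in the sketch “deceives” anything else; the two stores are simply never consulted together.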

The modular view also makes sense of another difficulty that natural language introduces into discussions of self-deception: the folk concept of “belief” (e.g., Stich 1983). If it is true that two modular systems might have representations about the very same thing, and that these two representations might be inconsistent, then it makes no sense to talk about what an agent “really,” “genuinely,” or “sincerely” believes. Instead, the predicate “believe” attaches to modular systems rather than to people or other agents (Kurzban, in press). This has the added advantage of allowing us to do away with metaphorical terms such as the “level” on which something is believed (sect. 7), substituting a discussion of which representations are present in different modules. Again, this undermines the folk understanding of what it means to “believe” something, but taking belief predicates away from agents as a whole is required on the modular view and clarifies that beliefs apply to modules – parts of people's minds – rather than to the person as a whole.
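On this way of talking, a belief attribution takes a module, not an agent, as its subject. A standalone toy, with module names and contents again invented:

    # Toy: "believes" as a predicate on modules rather than agents.
    modules = {
        "action-guidance": {"my_driving_skill": 0.5},   # accurate estimate
        "social-broadcast": {"my_driving_skill": 0.9},  # favorable version
    }

    def believes(module: str, key: str, value: float,
                 tol: float = 0.05) -> bool:
        # The belief is attributed to one module's store, not to the agent.
        return abs(modules[module][key] - value) < tol

    print(believes("action-guidance", "my_driving_skill", 0.5))   # True
    print(believes("social-broadcast", "my_driving_skill", 0.5))  # False
    # A believes(agent, ...) query has no single well-defined answer
    # here; only module-level queries do.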

Generally, trying to understand self-deception with the conceptual tool of evolved function is an advance. Trying to understand self-deception without the conceptual tool of modularity is needlessly limiting.

References

Barrett, H. C. (2005) Enzymatic computation and cognitive modularity. Mind and Language 20:259–87.
Barrett, H. C. & Kurzban, R. (2006) Modularity in cognition: Framing the debate. Psychological Review 113:628–47.
Churchland, P. (1987) Epistemology in the age of neuroscience. Journal of Philosophy 84:544–53.
Dawkins, R. & Krebs, J. (1978) Animal signals: Information or manipulation? In: Behavioural ecology: An evolutionary approach, ed. Krebs, J. & Davies, N., pp. 282–309. Blackwell Scientific.
Dennett, D. (1981) Brainstorms: Philosophical essays on mind and psychology. MIT Press.
Fodor, J. (1983) The modularity of mind. MIT Press.
Fodor, J. (2000) The mind doesn't work that way. Bradford Books/MIT.
Humphrey, N. & Dennett, D. C. (1998) Speaking for our selves. In: Brainchildren: Essays on designing minds, ed. Dennett, D. C., pp. 31–58. Penguin Books.
Kurzban, R. (in press) Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton University Press.
Kurzban, R. & Aktipis, C. A. (2006) Modular minds, multiple motives. In: Evolution and social psychology, ed. Schaller, M., Simpson, J. & Kenrick, D., pp. 39–53. Psychology Press.
Kurzban, R. & Aktipis, C. A. (2007) Modularity and the social mind: Are psychologists too self-ish? Personality and Social Psychology Review 11:131–49.
Kurzban, R. & Christner, J. (in press) Are supernatural beliefs commitment devices for intergroup conflict? In: The psychology of social conflict and aggression (The Sydney Symposium of Social Psychology, vol. 13), ed. Forgas, J. P., Kruglanski, A. & Williams, K. D.
McKay, R. T. & Dennett, D. C. (2009) The evolution of misbelief. Behavioral and Brain Sciences 32(6):493–561.
Pears, D. (1985) The goals and strategies of self-deception. In: The multiple self, ed. Elster, J., pp. 59–77. Cambridge University Press.
Pinker, S. (1997) How the mind works. Norton.
Rorty, A. O. (1985) Self-deception, akrasia and irrationality. In: The multiple self, ed. Elster, J., pp. 115–32. Cambridge University Press.
Stich, S. (1983) From folk psychology to cognitive science: The case against belief. Bradford.
Taylor, S. E. (1989) Positive illusions: Creative self-deception and the healthy mind. Basic Books.
Tooby, J. & Cosmides, L. (1992) The psychological foundations of culture. In: The adapted mind: Evolutionary psychology and the generation of culture, ed. Barkow, J. H., Cosmides, L. & Tooby, J., pp. 19–136. Oxford University Press.
Wegner, D. M. (2005) Who is the controller of controlled processes? In: The new unconscious, ed. Hassin, R., Uleman, J. S. & Bargh, J. A., pp. 19–36. Oxford University Press.