
Letting rationalizations out of the box

Published online by Cambridge University Press:  15 April 2020

Philip Pärnamets
Affiliation:
Faculty of Philosophy, New York University, New York, NY 10003. philip.parnamets@nyu.edu; https://philipparnamets.github.io
Division of Psychology, Department of Clinical Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden
Petter Johansson
Affiliation:
Lund University Cognitive Science, Lund University, S-221 00 Lund, Sweden. petter.johansson@lucs.lu.se; https://www.lucs.lu.se/choice-blindness-group/
Lars Hall
Affiliation:
Lund University Cognitive Science, Lund University, S-221 00 Lund, Sweden. lars.hall@lucs.lu.se; https://www.lucs.lu.se/choice-blindness-group/

Abstract

We are very happy that someone has finally tried to make sense of rationalization. But we are worried about the representational structure assumed by Cushman, particularly the “boxology” belief-desire model depicting the rational planner, and it seems to us that he fails to accommodate many of the interpersonal aspects of representational exchange.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2020. Published by Cambridge University Press

In our work, we have studied rationalization using the choice blindness paradigm (Johansson et al. Reference Johansson, Hall, Sikström and Olsson2005). In a choice blindness experiment, participants make choices, the outcomes of which are surreptitiously manipulated to mismatch the participants’ original selections. Participants often fail to notice this, and instead give detailed and coherent rationalizations for choices they never made. Moreover, research has shown that having participants accept false feedback about their responses and then rationalize those responses can cause their attitudes to shift markedly in future choices (Johansson et al. Reference Johansson, Hall, Tärning, Sikström and Chater2014; Luo & Yu Reference Luo and Yu2017; Strandberg et al. Reference Strandberg, Sivén, Hall, Johansson and Pärnamets2018), which seems like just the kind of adaptive construction of attitudes that Cushman posits. However, when trying to interpret our experiments in the light of representational exchange, we find a lot of blur in and between the central boxes of the theory.

It is not clear to us how the proposed theory handles the observed patterns of responses in experiments like ours. For example, some participants, while accepting the false feedback, later repeat their original decision, thus seemingly ignoring the self-generated arguments they just gave for the alternative option. Is it then these participants who best use the implicit information from their original decision mechanisms (prefer X not Y)? In other cases, some people in our studies, while accepting the false outcome, struggle to rationalize and sincerely say things like “I don't know/I'm not sure/I have no clue why Y.” What should we make of these cases? If rationalization is rational, should we take these silent individuals, in a stunning reversal of previous canon, to be the irrational ones? In other words: When we attempt to take the theory seriously on its own terms, we seem to run into difficulties connecting it back to empirical results.

The blur between the modules is even more apparent in Hall et al. (Reference Hall, Johansson and Strandberg2012; Reference Hall, Strandberg, Pärnamets, Lind, Tärning and Johansson2013) and in Strandberg et al. (Reference Strandberg, Sivén, Hall, Johansson and Pärnamets2018), where we have studied choices and rationalizations in the moral and political domains. What stands out about these domains is the supposed involvement of the rational planning system. Clearly, people can harbor strong moral and political intuitions, which might be embellished and justified post hoc (as suggested by Haidt; Reference Haidt2001), but undeniably, our political and moral opinions are also the products of explicit rational argumentation, discourse, and thought (Cushman Reference Cushman2013; Rawls Reference Rawls1971). Thus, in these experiments we seemingly have the rational planning system rationalizing actions performed by itself.

But how could this work in the proposed model, where the planner supposedly is the hub of all information flow? Could there really be a form of meta-rationalization about the rational plans we make? Or do we sometimes produce a form of “implicit” rationalizations, which are non-transparent to all the systems involved? In any case, Cushman needs to be more explicit about how his theory avoids potential regresses and powerful homunculi piling on top of it, and about the localization of self and agency in all the potential layers of rationalization (Dennett Reference Dennett1991a). As far as we can see, the experimental examples described above do not fit any of the schemes for “hybrid” control described in the target article.

Broadening the perspective, it appears to us that Cushman's theory might underplay the complexity of the physical and social environment in which we act. Considering the old (but never aging) anti-representationalist slogan that “the world is its own best model” (Brooks Reference Brooks1991; Dreyfus Reference Dreyfus2007), it would appear that the results of our actions (telling us what we did) and the conditions motivating them (telling us why we did them) often will be evident directly in our immediate sensory environment, thus alleviating some of the need for internal information monitoring and exchange. Similarly, humans are embedded not only in a stable external world, but also in a social world. This world is imbued with a measure of stability by virtue of the various conventions (Lewis Reference Lewis1969/2008) and norms (Bicchieri Reference Bicchieri2005) that anchor our practices in a shared, interpersonal life-world (Wittgenstein Reference Wittgenstein1953/2009; Von Uexküll Reference Von Uexküll1934/2010). Our folk psychological practices are communal (Dennett Reference Dennett1991b; Wittgenstein Reference Wittgenstein1953/2009) and indeed adapted to reason giving and coordination between agents – talk that invariably involves telling others (and ourselves) about what we are doing and why.

To us, this suggests an interpersonal component currently missing in the proposed theory, and one that differs from other proposals also emphasizing important aspects of the social grounding of rationalization (Mercier & Sperber Reference Mercier and Sperber2011; Trivers Reference Trivers2000). Rationalization – and generally the casting of action into intentional language – might function by mediating representational exchange between agents. Agents construct meaning idiosyncratically as a function of their life histories (Freeman Reference Freeman1997), but need to share the hows and whys to form the communities and conventions that have allowed them to thrive. Perhaps some of the problems, sketched above, for the theory of representational exchange viewed as a flow of information only from certain decision systems to the rational planner, will dissolve if we also take into account the additional adaptive pressure of exchange between agents.

With this perspective, it becomes clear that we, and Cushman, need to consider the possibility of adaptive mismatches. If rationalization is an evolved adaptation, then it is an adaptation for a particular context and environment (EEA, or environment of evolutionary adaptedness; see Tooby & Cosmides Reference Tooby, Cosmides, Barkow, Cosmides and Tooby1992). But our modern environment of runaway information exchange and altered conditions for social interaction (group size, frequency, etc.) might differ considerably from the EEA. The power of the propositional attitudes of folk psychology (whether leveraged to understand ourselves or others) lies in compressing and abstracting information from the messy underlying systems (Dennett Reference Dennett and Greenwood1991c; Reference Dennett1996), but as a communal practice, it can also create overly hard-edged opinions, attitude bloat, and distinctions where none are needed, particularly in a social context of conversational demands. Thus, a rationalization system that once faithfully abstracted useful information from our own and others’ habitual, instinctual, as well as rational, actions now risks running in overdrive, with a real possibility that many day-to-day rationalizations are utter poppycock.

References

Bicchieri, C. (2005) The grammar of society: The nature and dynamics of social norms. Cambridge University Press.
Brooks, R. A. (1991) Intelligence without representation. Artificial Intelligence 47(1–3):139–59.
Cushman, F. (2013) Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review 17(3):273–92.
Dennett, D. C. (1991a) Consciousness explained. Penguin.
Dennett, D. C. (1991b) Real patterns. The Journal of Philosophy 88(1):27–51.
Dennett, D. C. (1991c) Two contrasts: Folk craft versus folk science, and belief versus opinion. In: The future of folk psychology: Intentionality and cognitive science, ed. Greenwood, J. D., pp. 135–48. Cambridge University Press.
Dennett, D. C. (1996) Kinds of minds: Toward an understanding of consciousness. Basic Books.
Dreyfus, H. L. (2007) Why Heideggerian AI failed and how fixing it would require making it more Heideggerian. Philosophical Psychology 20(2):247–68.
Freeman, W. J. (1997) Nonlinear neurodynamics of intentionality. Journal of Mind and Behavior 18(2/3):291–304.
Haidt, J. (2001) The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108(4):814–34.
Hall, L., Johansson, P. & Strandberg, T. (2012) Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PLOS ONE 7(9):e45457. Available at: https://doi.org/10.1371/journal.pone.0045457.
Hall, L., Strandberg, T., Pärnamets, P., Lind, A., Tärning, B. & Johansson, P. (2013) How the polls can be both spot on and dead wrong: Using choice blindness to shift political attitudes and voter intentions. PLOS ONE 8(4):e60554.
Johansson, P., Hall, L., Sikström, S. & Olsson, A. (2005) Failure to detect mismatches between intention and outcome in a simple decision task. Science 310(5745):116–19.
Johansson, P., Hall, L., Tärning, B., Sikström, S. & Chater, N. (2014) Choice blindness and preference change: You will like this paper better if you (believe you) chose to read it! Journal of Behavioral Decision Making 27(3):281–89.
Lewis, D. (1969/2008) Convention: A philosophical study. Wiley. (Original work published 1969)
Luo, J. & Yu, R. (2017) The spreading of alternatives: Is it the perceived choice or actual choice that changes our preference? Journal of Behavioral Decision Making 30(2):484–91.
Mercier, H. & Sperber, D. (2011) Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences 34(2):57–74.
Rawls, J. (1971) A theory of justice. Harvard University Press.
Strandberg, T., Sivén, D., Hall, L., Johansson, P. & Pärnamets, P. (2018) False beliefs and confabulation can lead to lasting changes in political attitudes. Journal of Experimental Psychology: General 147(9):1382–99.
Tooby, J. & Cosmides, L. (1992) The psychological foundations of culture. In: The adapted mind: Evolutionary psychology and the generation of culture, ed. Barkow, J., Cosmides, L. & Tooby, J., pp. 19–136. Oxford University Press.
Trivers, R. (2000) The elements of a scientific theory of self-deception. Annals of the New York Academy of Sciences 907(1):114–31.
Von Uexküll, J. (1934/2010) A foray into the worlds of animals and humans: With a theory of meaning. University of Minnesota Press. (Original work published 1934)
Wittgenstein, L. (1953/2009) Philosophical investigations. Wiley. (Original work published 1953)