
A related proposal: An interactionist perspective on reason

Published online by Cambridge University Press:  27 March 2018

Hugo Mercier*
Affiliation:
Institut des Sciences Cognitives – Marc Jeannerod, CNRS UMR 5307, 69675 Bron, France. hugo.mercier@gmail.com https://sites.google.com/site/hugomercier/

Abstract

This comment introduces the interactionist perspective on reason that Dan Sperber and I developed. In this perspective, reason is a specific cognitive mechanism that evolved so that humans can exchange justifications and arguments with each other. The interactionist perspective aligns significantly with Doris's views in rejecting reflectivism and individualism. Indeed, I suggest that it offers different, and maybe stronger, arguments to reject these views.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2018 

Doris's admirable book brings to bear on the question of moral agency insights from many disciplines. In particular, Doris relies on various psychological findings to question standard, individualistic theories of moral agency, and offers an alternative in which moral agency is partly the result of social interactions. In this comment I would like to draw attention to a recently developed theory of reason (Mercier & Sperber 2017) and suggest that it not only significantly converges with Doris's conclusions, but also pushes them further in several directions.

Dan Sperber and I have developed a theory of human reason, offering a new understanding of what reason is and of its evolutionary functions. In this theory, reason is a cognitive mechanism – a module, or set of modules – that is dedicated to the evaluation and production of reasons. Although it is tempting to equate reason so understood with the System 2 of dual process theories, there are in fact significant differences.

We suggest reason is “just” another inferential mechanism, one that does not supersede all of the other mechanisms, and one that shares most of their properties (by contrast with the quasi-homunculus that System 2 tends to turn into). As with other inferential mechanisms, finding and evaluating reasons is, in most cases, quasi-effortless and automatic (think of how hard it would be to avoid understanding a reason as a reason when confronted with one, or to avoid thinking of reasons when your views are challenged), and largely intuitive. This last point is especially important: Reason delivers intuitions about the quality of reasons. When we look for reasons, or encounter reasons offered by others, we typically have an immediate intuition regarding their quality (i.e., how much support they provide for whatever representation they are offered in support of). Like other intuitions, these intuitions are opaque to us. In some cases we can provide reasons for our intuitions about reasons, but the chain must stop pretty quickly. Developing further reasons to support even the most seemingly mundane reasons has provided philosophers with job opportunities for centuries. For instance, most people might give as a reason for believing that Everest is the tallest mountain on Earth that they have read it in an authoritative source. But why is that a good reason? Because this authoritative source is usually right . . . but why is that a good reason? And so forth. To put it differently, even the most explicit reasons rely on implicit premises that cannot all be made explicit.

In this framework, reason is just one cognitive mechanism among a great many others, and it is these other mechanisms that are responsible for the vast majority of our actions and beliefs. It would be impossible for reason to offer an accurate account of the functioning of these mechanisms – after all, we are talking about the human mind, the most complex computational mechanism ever evolved (that we know of). Instead, reason can at best focus on some of the factors that might justify our actions or beliefs. It could not plausibly give an exhaustive account of these factors.

This suggests that Doris's criticism of reflectivism is too generous. Reflectivism, in a strong form at least, is psychologically implausible for two reasons. The first is that our minds are simply too complex for one very small part of them (reason) to be able to understand the whole of the rest. The second is that reasons are always partly implicit, so that reflectivism is necessarily partial (we might be able to provide some reasons for our actions, but not to justify why these are good reasons, etc.).

With this in mind, the cases of incongruent parallel processing brought up by Doris as arguments against reflectivism seem less striking. If we accept that the production of reasons as justifications is necessarily deeply imperfect, then the reasons provided by people in cases of incongruent parallel processing are not much worse than many reasons that might seem, a priori, fine. Consider two voters. One voted for Candidate Creepy even though his name was second on the ballot, but had only a weak preference for this candidate. The second voter also chose Candidate Creepy, but would have voted for Candidate Normal had she been first on the ballot. Both give the same reasons to justify their choice. It might be that the sets of psychological factors that led to these two choices are quasi-identical. Our first voter might simply have had a very slightly stronger preference for Candidate Creepy. In neither case are the positions on the ballot the most relevant factors. Even for the second voter, the relevant factors would be those that led her to have no strong preference between the two candidates (otherwise she would have picked whatever candidate was strongly favored, irrespective of the order on the ballot). As a result, when both voters present the same reasons to support their choice, the first is barely more accurate than the second. Singling out cases of incongruent parallel processing might be persuasive because the examples are striking, but it skirts the largest problem looming over reflectivism.

The theory Dan Sperber and I developed thus bolsters Doris's attack on reflectivism. It also considerably strengthens his case regarding the importance of interaction for establishing moral agency. Our theory suggests that human reason would have evolved chiefly to serve two related functions, which are both social. The first (and most relevant here) would be to justify our actions, and evaluate others' justifications, so that we can evaluate one another more accurately. The second would be to offer arguments for our beliefs, and to evaluate others' arguments, so that we can communicate more efficiently.

The social functions of reasons might help explain why reflectivism is an intuitively appealing theory of moral agency. The justifications people offer suffer from the flaws described above: They are partial, and they necessarily contain implicit premises. But that does not stop them from being helpful justifications in a social setting. Following Doris's example, imagine that our voters offered as a justification for voting for Candidate Creepy that they think he can restore America's lost greatness. This is socially helpful in several ways. First, as Doris reminds us, the voters suggest that they are rational agents, who should thus be sensitive to reason. Second, it provides some insight into the actual factors that caused their actions. Even for the voter who was influenced by the name order, the reason provided might have been one of the actual factors that led her to support Candidate Creepy. Third, it binds the voters to some extent, so that they would pay a social cost if they started deriding another candidate for wanting to restore America's lost greatness.

If reasons can play this role, it is because people intuitively understand that “restoring America's lost greatness” can be a reason for supporting the candidate whose platform that is. They might disagree, but they understand – everybody has the trivial intuition that making something we like good again is a good thing. That's why people do not realize that part of the reason is implicit: They intuitively fill in the blanks. In turn, this is what makes reflectivism intuitively plausible.

While our account might help explain why reflectivism is intuitively plausible, it also resolutely makes reason a social mechanism. Reasons are for social consumption. One reason our account might not converge fully with Doris's collaborativism is that it also considers cases in which the relation between the parties is more adversarial. Although we do not expect the exchange of reasons to be productive when the parties have no common incentives whatsoever (as in a poker game, say), reasons are most helpful when full collaboration cannot be expected either. It is when people interact with others they do not fully trust that they can most benefit from the exchange of reasons. If agents trusted each other fully, they would be charitable in understanding each other, and would have less need to justify their actions. By contrast, in most cases our default when interpreting others' behavior is to be uncharitable (Malle 2006), and justifications can help revise initially uncharitable estimates. This is one of the reasons why we dubbed our account of reasoning “interactionist” rather than collaborativist (even if we agree that the exercise of reason has to be mostly cooperative to be evolutionarily plausible).

As I hope this brief comment makes clear, Doris's account and ours converge in the rejection of reflectivism and individualism, and can likely strengthen each other.

References

Malle, B. F. (2006) The actor-observer asymmetry in attribution: A (surprising) meta-analysis. Psychological Bulletin 132(6):895–919.
Mercier, H. & Sperber, D. (2017) The enigma of reason. Harvard University Press.