
Epistemic normativity from the reasoner's viewpoint

Published online by Cambridge University Press:  14 October 2011

Joëlle Proust
Affiliation:
Institut Jean-Nicod (EHESS-ENS), UMR CNRS 8129, Ecole Normale Supérieure, F-75005 Paris, France. jproust@ehess.fr http://joelleproust.hautetfort.com/

Abstract

Elqayam & Evans (E&E) are focused on the normative judgments used by theorists to characterize subjects' performances (e.g. in terms of logic or probability theory). They ignore the fact, however, that subjects themselves have an independent ability to evaluate their own reasoning performance, and that this ability plays a major role in controlling their first-order reasoning tasks.

Type: Open Peer Commentary
Copyright © Cambridge University Press 2011

Although theorists may not believe that there is a single right or wrong answer to a problem, reasoners often do. The target article seems to infer, from the indeterminacy of which normative system a participant is using, that epistemic norms are irrelevant to reasoning. On the other hand, it recognizes the need to accept norms in applied science. It is unclear what allows theoretical and practical claims (“pure” and “applied” science) to diverge in their most basic concepts.

Variety in norms can be understood in two ways: either in terms of several normative systems “fitting the bill” (target article, sect. 3), which is a problem for reasoning theorists; or in terms of which norm is most appropriate to a given epistemic task, which is a problem for individual reasoners. Note, however, that these two uses of “norm” are not clearly distinguished in the target article. When Elqayam & Evans (E&E) call a system that “fits the bill” “normative,” they mean that this system is appropriate, or optimal, for solving a given task. “Norm” can also be used, however, to refer to success or error within a given “normative system” – there are correct and incorrect ways of using a norm. The latter distinction can be clarified through the concept of a “constitutive rule,” that is, a rule that makes a particular cognitive task the task it is. Remembering accurately, remembering exhaustively, and checking whether a conclusion derives from a set of premises, each have different constitutive rules: Their outputs count as appropriate if they are, respectively, cases of accurate recall, exhaustive recall, and coherence tracking (Proust, in press).

This distinction between two uses of “norm” has consequences for the issue of normative conflict. By this term, E&E refer to the existence of several alternative ways of interpreting what a reasoner does in a given task. Such cases, however, generate other types of norm conflict that a theorist should not ignore. For example, in “belief bias” tasks, participants need to be sensitive to norms of deductive coherence rather than to other norms such as fluency or relevance of epistemic content (Evans et al. 1983). The epistemic norm of interest, in this case, does not consist in a set of optimal procedures for solving a problem (a “normative system”), but rather in the informational constraints inherent to the cognitive goal embedded in the task: syllogistic closure rather than believability. Conflict may occur when several cognitive goals compete for saliency (for a task, for a participant). Participants need to draw on their prior experience to build a representation of the task (i.e., of its structure and cognitive goal), which may not be stable, and may not coincide with the experimenter's.

The latter kind of norm conflict can be studied on the basis of participants' self-evaluation in a cognitive task covertly involving norm competition (conflict monitoring is one of the main functions of metacognition; Botvinick et al. 2001). Such a study belongs to the metacognition of reasoning rather than to reasoning per se, but the case of metadeduction suggests that control and monitoring processes play a considerable role in how a first-order task is processed (Reverberi et al. 2009). Note that sensitivity to a given norm need not be based on a conceptual type of understanding; familiarity with the task brings with it implicit access to the epistemic norm that constitutes it as the cognitive task it is (Proust, in press). A metacognitive study of reasoning, however, does not seem to be threatened by an is-ought fallacy, because the norm of interest is expressed in participants' spontaneous self-evaluations and subsequent revisions. Metacognitive reasoners are motivated to correct what they see as a mistake; when they predict that a task is beyond their competence, they decline it (or wager against it in retrodictive evaluation) if they are allowed to (Koriat & Goldsmith 1996; Smith et al. 2003).

Should a naturalist reject the participants' normative sense of error as fallacious? On the present construal of error as a violation of a constitutive rule, no appeal to a priori or irreducibly normative facts needs to be made. Natural regularities, such as feedback and regulation laws, can account for a subject's sense of error (Proust 2009). On this view, violating constitutive rules cannot be a matter of individual preference, as some naturalists have claimed (Dretske 2000; Papineau 1999). From the observation that many different cognitive goals can be entertained, it is tempting to conclude that the epistemic norms can be chosen too: forming false beliefs might be a matter of preferences. Instrumental reasons to control one's cognition (e.g., “I need to remember her name”) must, however, be distinguished from the normative requirements associated with the chosen type of control (remembering a name is adequate if it is correct). This reflects a contrast, as shown by Broome (1999), between a reason to act and a normative requirement. The first is an “ought” so far as it goes: you may or may not be right in thinking you need to remember this name. A normative requirement, in contrast, is “strict, but relative” (relative to your attempt to remember this name, you are strictly required to find the correct answer).

If these observations are correct, there are alternative normative systems when there are various instrumental ways of solving a problem. Such inter- and intra-individual differences in strategies can be studied using brain imagery (Goel & Dolan 2003; Houdé & Tzourio-Mazoyer 2003; Osherson et al. 1998). On the other hand, participants' attempts to solve the problem within a system (logic, probability theory, etc.) involve strict constitutive requirements to which subjects need to be sensitive – and do become sensitive over time. A naturalistic explanation of epistemic norms can then be offered, on the basis of how reasoners monitor and control their own epistemic outputs. Eliminating norms from reasoning would amount to throwing out the baby with the bathwater: The reasoners and their motivation to obtain a correct answer should be of theoretical, and not just practical, interest.

References

Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S. & Cohen, J. D. (2001) Conflict monitoring and cognitive control. Psychological Review 108(3):624–52.
Broome, J. (1999) Normative requirements. Ratio 12:398–419.
Dretske, F. (2000) Norms, history, and the constitution of the mental. In: Perception, knowledge and belief, pp. 242–58. Cambridge University Press.
Evans, J. St. B. T., Barston, J. L. & Pollard, P. (1983) On the conflict between logic and belief in syllogistic reasoning. Memory and Cognition 11:295–306.
Goel, V. & Dolan, R. J. (2003) Explaining modulation of reasoning by belief. Cognition 87:B11–B22.
Houdé, O. & Tzourio-Mazoyer, N. (2003) Neural foundations of logical and mathematical cognition. Nature Reviews Neuroscience 4:507–14.
Koriat, A. & Goldsmith, M. (1996) Monitoring and control processes in the strategic regulation of memory accuracy. Psychological Review 103(3):490–517.
Osherson, D., Perani, D., Cappa, S., Schnur, T., Grassi, F. & Fazio, F. (1998) Distinct brain loci in deductive versus probabilistic reasoning. Neuropsychologia 36(4):369–76.
Papineau, D. (1999) Normativity and judgment. Proceedings of the Aristotelian Society, Supplementary Volumes 73:17–43.
Proust, J. (2009) Adaptive control loops as an intermediate mind-brain reduction basis. In: Reduction and elimination in philosophy of mind and philosophy of neuroscience, ed. Leitgeb, H. & Hieke, A., pp. 191–219. Ontos.
Proust, J. (in press) Mental acts as natural kinds. In: Decomposing the Will, ed. Vierkant, T., Clark, A. & Kiverstein, J. Oxford University Press.
Reverberi, C., Shallice, T., D'Agostinie, S., Skrape, M. & Bonatti, L. L. (2009) Cortical bases of elementary deductive reasoning: Inference, memory, and metadeduction. Neuropsychologia 47:1107–16.
Smith, J. D., Shields, W. E. & Washburn, D. A. (2003) The comparative psychology of uncertainty monitoring and metacognition. Behavioral and Brain Sciences 26(3):317–73.