
Overlooking metacognitive experience

Published online by Cambridge University Press:  23 April 2009

Joëlle Proust
Affiliation:
Department of Cognitive Studies, Ecole Normale Supérieure, and Institut Jean-Nicod, EHESS and ENS, 75005 Paris, France. jproust@ehess.fr; http://joelleproust.hautetfort.com

Abstract

Peter Carruthers correctly claims that metacognition in humans may involve self-directed interpretations (i.e., may use the conceptual interpretative resources of mindreading). He fails to show, however, that metacognition cannot rely exclusively on subjective experience. Focusing on self-directed mindreading can only bypass evolutionary considerations and obscure important functional differences.

Type: Open Peer Commentary
Copyright © Cambridge University Press 2009

Carruthers' main goal is to show that metacognition is a form of self-directed interpretation, akin to other-directed mindreading. Introspection, he claims, defined as “any reliable method for forming beliefs about one's own mental states that is not self-interpretative and that differs in kind from the ways in which we form beliefs about the mental states of other people” (sect. 1.4, para. 3, emphasis in the original), is not needed to have access to one's own mental attitudes. One can agree with the author that metacognition in humans may involve self-directed interpretations (i.e., may use the conceptual interpretative resources of mindreading), without accepting the stronger claim that metacognition can never be based on “introspection.”

In cognitive science, "metacognition" refers to the capacity through which a subject can evaluate the feasibility or completion of a given mental goal (such as learning a maze, or discriminating a signal) in a given episode (Koriat et al. 2006). In Carruthers' use, however, metacognition refers to first-person metarepresentation of one's own mental states; as a result, the theoretical possibility that metacognition might operate in a different representational format cannot be raised (Proust, in press b). Revising the meaning of a functional term such as "metacognition" is a bold strategy. It generally seems more appropriate to leave it an open empirical matter whether a capacity of type A (reading one's own mind) or type B (evaluating one's cognitive dispositions) is engaged in a particular task. A revision is deemed necessary, according to Carruthers, because "B" capacities in fact always involve self-directed mindreading; apparent contrary cases (self-evaluation in non-mindreading animals) therefore either (1) are simply instances of first-order learning, and/or (2) are capacities "too weak to be of any interest" (Carruthers 2008b, p. 62; cf. target article, sects. 5.1 and 9).

Two methodological problems, however, hamper the discussion so conceived. First, it is quite plausible that, in human forms of metacognition (as instantiated in speech production, metamemory, etc.), judgments of self-attribution redescribe elements of metacognitive experience. Metacognitive feelings might, on this view, represent subjective uncertainty and guide noetic decision-making, without needing to involve a conceptual interpretative process. What needs to be discussed, in order to establish the superiority of model 4, is whether or not subjects can rely on dedicated feelings alone to monitor their ongoing cognitive activity.

A second, related problem is that Carruthers' discussion conflates two domains of self-control, namely, the control of one's physical actions through perceptual feedback and the control of one's mental actions through metacognitive feedback (see sects. 5.1 and 9). Meta-action, however, is only functionally similar to metacognition when a metarepresentational reading is imposed on both, in spite of their different evolutionary profiles (Metcalfe & Greene 2007; Proust, in press a). If extracting, from a given task context, an evaluation of the mental resources available to complete the task were just another case of first-order action control, then one might agree that B-metacognition is nothing other than executive function. But metacognitive and executive abilities can be dissociated in schizophrenia (Koren et al. 2006). Mental action control is thus distinct both from executive memory as usually understood and from physical action control.

These methodological problems strongly bias the discussion against models 1 and 3. Here are three examples.

  1. Our metacognitive interventions don't require introspection; they have no direct impact on cognitive processing (sect. 5.1).

    From a B-sense viewpoint, prediction and evaluation of one's mental states and events presuppose appreciating one's subjective uncertainty regarding the correctness, adequacy, and so on, of first-order decisions or judgments; this evaluation does not require that the target states be represented qua mental. For example, a child chooses to perform one memorization task rather than another by relying not on what she knows about memory, as the author claims, but on the feeling she has that one task is easier than the other (Koriat 2000; Proust 2007). The impact on decision is quite direct, and it is independent of mindreading.

  2. A combination of first-order attitudes is sufficient to explain how animals select the uncertainty key in Smith et al.'s metaperceptual paradigm (sect. 5.2).

    If this is correct, how can monkeys rationally decide to opt out when no reinforcement of the uncertainty key is offered and when, in addition, novel test stimuli are used? Why should the degree of belief associated with first-order items transfer to novel tasks in which these items are no longer included? A second rule must apply, as Carruthers (2008b) himself admits: having conflicting impulses to act or not to act on a given stimulus, the subject becomes uncertain of its ability, say, to categorize. So the decision to act depends, after all, on subjective – not objective – features. Can these subjective features influence behavior only through being metarepresented? This is the crucial question that fails to be raised.

  3. "Evidence suggests that if mindreading is damaged, then so too will be metacognition" (sect. 10, para. 10).

    Clinical research on autism and on schizophrenia suggests rather a dissociation of metacognitive and mindreading skills, as predicted by model 1 (cf. Bacon et al. 2007; Farrant et al. 1999; Koren et al. 2006). The relevance of this evidence for the present discussion, however, is downplayed as a smart behaviorist effect; introspection in patients with autism is rejected because it is not "metacognitive in the right sort of way" (sect. 10, para. 5). Negative results in meta-action from patients with autism are presented as evidence for impaired metacognition. Such appraisals implicitly appeal to the preferred metarepresentational interpretation of metacognition that is under discussion. Similarly, rejecting the relevance of metacognitive capacities that are "too weak to be of any interest" presupposes recognizing the superiority of model 4 over models 1 and 3.

A fair examination of the contribution of "introspection" to metacognition in models 1 and 3 would require studying the respective roles of control and monitoring in nonhuman and human epistemic decisions, in experimental tasks engaging metaperception, metamemory, and planning (Koriat et al. 2006; Proust, in press a). Focusing on self-directed mindreading can only bypass evolutionary considerations and obscure important functional differences.

ACKNOWLEDGMENTS

I am grateful to Dick Carter, Chris Viger, and Anna Loussouarn for their remarks and linguistic help. The preparation of this commentary was supported by the European Science Foundation EUROCORES Programme CNCC, with funds from CNRS and the EC Sixth Framework Programme under Contract no. ERAS-CT-2003–980409.

References

Bacon, E., Izaute, M. & Danion, J. M. (2007) Preserved memory monitoring but impaired memory control during episodic encoding in patients with schizophrenia. Journal of the International Neuropsychological Society 13:219–27.
Carruthers, P. (2008b) Metacognition in animals: A skeptical look. Mind and Language 23(1):58–89.
Farrant, A., Boucher, J. & Blades, M. (1999) Metamemory in children with autism. Child Development 70:107–31.
Koren, D., Seidman, L. J., Goldsmith, M. & Harvey, P. D. (2006) Real-world cognitive – and metacognitive – dysfunction in schizophrenia: A new approach for measuring (and remediating) more "right stuff." Schizophrenia Bulletin 32(2):310–26.
Koriat, A. (2000) The feeling of knowing: Some metatheoretical implications for consciousness and control. Consciousness and Cognition 9:149–71.
Koriat, A., Ma'ayan, H. & Nussinson, R. (2006) The intricate relationships between monitoring and control in metacognition: Lessons for the cause-and-effect relation between subjective experience and behavior. Journal of Experimental Psychology: General 135(1):36–69.
Metcalfe, J. & Greene, M. J. (2007) Metacognition of agency. Journal of Experimental Psychology: General 136(2):184–99.
Proust, J. (2007) Metacognition and metarepresentation: Is a self-directed theory of mind a precondition for metacognition? Synthese 159(2):271–95.
Proust, J. (in press a) Epistemic agency and metacognition: An externalist view. Proceedings of the Aristotelian Society 108(Pt 3):241–68.
Proust, J. (in press b) The representational basis of brute metacognition: A proposal. In: Philosophy of animal minds: New essays on animal thought and consciousness, ed. R. Lurz. Cambridge University Press.