
Understanding, evaluating, and producing arguments: Training is necessary for reasoning skills

Published online by Cambridge University Press:  29 March 2011

Maralee Harrell
Affiliation:
Department of Philosophy, Carnegie Mellon University, Pittsburgh, PA 15213. mharrell@cmu.edu http://www.hss.cmu.edu/philosophy/faculty-harrell.php

Abstract

This commentary suggests that the general population has much less reasoning skill than is claimed by Mercier & Sperber (M&S). In particular, many studies suggest that the skills of understanding, evaluating, and producing arguments are generally poor in the population of people who have not had specific training.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2011

The target article by Mercier & Sperber (M&S) offers several arguments for their Reasoning is Argumentation hypothesis: that the primary function of reasoning in human beings is to evaluate and produce arguments intended to persuade. While I believe that the Reasoning is Argumentation hypothesis is interesting and should be explored, my comments focus on one specific claim M&S make.

To show that the predictions of their hypothesis are borne out, M&S point to multiple psychological studies that purport to demonstrate that people are generally able to reason well. In this context, reasoning well consists in being able to understand, evaluate, and produce arguments. In particular, M&S claim that studies show that (1) people are good at evaluating both subarguments and overall arguments, and (2) people can generally produce good arguments in a debate-like setting.

In fact, the experimental evidence from a variety of studies, including surprisingly many that are cited favorably by M&S, suggests that people do not have these particular skills. One general challenge in drawing broader lessons from these data is that the skills of understanding, evaluating, and producing arguments are only vaguely defined in the literature, and the target article is no exception. A crucial distinction between argument content and argument structure is ignored: some studies focus solely on argument content, while others focus solely on argument structure. The extent to which either kind of study supports claims about participants' ability to reason well depends in an important way on this distinction.

The definition of an argument given by M&S is standard: A set of statements, one of which is the conclusion, which is supposed to be epistemically supported by the other statements, called the premises. The content of an argument refers to the propositions that are expressed by the premises and conclusion, whereas the structure of the argument refers to the way the premises work together to support the conclusion. Successfully understanding an argument consists in being able to identify both the content and the structure of the argument: the conclusion, the premises, and the particular way the premises support the conclusion (e.g., whether the premises are linked or convergent). Successfully evaluating an argument consists in being able to assess the content (i.e., determine whether the premises are true) and the structure (i.e., determine whether, assuming that they are true, the premises actually do support the conclusion). Finally, successfully constructing an argument consists in being able to supply true premises and specify how those premises work together to support the conclusion. Although structure and content are both relevant for all three activities, they are relevant in different ways, and so great care is required (but not always taken) in designing experimental tasks that appropriately test them.

Problematic empirical evidence arises for all three processes: argument understanding, argument evaluation, and argument production. For the first, there actually seems to be scant research in the area of argument understanding, and the little research that does exist is mixed. Some studies (e.g., Ricco 2003, cited by M&S) suggest that for simple arguments, adults can, when prompted, differentiate between linked and convergent arguments. Other studies, however, suggest that, even for simple arguments, untrained college students can identify the conclusion but, without prompting, are poor at identifying both the premises and the way the premises support the conclusion (Harrell 2006; 2008; 2011).

Second, argument evaluation is usually loosely, and only implicitly, defined as the ability either to identify reasoning fallacies or to differentiate reasonable arguments from unreasonable ones. The research on argument evaluation is mixed at best. In particular, a number of systematic biases have been found. When witnessing an argument from the outside, participants' judgment of the burden of proof depends on who speaks first (Bailenson & Rips 1996, cited by M&S), and participants routinely mistake innocuous repetition for circularity (Rips 2002, cited by M&S). When participating in an argument themselves, participants tend to reason less well than when witnessing one (Neuman et al. 2006; Thompson et al. 2005b; both cited by M&S).

Finally, in many of these studies, the researchers' perception that participants were able to “build complex arguments” (sect. 2.2, para. 3) is vague or ambiguous. Producing an argument is importantly different from, for example, mere fact gathering, but the research focuses almost exclusively on nothing more complex than listing reasons to believe. Even for this simple kind of argument production, studies suggest that both low- and high-cognitive-ability participants have difficulty producing evidence for a claim (Sá et al. 2005, cited by M&S).

Contrary to the claims by M&S, a wide literature supports the contention that the particular skills of understanding, evaluating, and producing arguments are generally poor in the population of people who have not had specific training, and that specific training is what improves these skills. Some studies, for example, show that students perform significantly better on reasoning tasks only when they have learned to identify premises and conclusions (Shaw 1996, cited by M&S) or have learned some standard argumentation norms (Weinstock et al. 2004, cited by M&S). M&S may be correct that some of these negative results arise because the stakes are too low, but many studies that show improvements from specific training occur in high-stakes environments like a college course (Harrell 2011; Twardy 2004; van Gelder 2005; van Gelder et al. 2004). This suggests that difficulty with understanding, evaluating, and producing arguments may be a deeper feature of our cognition.

References

Bailenson, J. N. & Rips, L. J. (1996) Informal reasoning and burden of proof. Applied Cognitive Psychology 10(7):S3–16.
Harrell, M. (2006) Diagrams that really are worth ten thousand words: Using argument diagrams to teach critical thinking skills. In: Proceedings of the 28th Annual Conference of the Cognitive Science Society, p. 2501. Erlbaum.
Harrell, M. (2008) No computer program required: Even pencil-and-paper argument mapping improves critical thinking skills. Teaching Philosophy 31:351–74.
Harrell, M. (2011) Argument diagramming and critical thinking in introductory philosophy. Higher Education Research and Development 30(3):371–85.
Neuman, Y., Weinstock, M. P. & Glasner, A. (2006) The effect of contextual factors on the judgment of informal reasoning fallacies. Quarterly Journal of Experimental Psychology, Section A: Human Experimental Psychology 59:411–25.
Ricco, R. B. (2003) The macrostructure of informal arguments: A proposed model and analysis. Quarterly Journal of Experimental Psychology, Section A: Human Experimental Psychology 56(6):1021–51.
Rips, L. J. (2002) Circular reasoning. Cognitive Science 26(6):767–95.
Sá, W. C., Kelley, C. N., Ho, C. & Stanovich, K. E. (2005) Thinking about personal theories: Individual differences in the coordination of theory and evidence. Personality and Individual Differences 38(5):1149–61.
Shaw, V. F. (1996) The cognitive processes in informal reasoning. Thinking & Reasoning 2:51–80.
Thompson, V. A., Evans, J. St. B. T. & Handley, S. J. (2005b) Persuading and dissuading by conditional argument. Journal of Memory and Language 53(2):238–57.
Twardy, C. R. (2004) Argument maps improve critical thinking. Teaching Philosophy 27:95–116.
Van Gelder, T. (2005) Teaching critical thinking: Some lessons from cognitive science. College Teaching 53:41–46.
Van Gelder, T., Bissett, M. & Cumming, G. (2004) Cultivating expertise in informal reasoning. Canadian Journal of Experimental Psychology 58:142–52.
Weinstock, M., Neuman, Y. & Tabak, I. (2004) Missing the point or missing the norms? Epistemological norms as predictors of students' ability to identify fallacious arguments. Contemporary Educational Psychology 29(1):77–94.