
On the relation between counterfactual and causal reasoning

Published online by Cambridge University Press: 06 March 2008

Barbara A. Spellman
Affiliation:
Department of Psychology, University of Virginia, Charlottesville, VA 22904-4400. Spellman@virginia.edu http://people.virginia.edu/~bas6g/
Dieynaba G. Ndiaye
Affiliation:
Department of Psychology, University of Virginia, Charlottesville, VA 22904-4400. Dgn2h@virginia.edu

Abstract

We critique the distinction Byrne makes between strong causes and enabling conditions, and its implications, on both theoretical and empirical grounds. First, we believe that the difference is psychological, not logical. Second, we disagree that there is a strict “dichotomy between the focus of counterfactual and causal thoughts.” Third, we disagree that it is easier for people to generate causes than counterfactuals.

Type: Open Peer Commentary
Copyright © Cambridge University Press 2008

Psychologists studying the relation between counterfactual and causal reasoning have long asked: Why, despite their similarity, do people give different answers to counterfactual versus causal questions? (See Spellman & Mandel [1999] for history.) For example, when completing "if only …" statements about Mr. Jones who was hit by a drunk driver while taking an unusual route home, most people focus on the unusual route, yet they identify the drunk driver as the cause of the accident (Mandel & Lehman 1996).

In the chapter "Causal Relations and Counterfactuals," Byrne (2005) argues that people provide different answers because they focus on different things: in counterfactual reasoning they focus on "enabling" conditions, whereas in causal reasoning they focus on "strong causes." Imagine a dry forest floor and then a lightning strike resulting in a huge forest fire. People are likely to say, "if only there were not so many dry leaves," and "the lightning caused the fire," but not "the dry leaves caused the fire." Byrne argues that strong causes (lightning) are consistent with two possibilities: (1) lightning and fire, and (2) no lightning and no fire – however, people mentally represent only the first possibility. Enabling conditions (dry leaves) are consistent with three possibilities: (1) dry leaves and fire, (2) no dry leaves and no fire, and (3) dry leaves and no fire – however, people mentally represent two possibilities (or only the first, but the second comes "readily"). People, Byrne argues, use those representations to distinguish causes from enablers and, as a result, answer counterfactual questions with enablers and causal questions with strong causes.
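Byrne's possibility sets can be summarized schematically; the propositional gloss below is ours, not Byrne's, with L = lightning, D = dry leaves, and F = fire:

\[
\begin{array}{ll}
\text{Strong cause } (L \leftrightarrow F): & \{(L, F),\ (\neg L, \neg F)\}\\
\text{Enabler } (F \rightarrow D): & \{(D, F),\ (\neg D, \neg F),\ (D, \neg F)\}
\end{array}
\]

On this gloss, the strong cause is treated as necessary and sufficient for the effect, whereas the enabler is treated as necessary but not sufficient; as we argue below, that asymmetry is a psychological attribution rather than a logical fact.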

We have trouble with some of the assumptions and assumed consequences of that characterization on both theoretical and empirical grounds. First, we believe that the difference between enablers and causes is psychological, not logical. Second, we do not believe that there is a strict "dichotomy between the focus of counterfactual and causal thoughts" (Byrne 2005, p. 100). Third, Byrne argues that as a result of the difference in representation, it is easier for people to generate causes than counterfactuals; we disagree.

Enablers versus causes

At first the dried-leaves-and-lightning example seems obvious: of course dried leaves constitute an enabler, whereas lightning is a cause. But on deeper reflection the logic is not so clear. Dried leaves would not lead to a conflagration without lightning; however, neither would lightning without dried leaves. Their logical status is equivalent: each is necessary but neither is sufficient.

Similarly, consider a lightning-torn stretch of wetlands. Despite countless lightning strikes, there was never a fire until the year's masses of dry leaves blew in. Now it seems natural to argue that leaves caused the fire, whereas lightning was an enabler. Again, calling one a cause and one an enabler is a psychological, not a logical, judgment, and to explain differences in counterfactual and causal judgments by saying that people represent causes and enablers differently is to finesse the importance of various factors (e.g., context) that get people to treat logically equivalent events as psychologically different. (See Einhorn & Hogarth 1986 and McGill 1989 for other context effects.) It is unclear how the mental representation of possibilities accounts for such context effects and informs people about which is the cause and which is the enabler; it seems that people must already know which is which based on the context before they represent the events. Byrne does mention alternative information sources (covariation, mechanisms, abnormality), but her argument implies that the mental representation of possibilities provides a better account of how people distinguish strong causes from enablers.

Not quite a “dichotomy”

Second, it is inaccurate to characterize people's answers to causal and counterfactual questions as a strict "dichotomy." In some studies, the most prevalent answers are the same (e.g., Wells & Gavanski 1989, Experiment 1). Plus, differences in how counterfactual and causal reasoning are measured may contribute to belief in the dichotomy. Our participants read about a woman driving home from work. She stops at a red light and fiddles with the radio, so that when the light turns green she hesitates before accelerating, delaying the cars behind her. Last in line is a school bus, which enters the intersection just as an irate man drives through the red light from the other direction, hitting the bus and injuring many children.

Participants who listed counterfactuals focused on the hesitating woman; participants who rated causes focused on the irate man. These results replicate the "dichotomy." However, there is a confound: researchers usually measure counterfactuals with listings but causes with ratings. What if both are measured with ratings? Other participants saw 12 story events previously listed by earlier participants and rated each event either on whether they agreed it was an "undoing counterfactual" or on whether it was causal. The irate man was rated as both most causal and most changeable (Spellman & Ndiaye 2007).

Thus, counterfactual and causal judgments are far from dichotomous; rather, depending on how questions are asked and answers are measured, they may focus on the same events.

Generating causes and counterfactuals

Byrne argues that because strong causes are represented by one possibility and enablers by two, and because "it is easier to think about one possibility than about several" (Byrne 2005, p. 119), it should be easier for people to generate causes than counterfactuals. McEleney and Byrne (2000) had participants imagine they had moved to a new town to start a new job and read about various events that happened to them. When asked what they would have written in their diaries, participants spontaneously generated more causal than counterfactual thoughts. In contrast, our participants read about a man who had been abused by his father, joined the army, learned to use explosives, and then blew up his father's company's warehouse. Participants listed fewer causes (M = 5.7) than counterfactuals (M = 7.7) (Spellman & Ndiaye 2007). We have no problem distinguishing the studies – Byrne's answers were spontaneous, whereas ours were evoked; Byrne's story was about the participants themselves, whereas ours was about someone else – yet Byrne's models approach cannot account for the difference in results.

In summary, we believe that the present explanation of the differences between causal and counterfactual judgments suffers on both theoretical and empirical grounds. We prefer to think that both the similarities and differences between those judgments can be explained by the idea that counterfactuals provide input into causal judgments (Spellman et al. 2005). But that argument is best left for another day.

References

Byrne, R. M. J. (2005) The rational imagination: How people create alternatives to reality. MIT Press.
Einhorn, H. J. & Hogarth, R. M. (1986) Judging probable cause. Psychological Bulletin 99:3–19.
Mandel, D. R. & Lehman, D. R. (1996) Counterfactual thinking and ascriptions of cause and preventability. Journal of Personality and Social Psychology 71:450–63.
McEleney, A. & Byrne, R. (2000) Counterfactual thinking and causal explanation. In: Mental models in reasoning, ed. Garcia-Madruga, J. A., Carriedo, N. & Gonzalez-Labra, M. J., pp. 301–14. UNED (Universidad Nacional de Educación a Distancia).
McGill, A. L. (1989) Context effects in judgments of causation. Journal of Personality and Social Psychology 57:189–200.
Spellman, B. A., Kincannon, A. & Stose, S. (2005) The relation between counterfactual and causal reasoning. In: The psychology of counterfactual thinking, ed. Mandel, D. R., Hilton, D. J. & Catellani, P., pp. 28–43. Routledge Research.
Spellman, B. A. & Mandel, D. R. (1999) When possibility informs reality: Counterfactual thinking as a cue to causality. Current Directions in Psychological Science 8:120–23.
Spellman, B. A. & Ndiaye, D. G. (2007) The (dis)similarity between counterfactual and causal judgments: The importance of underlying information, availability, and measurement. Unpublished manuscript.
Wells, G. L. & Gavanski, I. (1989) Mental simulation of causality. Journal of Personality and Social Psychology 56:161–69.