Psychologists studying the relation between counterfactual and causal reasoning have long asked: Why, despite their similarity, do people give different answers to counterfactual versus causal questions? (See Spellman & Mandel 1999 for history.) For example, when completing "if only …" statements about Mr. Jones who was hit by a drunk driver while taking an unusual route home, most people focus on the unusual route, yet they identify the drunk driver as the cause of the accident (Mandel & Lehman 1996).
In the chapter "Causal Relations and Counterfactuals," Byrne (2005) argues that people provide different answers because they focus on different things: in counterfactual reasoning they focus on "enabling" conditions, whereas in causal reasoning they focus on "strong causes." Imagine a dry forest floor and then a lightning strike resulting in a huge forest fire. People are likely to say, "if only there were not so many dry leaves," and "the lightning caused the fire," but not "the dry leaves caused the fire." Byrne argues that strong causes (lightning) are consistent with two possibilities: (1) lightning and fire, and (2) no lightning and no fire – however, people mentally represent only the first possibility. Enabling conditions (dry leaves) are consistent with three possibilities: (1) dry leaves and fire, (2) no dry leaves and no fire, and (3) dry leaves and no fire – however, people mentally represent two possibilities (or only the first, but the second comes "readily"). People, Byrne argues, use those representations to distinguish causes from enablers and, as a result, answer counterfactual questions with enablers and causal questions with strong causes.
We have trouble with some of the assumptions and assumed consequences of that characterization on both theoretical and empirical grounds. First, we believe that the difference between enablers and causes is psychological, not logical. Second, we do not believe that there is a strict "dichotomy between the focus of counterfactual and causal thoughts" (Byrne 2005, p. 100). Third, Byrne argues that as a result of the difference in representation, it is easier for people to generate causes than counterfactuals; we disagree.
Enablers versus causes
At first the dried-leaves-and-lightning example seems obvious: of course dried leaves constitute an enabler, whereas lightning is a cause. But on deeper reflection the logic is not so clear. Dried leaves would not lead to a conflagration without lightning; however, neither would lightning without dried leaves. Their logical status is equivalent: each is necessary but neither is sufficient.
Similarly, consider a lightning-torn stretch of wetlands. Despite countless lightning strikes, there was never a fire until the year's masses of dry leaves blew in. Now it seems natural to argue that leaves caused the fire, whereas lightning was an enabler. Again, calling one a cause and one an enabler is a psychological, not a logical, judgment, and to explain differences in counterfactual and causal judgments by saying that people represent causes and enablers differently is to finesse the importance of various factors (e.g., context) that get people to treat logically equivalent events as psychologically different. (See Einhorn & Hogarth 1986 and McGill 1989 for other context effects.) It is unclear how the mental representation of possibilities accounts for such context effects and informs people about which is the cause and which is the enabler; it seems that people must already know which is which based on the context before they represent the events. Byrne does mention alternative information sources (covariation, mechanisms, abnormality), but her argument implies that the mental representation of possibilities provides a better account of how people distinguish strong causes from enablers.
Not quite a “dichotomy”
Second, it is inaccurate to characterize people's answers to causal and counterfactual questions as a strict "dichotomy." In some studies, the most prevalent answers are the same (e.g., Wells & Gavanski 1989, Experiment 1). Plus, differences in how counterfactual and causal reasoning are measured may contribute to belief in the dichotomy. Our participants read about a woman driving home from work. She stops at a red light and fiddles with the radio so that when the light turns green she hesitates before accelerating, delaying the cars behind her. Last in line is a school bus, which enters the intersection just as an irate man drives through the red light from the other direction, hitting the bus and injuring many children.
Participants who listed counterfactuals focused on the hesitating woman; participants who rated causes focused on the irate man. These results replicate the "dichotomy." However, there is a confound: researchers usually measure counterfactuals with listings but causes with ratings. What if both are measured with ratings? Other participants saw 12 story events previously listed by earlier participants and rated each event either on whether they agreed it was an "undoing counterfactual" or on whether it was causal. The irate man was rated as both most causal and most changeable (Spellman & Ndiaye 2007).
Thus, counterfactual and causal judgments are far from dichotomous; rather, depending on how questions are asked and answers are measured, they may focus on the same events.
Generating causes and counterfactuals
Byrne argues that because strong causes are represented by one possibility and enablers by two, and because "it is easier to think about one possibility than about several" (Byrne 2005, p. 119), it should be easier for people to generate causes than counterfactuals. McEleney and Byrne (2000) had participants imagine they had moved to a new town to start a new job and read about various events that happened to them. When asked what they would have written in their diaries, participants spontaneously generated more causal than counterfactual thoughts. In contrast, our participants read about a man who had been abused by his father, joined the army, learned to use explosives, then blew up his father's company's warehouse. Participants listed fewer causes (M=5.7) than counterfactuals (M=7.7) (Spellman & Ndiaye 2007). We have no problem distinguishing the studies – Byrne's answers were spontaneous, whereas ours were evoked; Byrne's story was about the participants themselves, whereas ours was about someone else – yet Byrne's mental models approach cannot account for the difference in results.
In summary, we believe that the present explanation of the differences between causal and counterfactual judgments suffers on both theoretical and empirical grounds. We prefer to think that both the similarities and differences between those judgments can be explained by the idea that counterfactuals provide input into causal judgments (Spellman et al. 2005). But that argument is best left for another day.