
What we imagine versus how we imagine, and a problem for explaining counterfactual thoughts with causal ones

Published online by Cambridge University Press:  06 March 2008

Winston Chang and Patricia Herrmann
Affiliation:
Department of Psychology, Northwestern University, Evanston, IL 60208. winston-chang@northwestern.edu; p-herrmann@northwestern.edu; http://www.wcas.northwestern.edu/psych/

Abstract

Causal and counterfactual thoughts are bound together in Byrne's theory of human imagination. We think there are two issues in her theory that deserve clarification. First, Byrne describes which counterfactual possibilities we think of, but she leaves unexplained the mechanisms by which we generate these possibilities. Second, her exploration of “strong causes” and enablers gives two different predictions of which counterfactuals we think of in causal scenarios. On one account, we think of the counterfactuals which we have control over. On the other, which counterfactuals we think of depends on whether something is a strong cause or an enabler. Although these two accounts sometimes give the same predictions, we present cases in which they differ, and we would like to see Byrne's theory provide a way of reconciling these differences.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2008

We offer two criticisms of Ruth Byrne's treatment of causal and counterfactual thinking in The Rational Imagination (Byrne 2005). The first is that she does not explain how we mentally generate some possibilities and avoid others. The second is that there are contradictory explanations in her discussion of “strong causes” and enablers.

Before diving into the discussion, it will be helpful to lay out some of Byrne's terminology. True possibilities are those which are consistent with a set of premises; generally, the premises are a person's beliefs about the world. Therefore, when speaking of future events, a true possibility is one that could happen, and a false possibility is one that could not. When speaking of past events, a true possibility is one that actually happened, and a false one is one that did not. A counterfactual possibility is one that once was true but is now false.

For the purpose of this discussion, we draw another distinction, which we will call correct and incorrect possibilities. Correct possibilities are those that are true or were true in the past (this includes counterfactuals). Incorrect possibilities are those that were never true. According to Byrne's theory, people think of correct possibilities – the true and the counterfactual – but we do not think of incorrect possibilities.

The first issue is that Byrne provides no explanation for how we generate correct possibilities while avoiding incorrect ones. Simple cases where subjects are given if–then statements can be handled by an algorithm that generates three of the four possibilities. In most cases, however, the problem is more complicated: we infer possibilities from our understanding of how the world works. In Chapter 5, Byrne describes counterfactual thoughts expressed in the media in the aftermath of the September 11, 2001, attacks. Many start like this: “If only the hijackers had been prevented from getting on board …” Here is one that people tend not to think of: “If only the Al-Qaeda network did not exist …” Byrne describes which counterfactuals we think of, but she does not explain how we connect these counterfactual antecedents to the consequent: “… then the attacks would not have occurred.” Nor does she explain how we avoid incorrect counterfactual antecedents such as, “If only there were more police at the World Trade Center …” or, “If only there were purple elephants …”

Byrne catalogs which counterfactual thoughts we have and describes a general structure to them, but does not explain how we generate them. This is analogous to a distinction made in biology: cataloging features of animal species (some finches have long, narrow beaks and others have short, stout beaks) versus explaining how those features came about (natural selection). The former is important, but the latter is the theoretical foundation – and it is missing here. What are the mechanisms underlying counterfactual thought? An example answer in the causal domain is that we use Bayes' nets to generate counterfactuals (Pearl 2000). (Bayes' nets, however, would not make the same predictions as Byrne's theory without a lot of extra machinery.) It might be too much to expect Byrne to commit to a particular theory of underlying mechanism, but we would have liked this book to give some hints here. After all, on page 1, she states, “This book is about how people imagine alternatives to reality,” not just which alternatives they imagine.
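To make the mechanistic proposal concrete: in Pearl's framework, a counterfactual is computed from a structural causal model by a three-step recipe of abduction (fix the actual facts), action (intervene on one variable), and prediction (rerun the structural equations). The toy model below is our own deterministic illustration of that recipe, not code from Pearl (2000) or a proposal of Byrne's; the variable names are a drastic simplification of the September 11 example.

```python
# Toy structural causal model (our own illustration) for the counterfactual
# "if the hijackers had been prevented from boarding, the attacks would
# not have occurred."

def attack_occurs(boarding_prevented, network_exists):
    """Structural equation: the attack occurs only if the network
    exists and boarding was not prevented."""
    return network_exists and not boarding_prevented

# Step 1 (abduction): record the actual world.
actual = {"boarding_prevented": False, "network_exists": True}
assert attack_occurs(**actual)  # in the actual world, the attack occurred

# Step 2 (action): intervene on one antecedent, holding everything else fixed.
counterfactual = dict(actual, boarding_prevented=True)

# Step 3 (prediction): rerun the structural equations under the intervention.
print(attack_occurs(**counterfactual))  # prints False
```

Note that the machinery says nothing about *which* variable to intervene on; it will just as happily delete the network's existence as the boarding, which is precisely the selection problem left unexplained.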

The second issue concerns Byrne's discussion of “strong causes” and enablers. Imagine that you take a new route on your drive home, and on your way a careless driver swerves into your path, resulting in a crash. People tend to think, “If only I had driven home by a different route ….” Byrne explains this as follows:

People mentally represent the strong causal relation by thinking initially about just one possibility, the co-occurrence of the cause and its outcome, whereas they mentally represent the enabling relation by thinking about the enabler and its outcome, and they can readily think about the absence of both. Accordingly most people focus on enablers (and disablers) in their thoughts about what might have been because the second possibility provides a ready-made counterfactual alternative. (Byrne 2005, pp. 118–19)

In the next paragraph, she writes:

Causes occur – lightning strikes, careless drivers swerve, and terrorists formulate campaigns of destruction – and wishing the cause did not occur may be a remote and implausible hope. But enablers can be mentally deleted in an imagined alternative: dry leaves removed, alternative routes home taken, airport security improved. Wishing that whatever could have been done had been done to prevent the outcome or promote a better one may be a plausible alternative. (p. 119)

These two paragraphs offer two different explanations:

  1. Strong causes seem immutable because we think of only one possibility, while enablers seem mutable because we think of two possibilities. (First paragraph quoted above.)

  2. We think of alternatives when we have control over them. (Second paragraph quoted above.)

These explanations happen to agree in Byrne's examples, but there are many cases where they pull apart. Byrne presumes that we generally cannot control strong causes whereas we can control enablers, but this does not seem right. If it were, then we would never view our actions as strong causes when other possibilities are readily available; for example, putting sandals on your feet instead of shoes would merely enable the sandals to end up there. Byrne could take a hard line and say that all controllable actions are mere enablers, but this contradicts ordinary usage; it would transform her theory into a normative one with respect to causes and enablers.

What happens if the strong cause is within one's control and the enabler is not? Imagine that you drive drunk and crash into someone who is taking a new route home. Here the strong cause is your driving drunk, and the enabler is the person taking the new route home. According to Byrne's first explanation, you would not think of an alternative to the strong cause, but you would think of an alternative to the enabler, so you would think, “If only he hadn't taken a new route home.” But according to the second explanation, you would think of alternatives that you have control over, so you would think, “If only I hadn't driven drunk.” Which explanation is correct? In the stories Byrne uses, the two explanations happen to make the same predictions, so we cannot tell. We would like to know what happens when the explanations disagree, as they do here. We suspect that, for counterfactual thoughts, controllability is the more important factor.

References

Byrne, R. M. J. (2005) The rational imagination: How people create alternatives to reality. MIT Press.
Pearl, J. (2000) Causality: Models, reasoning, and inference. Cambridge University Press.