
Counterfactuals in science and engineering

Published online by Cambridge University Press:  06 March 2008

Sanjay Chandrasekharan
Affiliation:
School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280. sanjayan@cc.gatech.edu http://www-static.cc.gatech.edu/~sanjayan/
Nancy J. Nersessian
Affiliation:
School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280. nancyn@cc.gatech.edu http://www.cc.gatech.edu/~nersessian/

Abstract

The notion of mutation is applicable to the generation of novel designs and solutions in engineering and science. This suggests that engineers and scientists have to work against the biases identified in counterfactual thinking. Therefore, imagination appears a lot less rational than claimed in the target article.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2008

Our research focus is the generation of novel solutions and discoveries in engineering and science. The generation of counterfactual scenarios is central to these areas, and we analyze this process as involving the simulation of mental models, often in conjunction with built models (simulative model-based reasoning) (see Nersessian 2002; in press). Kahneman and Tversky (1982) proposed simulation as the mechanism underlying counterfactual thinking, but Byrne's account in The Rational Imagination (Byrne 2005) does not elaborate on this aspect. It remains unclear how simulation in counterfactual thinking and simulation in science and engineering are related.

One possible way simulation in the two domains could be related is via the processing of logical implication, which involves imagining counterfactual scenarios. This use of counterfactuals in logical processing seems to show that (even) logic needs imagination, rather than that imagination is rational. The latter claim appears justified only because Byrne constrains her definition of counterfactual thinking so tightly, staying close to sentence-level processing. This definition covers a much narrower space than the generation of creative problem solutions in engineering and science. Nevertheless, the notion of mutating a factual scenario seems applicable to both domains, so it may be fruitful to ask the following question: Do the factors that influence mutation in everyday situations also influence mutations in engineering and science? Our cautious answer is that they do, and this complicates the notion of a rational imagination.

Consider the following design problem: How can a cell phone understand context? Essentially, the phone should shift to vibration mode, or forward calls to voice mail, when the user enters places like libraries and classrooms. The phone should also block calls when the user is driving, but should allow calls if she is a passenger. A much-discussed solution uses the Global Positioning System (GPS) to discover the coordinates of the cell phone, but it faces the thorny problem of inferring context from coordinates. A simple solution would be adding small policy-announcing devices, installed by buildings and by carmakers/owners, which “instruct” cell phones to shut up. We wondered why such devices don't exist, even though many cities have introduced fines for using cell phones while driving, and some charge heavy fines for phones ringing in opera halls. Note that such spaces usually have announcing devices for humans; for instance, big signs saying “Do not use cell phone.” There are other such design problems where similar environment-based solutions have been ignored. We hypothesized that this is because adding epistemic structures (labels, color codes, shelf talkers, etc.) to the world is a readily available design strategy for humans, but it is less available for artifacts. To test this hypothesis, we developed problem scenarios involving humans and artifacts (cell phones, robots), where participants were asked to propose solutions to a design problem. We used two groups of student participants, one general and the other specialist (master's level engineering). Both groups performed at the same level, proposing environment-based solutions for problems involving humans, but artifact-based solutions for the cell phones and robots (see Chandrasekharan 2005).
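The policy-announcing idea can be made concrete with a minimal sketch. This is our illustration, not a design from the commentary: the names (PolicyBeacon, Phone) and policy strings are hypothetical, and a real system would need a broadcast medium such as Bluetooth advertising. The point is only that the environment announces a policy and the phone merely obeys, so no inference from coordinates is required.

```python
class PolicyBeacon:
    """Hypothetical device installed by a building or vehicle owner.

    It simply announces a local policy string to any phone in range.
    """

    def __init__(self, policy):
        self.policy = policy  # e.g., "silence" or "forward_to_voicemail"

    def broadcast(self):
        return self.policy


class Phone:
    """A phone that obeys announced policies instead of inferring context."""

    def __init__(self):
        self.mode = "ring"  # default behavior with no beacon nearby

    def on_beacon(self, beacon):
        # Map the announced policy onto a ringer mode.
        policy = beacon.broadcast()
        if policy == "silence":
            self.mode = "vibrate"          # library, classroom, opera hall
        elif policy == "forward_to_voicemail":
            self.mode = "voicemail"        # driver's seat of a car
        else:
            self.mode = "ring"             # unknown policy: fall back


phone = Phone()
phone.on_beacon(PolicyBeacon("silence"))
print(phone.mode)  # vibrate
phone.on_beacon(PolicyBeacon("forward_to_voicemail"))
print(phone.mode)  # voicemail
```

The sketch shows why the environment-based solution is computationally trivial compared with inferring context from GPS coordinates: all the contextual knowledge lives in the beacon placed by the people who know the context.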

Based on counterfactual thinking research, this bias against mutating the environment in the artifact scenarios could arise for two related reasons. First, participants may perceive the environment as more controllable in the human problems, and the artifacts as more controllable in the scenarios that focus on them. Second, there could be an actor/observer difference (Kahneman & Miller 1986): in the human problems, participants take an actor perspective and “simulate” the humans, which leads to treating the environment as controllable; for artifacts, they take an observer perspective, which leads to a fixation on the artifacts and their possible mutations. To test this hypothesis, we gave the artifact scenarios to another group of participants and asked them to think of themselves as cell phones/robots (i.e., to simulate the artifacts) while solving the problems. The number of environment-based solutions increased significantly in this case.

This seems to suggest that the biases in everyday counterfactual thinking, such as a preference for changing controllable events and actions, also operate in design thinking. A similar bias could exist in the case of science. For instance, think of a biochemist trying to block the expression of a complex protein. Will she prefer to manipulate the actions she observes (e.g., use an antagonist to block a binding), or the inactions (e.g., use an agonist to activate another action in the cell), or a third option involving neither, or some combination of these? In theoretical research, are foundational assumptions (such as the currently questioned Weismann barrier in genetics) treated as immutable because they are perceived as anchors, or as similar to forbidden possibilities? Closer to our research, do clinical researchers preferentially generate pharmacological solutions, rather than biomedical engineering solutions, because the former are more available? How can the latter be made more available? Such questions have not been raised in science and engineering, even though these fields deal with counterfactual scenarios on an everyday basis. We believe applying the insights from counterfactual thinking research to these areas would prove valuable.

A more general question raised by this line of inquiry is whether mutation is a general process or a specialist one. Chapter 8 of the book, and the claim that logical implications are processed using counterfactuals, seem to suggest it is general. This raises the possibility that all counterfactual scenarios, including non-rational ones such as hallucinations, are generated using mutation. So the biases underlying everyday counterfactuals would be involved in these as well. This would mean that imagination is not rational as claimed, but rather, that it is a general mechanism, similar to, say, recursion, which is used in all situations. Further, this general status of mutation raises the possibility that in science and engineering, good solutions arise because scientists and engineers have developed ways of overcoming these biases (such as building simulations to explore the parameter space exhaustively, or using biological design as inspiration for engineering design). In other words, science and engineering, which stand right beside deductive logic as paragons of rationality, have to work against the biases in counterfactual thinking. This would make imagination still less rational.

References

Byrne, R. M. J. (2005) The rational imagination: How people create alternatives to reality. MIT Press.
Chandrasekharan, S. (2005) Epistemic structure: An inquiry into how agents change the world for cognitive congeniality. Unpublished doctoral dissertation, Carleton University, Ottawa, Canada. Available as Carleton University Cognitive Science Technical Report, at: http://www.carleton.ca/iis/TechReports/files/2005-02.pdf
Kahneman, D. & Miller, D. (1986) Norm theory: Comparing reality to its alternatives. Psychological Review 93:136–53.
Kahneman, D. & Tversky, A. (1982) The simulation heuristic. In: Judgment under uncertainty: Heuristics and biases, ed. Kahneman, D., Slovic, P. & Tversky, A., pp. 201–208. Cambridge University Press.
Nersessian, N. J. (2002) The cognitive basis of model-based reasoning in science. In: The cognitive basis of science, ed. Carruthers, P., Stich, S. & Siegal, M. Cambridge University Press.
Nersessian, N. J. (in press) Creating scientific concepts. MIT Press.