
Will human-like machines make human-like mistakes?

Published online by Cambridge University Press:  10 November 2017

Evan J. Livesey
Affiliation: School of Psychology, The University of Sydney, NSW 2006, Australia. evan.livesey@sydney.edu.au http://sydney.edu.au/science/people/evan.livesey.php

Micah B. Goldwater
Affiliation: School of Psychology, The University of Sydney, NSW 2006, Australia. micah.goldwater@sydney.edu.au http://sydney.edu.au/science/people/micah.goldwater.php

Ben Colagiuri
Affiliation: School of Psychology, The University of Sydney, NSW 2006, Australia. ben.colagiuri@sydney.edu.au http://sydney.edu.au/science/people/ben.colagiuri.php

Abstract

Although we agree with Lake et al.'s central argument, there are numerous flaws in the way people use causal models. Our models are often incorrect, resistant to correction, and applied inappropriately to new situations. These deficiencies are pervasive and have real-world consequences. Developers of machines with similar capacities should proceed with caution.

Type: Open Peer Commentary
Copyright © Cambridge University Press 2017

Lake et al. present a compelling case for why causal model-building is a key component of human learning, and we agree that beliefs about causal relations need to be captured by any convincingly human-like approach to artificial intelligence (AI). Knowledge of physical relations between objects and psychological relations between agents brings huge advantages. It provides a wealth of transferable information that allows humans to quickly apprehend a new situation. As such, combining the computational power of deep-neural networks with model-building capacities could indeed bring solutions to some of the world's most pressing problems. However, as advantageous as causal model-building might be, it also brings problems that can lead to flawed learning and reasoning. We therefore ask, would making machines “human-like” in their development of causal models also make those systems flawed in human-like ways?

Applying a causal model, especially one based on intuitive understanding, is essentially a gamble. Even though we often feel like we understand the physical and psychological relations surrounding us, our causal knowledge is almost always incomplete and sometimes completely wrong (Rozenblit & Keil 2002). These errors may be an inevitable part of the learning process by which models are updated based on experience. However, there are many examples in which incorrect causal models persist despite strong counterevidence. Take the supposed link between immunisation and autism. Despite the original study claiming a vaccine-autism connection, and its author, being widely and publicly discredited, many people continue to believe that immunisation increases the risk of autism, and their refusal to immunise has decreased the population's immunity to preventable diseases (Larson et al. 2011; Silverman & Hendrix 2015).

Failures to revise false causal models are far from rare. In fact, they seem to be an inherent part of human reasoning. Lewandowsky and colleagues (2012) identify numerous factors that increase resistance to belief revision, including several that are societal-level (e.g., biased exposure to information) or motivational (e.g., vested interest in retaining a false belief). Notwithstanding the significance of these factors (machines too can be influenced by biases in data availability and the motives of their human developers), it is noteworthy that people still show resistance to updating their beliefs even when these sources of bias are removed, especially when new information conflicts with the existing causal model (Taylor & Ahn 2012).

Flawed causal models can also be based on confusions that are less easily traced to specific falsehoods. Well-educated adults regularly confuse basic ontological categories (Chi et al. 1994): distinctions between mental, biological, and physical phenomena that are fundamental to our models of the world and typically acquired in childhood (Carey 2011). A common example is the belief that physical energy possesses psychological desires and intentions, a belief that even some physics students appear to endorse (Svedholm & Lindeman 2013). These errors affect both our causal beliefs and our choices. Ontological confusions have been linked to people's acceptance of alternative medicine, potentially leading an individual to choose an ineffective treatment over evidence-based treatments, sometimes at extreme personal risk (Lindeman 2011).

Causal models, especially those that affect beliefs about treatment efficacy, can even influence physiological responses to medical treatments. In the case of the placebo effect, beliefs regarding a treatment can modulate the treatment response, positively or negatively, independently of whether a genuine treatment is delivered (Colagiuri et al. 2015). The placebo effect is caused by a combination of expectations driven by causal beliefs and associative learning mechanisms that are more analogous to the operations of simple neural networks. Associative learning algorithms, of the kind often used in neural networks, are surprisingly susceptible to illusory correlations, for example, attributing efficacy to a treatment that actually has no effect on a medical outcome (Matute et al. 2015). Successfully integrating two different mechanisms for knowledge generation (neural networks and causal models), when each individually may be prone to bias, is an interesting problem, not unlike the challenge of understanding the nature of human learning. Higher-level beliefs interact in numerous ways with basic learning and memory mechanisms, and the precise nature and consequences of these interactions remain unknown (Thorwart & Livesey 2016).
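To make the illusory-correlation point concrete, the following is a minimal sketch of our own (not a model from the target article or the studies cited above) showing how a simple delta-rule learner assigns substantial strength to a treatment cue that, by construction, has zero contingency with the outcome. The parameter values and variable names are illustrative assumptions only.

    # Minimal delta-rule (associative) learner exposed to an ineffective treatment.
    # The outcome occurs with the same probability whether or not the cue is
    # present, so the true cue-outcome contingency is zero. Parameters are
    # illustrative assumptions, not values from any cited study.
    import random

    random.seed(0)

    alpha = 0.05       # learning rate
    p_outcome = 0.8    # outcome base rate, identical with and without the cue
    n_trials = 400
    v_cue = 0.0        # associative strength of the treatment cue

    for _ in range(n_trials):
        cue_present = random.random() < 0.5
        outcome = 1.0 if random.random() < p_outcome else 0.0
        if cue_present:
            # move the cue's prediction toward the outcome observed on cue trials
            v_cue += alpha * (outcome - v_cue)

    print("True cue-outcome contingency (by construction): 0.0")
    print(f"Learned strength of the ineffective cue: {v_cue:.2f}")  # approx. 0.8

Because the learner tracks the outcome rate in the cue's presence rather than the difference between outcome rates with and without the cue, a frequent outcome inflates the cue's apparent efficacy, mirroring the outcome-density bias in human causal judgement described by Matute et al. (2015).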

Even when humans hold an appropriate causal model, they often fail to use it. When facing a new problem, humans often erroneously draw upon models that share superficial properties with the current problem, rather than those that share key structural relations (Gick & Holyoak 1980). Even professional management consultants, whose job it is to use their prior experiences to help businesses solve novel problems, often fail to retrieve the prior experience most relevant to the new problem (Gentner et al. 2009). It is unclear whether an artificial system that possesses mental modelling capabilities would suffer the same limitations. On the one hand, these limitations may be caused by human processing constraints. For example, effective model-based decision-making is associated with capacities for learning and transferring abstract rules (Don et al. 2016) and for cognitive control (Otto et al. 2015), which may be far more powerful in future AI systems. On the other hand, the power of neural networks lies precisely in their ability to encode rich featural and contextual information. Given that experience with particular causal relations is likely to correlate with experience of more superficial features, a more powerful AI model generator may still suffer similar problems when faced with the difficult decision of which model to apply to a new situation, as the sketch below illustrates.
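As a rough illustration of this retrieval problem, the toy example below (our own construction, loosely based on Gick and Holyoak's radiation/fortress materials; all feature names are invented) shows that a retriever scoring stored cases by raw feature overlap will prefer a superficially similar case over the structural analogue whenever surface features outnumber relational ones.

    # Toy case retrieval by unweighted feature overlap. The new problem (the
    # radiation problem) shares its key relation with the fortress story but its
    # surface features with an unrelated medical story, so overlap-based
    # retrieval picks the superficial match. Feature names are invented.
    cases = {
        "fortress_story":  {"domain:military", "object:army", "object:roads",
                            "object:general", "setting:fortress",
                            "relation:converge_weak_forces"},
        "xray_scan_story": {"domain:medicine", "object:rays", "object:patient",
                            "setting:hospital", "object:scanner",
                            "relation:image_internal_structure"},
    }

    new_problem = {"domain:medicine", "object:tumour", "object:rays",
                   "object:patient", "setting:hospital",
                   "relation:converge_weak_forces"}

    def overlap(a, b):
        # crude similarity: count of shared features, with relations weighted
        # no more heavily than surface features
        return len(a & b)

    best = max(cases, key=lambda name: overlap(new_problem, cases[name]))
    print(best)  # "xray_scan_story": the surface match beats the structural analogue

Weighting relational features more heavily, or learning which features predict a case's relevance, would change the outcome; deciding how to do so is exactly the problem a human-like model generator would face.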

Would human-like AI suffer human-like flaws, whereby recalcitrant causal models lead to persistence with poor solutions, or novel problems activate inappropriate causal models? Developers of AI systems should proceed with caution: these properties of human causal modelling produce pervasive biases and may be symptomatic of the use of mental models themselves rather than of limitations on human cognition. Monitoring the degree to which AI systems show the same flaws as humans will be invaluable for shedding light on why human cognition is the way it is and, it is hoped, will offer some solutions to help us change our minds when we desperately need to.

References

Carey, S. (2011) The origin of concepts: A précis. Behavioral and Brain Sciences 34(3):113–62.
Chi, M. T., Slotta, J. D. & De Leeuw, N. (1994) From things to processes: A theory of conceptual change for learning science concepts. Learning and Instruction 4(1):27–43.
Colagiuri, B., Schenk, L. A., Kessler, M. D., Dorsey, S. G. & Colloca, L. (2015) The placebo effect: From concepts to genes. Neuroscience 307:171–90.
Don, H. J., Goldwater, M. B., Otto, A. R. & Livesey, E. J. (2016) Rule abstraction, model-based choice, and cognitive reflection. Psychonomic Bulletin & Review 23(5):1615–23.
Gentner, D., Loewenstein, J., Thompson, L. & Forbus, K. D. (2009) Reviving inert knowledge: Analogical abstraction supports relational retrieval of past events. Cognitive Science 33(8):1343–82.
Gick, M. L. & Holyoak, K. J. (1980) Analogical problem solving. Cognitive Psychology 12(3):306–55.
Larson, H. J., Cooper, L. Z., Eskola, J., Katz, S. L. & Ratzan, S. (2011) Addressing the vaccine confidence gap. The Lancet 378(9790):526–35.
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N. & Cook, J. (2012) Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest 13(3):106–31.
Lindeman, M. (2011) Biases in intuitive reasoning and belief in complementary and alternative medicine. Psychology and Health 26(3):371–82.
Matute, H., Blanco, F., Yarritu, I., Díaz-Lago, M., Vadillo, M. A. & Barberia, I. (2015) Illusions of causality: How they bias our everyday thinking and how they could be reduced. Frontiers in Psychology 6:888. doi: 10.3389/fpsyg.2015.00888.
Otto, A. R., Skatova, A., Madlon-Kay, S. & Daw, N. D. (2015) Cognitive control predicts use of model-based reinforcement learning. Journal of Cognitive Neuroscience 27:319–33.
Rozenblit, L. & Keil, F. (2002) The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science 26(5):521–62.
Silverman, R. D. & Hendrix, K. S. (2015) Point: Should childhood vaccination against measles be a mandatory requirement for attending school? Yes. CHEST Journal 148(4):852–54.
Svedholm, A. M. & Lindeman, M. (2013) Healing, mental energy in the physics classroom: Energy conceptions and trust in complementary and alternative medicine in grade 10–12 students. Science & Education 22(3):677–94.
Taylor, E. G. & Ahn, W.-K. (2012) Causal imprinting in causal structure learning. Cognitive Psychology 65:381–413.
Thorwart, A. & Livesey, E. J. (2016) Three ways that non-associative knowledge may affect associative learning processes. Frontiers in Psychology 7:2024. doi: 10.3389/fpsyg.2016.02024.