
Theories or fragments?

Published online by Cambridge University Press:  10 November 2017

Nick Chater
Affiliation:
Behavioural Science Group, Warwick Business School, University of Warwick, Coventry CV4 7AL, United Kingdom. Nick.Chater@wbs.ac.uk; http://www.wbs.ac.uk/about/person/nick-chater/
Mike Oaksford
Affiliation:
Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom. m.oaksford@bbk.ac.uk; http://www.bbk.ac.uk/psychology/our-staff/mike-oaksford

Abstract

Lake et al. argue persuasively that modelling human-like intelligence requires flexible, compositional representations in order to embody world knowledge. But human knowledge is too sparse and self-contradictory to be embedded in "intuitive theories." We argue, instead, that knowledge is grounded in exemplar-based learning, combined with highly flexible generalization, a viewpoint compatible both with non-parametric Bayesian modelling and with sub-symbolic methods such as neural networks.

Type: Open Peer Commentary

Copyright © Cambridge University Press 2017

Lake et al. make a powerful case that modelling human-like intelligence depends on highly flexible, compositional representations to embody world knowledge. But will such knowledge really be embedded in "intuitive theories" of physics or psychology? This commentary argues that there is a paradox at the heart of the "intuitive theory" viewpoint, one that has bedevilled analytic philosophy and symbolic artificial intelligence: human knowledge is both (1) extremely sparse and (2) self-contradictory (e.g., Oaksford & Chater 1991).

The sparseness of intuitive knowledge is exemplified in Rozenblit and Keil's (2002) discussion of the "illusion of explanatory depth." We have the feeling that we understand how a crossbow works, how a fridge stays cold, or how electricity flows around the house. Yet, when pressed, few of us can provide much more than sketchy and incoherent fragments of explanation. Therefore, our causal models of the physical world appear shallow. The sparseness of intuitive psychology seems at least as striking. Indeed, our explanations of our own and others' behavior often appear to be highly ad hoc (Nisbett & Ross 1980).

Moreover, our physical and psychological intuitions are also self-contradictory. Work on the foundations of physics and rational choice theory has repeatedly shown how remarkably few axioms (e.g., the laws of thermodynamics, the axioms of decision theory) completely fix a considerable body of theory. Yet our intuitions about heat and work, or probability and utility, are vastly richer and more amorphous, and cannot be captured in any consistent system (e.g., some of our intuitions may imply the axioms, but others will contradict them). Indeed, contradictions can be evident even in apparently innocuous mathematical or logical assumptions, as illustrated by Russell's paradox, which unexpectedly exposed a contradiction in Frege's attempted logical foundation for mathematics (Irvine & Deutsch 2016).
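To see how short the path from an innocuous assumption to outright contradiction can be, here is the standard one-line derivation of Russell's paradox from the naive comprehension principle (the set-builder notation is the usual textbook rendering, not anything specific to Irvine & Deutsch's presentation):

```latex
% Naive comprehension licenses the set of all sets that are not
% members of themselves:
\[
  R = \{\, x \mid x \notin x \,\}
\]
% Asking whether R is a member of itself then yields a contradiction
% whichever answer we give:
\[
  R \in R \iff R \notin R.
\]
```

The premise looks as harmless as any everyday intuition about collections, which is precisely the point: assumptions that each seem innocuous can be jointly inconsistent.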

The sparse and contradictory nature of our intuitions explains why explicit theorizing requires continually ironing out contradictions, making vague concepts precise, and radically distorting or replacing existing concepts. And the lesson of two and a half millennia of philosophy is arguably that clarifying even the most basic concepts, such as "object" or "the good," can be entirely intractable, a lesson re-learned in symbolic artificial intelligence. In any case, the raw materials for this endeavor, our disparate intuitions, may not be properly viewed as organized into theories at all.

If this is so, how do we interact so successfully with the physical and social worlds? We have enough experience of crossbows, fridges, and electricity, whether gained directly or through observation or instruction, to interact with them in familiar ways. Indeed, our ability to make sense of new physical situations often appears to involve creative extrapolation from familiar examples: for example, assuming that heavy objects will fall faster than light objects, even in a vacuum or where air resistance can be neglected. Similarly, we have a vast repertoire of experience of human interaction, from which we can generalize to new interactions. Generalization from such experiences, to deal with new cases, can be extremely flexible and abstract (Hofstadter 2001). For example, the perceptual system uses astonishing ingenuity to construct complex percepts (e.g., human faces) from highly impoverished signals (e.g., Hoffman 2000; Rock 1983) or to interpret art (Gombrich 1960).

We suspect that the growth and operation of cognition are more closely analogous to case law than to scientific theory. Each new case is decided by reference to the facts of the case at hand and to ingenious, open-ended links to precedents from past cases; and the history of cases creates an intellectual tradition that is only locally coherent, often ill-defined, but surprisingly effective in dealing with a complex and ever-changing world. In short, knowledge has the form of a loosely interlinked history of reusable fragments, each building on the last, rather than being organized into anything resembling a scientific theory.

Recent work on construction-based approaches to language exemplifies this viewpoint in the context of linguistics (e.g., Goldberg 1995). Rather than seeing language as generated by a theory (a formally specified grammar), and the acquisition of language as the fine-tuning of that theory, such approaches see language as a tradition, in which each new language processing episode, like a new legal case, is dealt with by reference to past instances (Christiansen & Chater 2016). In both law and language (see Blackburn 1984), there will be a tendency to impose local coherence across similar instances, but there will typically be no globally coherent theory from which all cases can be generated.

Instance- or exemplar-based theorizing has been widespread in the cognitive sciences (e.g., Kolodner 1993; Logan 1988; Medin & Schaffer 1978). Exploring how creative extensions of past experience can be used to deal with new experience (presumably by processes of analogy and metaphor, rather than deductive theorizing from basic principles) provides an exciting challenge for artificial intelligence, whether from a non-parametric Bayesian standpoint or a neural network perspective, and is likely to require drawing on the strengths of both.
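To make the exemplar idea concrete, the following is a minimal Python sketch in the spirit of Medin and Schaffer's (1978) context model; the similarity parameter s, the binary feature coding, and the toy data are illustrative assumptions, not a reconstruction of any published implementation. A novel item is categorized not by consulting a rule or theory, but by its graded similarity to every stored case at once:

```python
def similarity(probe, exemplar, s=0.3):
    """Multiplicative similarity, as in the context model: each matching
    feature contributes 1, each mismatching feature contributes s
    (0 < s < 1), so items differing on many features are sharply
    less similar."""
    sim = 1.0
    for p, e in zip(probe, exemplar):
        sim *= 1.0 if p == e else s
    return sim

def classify(probe, exemplars, labels, s=0.3):
    """Category probabilities: summed similarity of the probe to each
    category's stored exemplars, normalised across categories.
    No rule induction takes place; the stored cases do all the work."""
    totals = {}
    for exemplar, label in zip(exemplars, labels):
        totals[label] = totals.get(label, 0.0) + similarity(probe, exemplar, s)
    z = sum(totals.values())
    return {label: total / z for label, total in totals.items()}

# Toy "experience": binary feature vectors, each remembered with its category.
exemplars = [(1, 1, 1, 0), (1, 0, 1, 0), (0, 1, 1, 1),  # category "A"
             (0, 0, 0, 1), (1, 0, 0, 0)]                 # category "B"
labels = ["A", "A", "A", "B", "B"]

# A never-seen item is handled by generalization from past cases.
print(classify((1, 1, 0, 0), exemplars, labels))
# -> roughly {'A': 0.56, 'B': 0.44}: graded precedent, not a rule.
```

Notably, nothing in this scheme requires a globally consistent theory: adding a new, even contradictory, exemplar simply deposits one more precedent for future generalization, much as a new case enters the case-law tradition.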

ACKNOWLEDGMENTS

N.C. was supported by ERC Grant 295917-RATIONALITY, the ESRC Network for Integrated Behavioural Science (Grant ES/K002201/1), the Leverhulme Trust (Grant RP2012-V-022), and Research Councils UK Grant EP/K039830/1.

References

Blackburn, S. (1984) Spreading the word: Groundings in the philosophy of language. Oxford University Press.
Christiansen, M. H. & Chater, N. (2016) Creating language: Integrating evolution, acquisition, and processing. MIT Press.
Goldberg, A. E. (1995) Constructions: A construction grammar approach to argument structure. University of Chicago Press.
Gombrich, E. (1960) Art and illusion. Pantheon Books.
Hoffman, D. D. (2000) Visual intelligence: How we create what we see. W. W. Norton.
Hofstadter, D. R. (2001) Epilogue: Analogy as the core of cognition. In: The analogical mind: Perspectives from cognitive science, ed. Gentner, D., Holyoak, K. J. & Kokinov, B. N., pp. 499–538. MIT Press.
Irvine, A. D. & Deutsch, H. (2016) Russell's paradox. In: The Stanford encyclopedia of philosophy (Winter 2016 edition), ed. Zalta, E. N. Available at: https://plato.stanford.edu/archives/win2016/entries/russell-paradox.
Kolodner, J. (1993) Case-based reasoning. Morgan Kaufmann.
Logan, G. D. (1988) Toward an instance theory of automatization. Psychological Review 95(4):492–527.
Medin, D. L. & Schaffer, M. M. (1978) Context theory of classification learning. Psychological Review 85(3):207–38.
Nisbett, R. E. & Ross, L. (1980) Human inference: Strategies and shortcomings of social judgment. Prentice-Hall.
Oaksford, M. & Chater, N. (1991) Against logicist cognitive science. Mind and Language 6(1):1–38.
Rock, I. (1983) The logic of perception. MIT Press.
Rozenblit, L. & Keil, F. (2002) The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science 26(5):521–62.