
Computational modeling of analogy: Destined ever to only be metaphor?

Published online by Cambridge University Press:  29 July 2008

Ann Speed
Affiliation:
Cognitive and Exploratory Systems Department, Sandia National Laboratories, Albuquerque, NM 87185-1011. aespeed@sandia.gov, http://www.sandia.gov

Abstract

The target article by Leech et al. presents a compelling computational theory of analogy-making. However, there is a key difficulty that persists in theoretical treatments of analogy-making, computational and otherwise: namely, the lack of a detailed account of the neurophysiological mechanisms that give rise to analogy behavior. My commentary explores this issue.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2008

The target article by Leech et al., and the hypothesis about the nature of analogy on which it is based, are quite compelling and stand apart from the majority of the analogy literature. However, the authors repeat a key difficulty present in other computational modeling efforts. This commentary focuses, first, on the persuasive aspects of the article and model; then, on the difficulties associated with the computational model; and finally, it summarizes what I believe must be addressed before any computational hypothesis about how analogy emerges from the brain can claim to be both psychologically and physiologically plausible.

Generally, Leech et al.'s article and model are compelling for two reasons. First is the hypothesis that relational priming is at the core of analogy, along with the parsimonious theoretical implications of that hypothesis. This theoretical construct easily accounts for adult novice-expert differences (Novick 1988); interdomain transfer difficulty (e.g., variations on the convergence problem); the ubiquity and effortlessness of analogical transfer in everyday life (Hofstadter 2001; Holyoak & Thagard 1995); and the variety of developmental phenomena cited in the target article. Its correspondence with the knowledge accretion hypothesis also offers a convincing explanation for why analogies are so difficult to elicit in the laboratory despite the general agreement that analogy underlies much of human cognition.
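To make the relational-priming idea concrete, here is a deliberately minimal sketch. It is not the authors' actual network; the feature vectors and the "cutting" relation are invented for illustration. The point is only the shape of the mechanism: a relation is treated as a transformation in a distributed feature space, and priming with one example pair completes an analogy for a novel object.

```python
# Toy sketch of relational priming (all vectors and the relation invented):
# a relation is a transformation in a distributed feature space.

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Hypothetical distributed representations (feature vectors).
vocab = {
    "bread":     [1.0, 0.0, 0.0],
    "cut_bread": [1.0, 0.0, 1.0],
    "apple":     [0.0, 1.0, 0.0],
    "cut_apple": [0.0, 1.0, 1.0],
}

# "Priming" with the bread -> cut_bread pair induces a transformation...
relation = sub(vocab["cut_bread"], vocab["bread"])

# ...which, applied to a novel object, completes the analogy
# bread : cut_bread :: apple : ?
predicted = add(vocab["apple"], relation)
answer = min(vocab, key=lambda word: dist(vocab[word], predicted))
print(answer)  # -> cut_apple
```

Note that no explicit structure mapping is performed anywhere; the analogy simply falls out of applying a primed transformation to a new representation.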

Second is the fact that the authors' computational model uses distributed representations. The debate between those who argue that cognition requires discrete symbolic representations and those who argue that physiology dictates distributed representations is an important one in the cognitive literature (Dietrich & Markman 2003; Spivey 2007), and there is really only one other model that proposes a computational implementation of analogy using distributed representations (Eliasmith & Thagard 2001; see Spivey 2007).
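The stakes of that debate can be seen in a tiny contrast (the feature vectors here are invented for illustration): discrete symbols either match or they do not, whereas distributed representations support the graded similarity on which priming accounts depend.

```python
# Toy contrast (feature vectors invented): discrete symbols match
# all-or-none, while distributed representations yield graded similarity.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(a * a for a in v) ** 0.5
    return dot / (norm_u * norm_v)

# Discrete view: "dog" and "wolf" are simply different symbols.
symbolic_match = ("dog" == "wolf")  # False; no notion of "almost the same"

# Distributed view: overlapping feature patterns resemble each other by degree.
dog  = [1.0, 0.9, 0.1]
wolf = [0.9, 1.0, 0.2]
cat  = [1.0, 0.1, 0.9]
print(cosine(dog, wolf) > cosine(dog, cat))  # dog is nearer wolf than cat
```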

I agree with the authors' assertion that analogy is an emergent phenomenon. However, I would assert that it is not, at the most fundamental level, an emergent property of more basic psychological processes (e.g., relational priming), but is rather an emergent property of basic physiological processes. The authors repeatedly point out that prior models of analogy do not provide mechanisms for analogy from a developmental perspective. I also agree with this assertion, but I believe that the developmental perspective must take into account the thing that is really developing: the brain.

Specifically, the authors' implementation of their hypothesis, especially when applied to the Saddam–Hitler analogy, replicates a key deficiency in other models of analogy. This omission is the lack of an in-depth account of how human neurophysiology produces analogy (despite LISA's [i.e., the Learning and Inference with Schemas and Analogies model's] use of node oscillations to create bindings; Hummel & Holyoak 2003). Such an account must provide several details:

1. What is the nature of information representation? How are those representations physically manifested in the brain? As the authors observe, “how object attributes and relations are represented is important for modeling analogy because it constrains what else can take place” (sect. 2.2, para. 1). Although I agree with the spirit of this statement, I would argue that the knowledge representation must be firmly grounded in neurophysiological processes – that to ignore neurophysiological mechanisms as we currently understand them is to ignore important constraints on the resulting cognitive behavior.

2. How do those representations come to exist? How does the physical brain transform energetic information (e.g., photons, sound waves) into the hypothesized representation format? Using distributed representations, as in the target article, partially addresses the first issue, but it is still unclear how these distributed representations might come to exist. Nor is it clear what these representations are analogous to in the brain. No existing computational model of analogy or higher-order cognition, as far as I am aware, addresses this issue in detail. Divorcing higher-order cognition from low-level sensory processes ignores the fact that the brain is connected to physical sensors and that higher-order cognitive processes necessarily use the output of those physical sensors.

3. Once these grounded representations exist, how do they give rise to, or contribute to the "calculation" of, analogy (and other higher-order cognitive processes)? An extension of the first two points, this issue has been partially addressed from a theoretical perspective (e.g., Mesulam 1998; cf. the literature on prefrontal cortex function) but has yet to be fully addressed computationally. However, there exists a significant body of literature that is coming closer to making this link (e.g., Barbas 2000; Elston 2003; Mogami & Tanaka 2006; Rougier et al. 2005; Tanaka 1992). One implication for analogy is that existing computational models are totally unable to identify structural characteristics of problems on their own – a key feature in analogy-making, and the thing that differentiates experts from novices and positive from negative transfer (Gentner & Markman 1997; Novick 1988; Ross 1987; Ross & Kennedy 1990). Even though the first model in the target article takes steps in this direction, it remains unclear whether the mechanisms by which it accomplishes analogy are those used by humans.
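As a gesture toward the second question above (how representations might be induced from raw input rather than stipulated), a standard toy example can help: a single linear neuron trained with Oja's Hebbian learning rule, exposed only to simulated two-dimensional "sensory" samples, develops a weight vector aligned with the dominant direction of variation in its input. The data and parameters here are invented for illustration; this is not a claim about cortical mechanism.

```python
import random

random.seed(0)

# Simulated "sensory" input: two noisy measurements of one underlying cause,
# so the data vary mostly along the direction (1, 1).
samples = []
for _ in range(2000):
    s = random.gauss(0.0, 1.0)                    # shared underlying cause
    samples.append((s + random.gauss(0.0, 0.1),   # sensor 1 (noisy)
                    s + random.gauss(0.0, 0.1)))  # sensor 2 (noisy)

# One linear neuron trained with Oja's rule: w += eta * y * (x - y * w).
# The rule is Hebbian (y * x) with a decay term that keeps |w| bounded.
w = [0.5, -0.3]
eta = 0.01
for x in samples:
    y = w[0] * x[0] + w[1] * x[1]
    w = [w[i] + eta * y * (x[i] - y * w[i]) for i in range(2)]

# The weights converge toward a unit vector along (1, 1): the neuron has
# extracted the dominant structure of its input without supervision.
print(w)
```

The representation here is earned from the input statistics rather than hand-coded, which is the property the question demands of a grounded model, albeit at a vastly simpler scale than analogy requires.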

As computational models of cognitive processes are hypotheses about the underlying mechanisms giving rise to cognitive behaviors, it seems necessary to account for known neurophysiological processes. Of course, this approach will radically increase the complexity of the resulting models, but dynamical systems theorists would argue that the typical reductionist approach, which yields crisp, transparent, simple-to-analyze models, likely does not produce theories that accurately reflect human cognitive mechanisms.

And this point is the core of the issue that my colleagues and I believe modelers need to begin addressing – how does the brain, as a physical system, take raw sensory information and perform higher-order cognition using that information (e.g., solving Raven's Progressive Matrices)? And, how much of this physiological detail do we as a modeling community need to include in our models in order to accurately emulate what the brain/mind is really doing?

As Stephen Grossberg has said, theories of consciousness posited in the absence of links to neurophysiology can only ever be metaphors (Grossberg 1999). How long before our computational models of analogy become more than just metaphor?

ACKNOWLEDGMENT

Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94-AL85000.

References

Barbas, H. (2000) Connections underlying the synthesis of cognition, memory, and emotion in primate prefrontal cortices. Brain Research Bulletin 52:319–30.
Dietrich, E. & Markman, A. B. (2003) Discrete thoughts: Why cognition must use discrete representations. Mind and Language 18:95–119.
Eliasmith, C. & Thagard, P. (2001) Integrating structure and meaning: A distributed model of analogical mapping. Cognitive Science 25:245–86.
Elston, G. N. (2003) Cortex, cognition and the cell: New insights into the pyramidal neuron and prefrontal function. Cerebral Cortex 13:1124–38.
Gentner, D. & Markman, A. B. (1997) Structure mapping in analogy and similarity. American Psychologist 52:45–56.
Grossberg, S. (1999) The link between brain learning, attention, and consciousness. Consciousness and Cognition 8:1–44.
Hofstadter, D. R. (2001) Epilogue: Analogy as the core of cognition. In: The analogical mind: Perspectives from cognitive science, ed. Gentner, D., Holyoak, K. J. & Kokinov, B. N., pp. 499–538. MIT Press.
Holyoak, K. J. & Thagard, P. (1995) Mental leaps: Analogy in creative thought. MIT Press.
Hummel, J. E. & Holyoak, K. J. (2003) A symbolic-connectionist theory of relational inference and generalization. Psychological Review 110:220–63.
Mesulam, M. M. (1998) From sensation to cognition. Brain 121:1013–52.
Mogami, T. & Tanaka, K. (2006) Reward association affects neuronal responses to visual stimuli in macaque TE and perirhinal cortices. Journal of Neuroscience 26(25):6761–70.
Novick, L. R. (1988) Analogical transfer, problem similarity, and expertise. Journal of Experimental Psychology: Learning, Memory, and Cognition 14:510–20.
Ross, B. H. (1987) This is like that: The use of earlier problems and the separation of similarity effects. Journal of Experimental Psychology: Learning, Memory, and Cognition 13:629–39.
Ross, B. H. & Kennedy, P. T. (1990) Generalizing from the use of earlier examples in problem solving. Journal of Experimental Psychology: Learning, Memory, and Cognition 16:42–55.
Rougier, N. P., Noelle, D., Braver, T. S., Cohen, J. D. & O'Reilly, R. C. (2005) Prefrontal cortex and the flexibility of cognitive control: Rules without symbols. Proceedings of the National Academy of Sciences 102:7338–43.
Spivey, M. (2007) The continuity of mind. Oxford University Press.
Tanaka, K. (1992) Inferotemporal cortex and higher visual functions. Current Opinion in Neurobiology 2:502–505.