
Dynamic sets of potentially interchangeable connotations: A theory of mental objects

Published online by Cambridge University Press: 29 July 2008

Alexandre Linhares
Affiliation:
Getulio Vargas Foundation, Rio de Janeiro 22250-900, Brazil. linhares@clubofrome.org.br
http://www.intuition-sciences.com/linhares

Abstract

Analogy-making is the ability to abstract away from surface similarities and perceive deep, meaningful similarities between different mental objects and situations. I propose that mental objects are dynamically changing sets of potentially interchangeable connotations. Unfortunately, most models of analogy seem devoid of both semantics and relevance extraction, postulating analogy as a one-to-one mapping with no connotation transfer.

Type: Open Peer Commentary
Copyright © Cambridge University Press 2008

Leech et al. provide an ambitious framework with which to view analogy as relational priming. I find the idea full of potential, and I fully agree with a number of points. Having worked over the last decade on the question of analogy and abstractions in chess cognition, I share with the authors the following conclusions: (1) that analogies lie at the core of cognition (Hofstadter 2001); (2) that analogies let us expand our knowledge; and (3) that, as the target article proposes, relations can be viewed as transformations. These are certainly true in a chess player's mind (Linhares 2005; submitted; Linhares & Brum 2007).

There are, however, implicit, unstated assumptions that any computational model of cognition must face squarely; failing to do so generally leads to simplistic and semantically vacuous models.

First, there is the representational module assumption. The model presented by Leech et al. supposedly makes analogies between apples, lemons, Hitler, Hussein, wars, and the like. The representational module assumption presupposes that a separate representation module provides these mental objects (e.g., entities in short-term memory [STM]) as input to Leech et al.'s computational model. The idea is unfeasible, for reasons there is no space to repeat here (Chalmers et al. 1992; French 2000; Hofstadter & the Fluid Analogies Research Group 1995). Hence, I will consider a related, second assumption: metaphysical realism, the doctrine that objects (bounded entities), with their properties and relations, exist independently of observers and hence can have one static description (Linhares 2000; Putnam 1981; Smith 1996).

As anyone who has tried to build a robot that navigates real-world settings knows, robots cannot see people, animals, chairs, or Coke cans the way humans do effortlessly. Robots deal with wave and signal processing. It takes effort to carve waves into mental objects: the process involves setting boundaries on waves (e.g., speech recognition, visual segmentation), finding properties of the newly bounded entities, finding their relations to other bounded entities, and classifying them into concepts.
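As an illustration only, the carving process just described can be sketched as a pipeline. Nothing below comes from the target article or from any published model; every name is a hypothetical placeholder:

```python
from dataclasses import dataclass, field

@dataclass
class BoundedEntity:
    """A candidate object carved out of a raw signal."""
    boundaries: tuple                          # where the wave was cut
    properties: set = field(default_factory=set)
    relations: set = field(default_factory=set)
    concept: str = ""                          # assigned in the final step

def carve(signal, segment, describe, relate, classify):
    """Carve a raw wave into candidate mental objects.

    segment:  imposes boundaries on the wave (speech recognition,
              visual segmentation, ...)
    describe: finds properties of each newly bounded entity
    relate:   finds relations among the bounded entities
    classify: sorts each entity into a concept
    """
    entities = [BoundedEntity(boundaries=b) for b in segment(signal)]
    for e in entities:
        e.properties = describe(signal, e)
    for e in entities:
        e.relations = relate(e, entities)
    for e in entities:
        e.concept = classify(e)
    return entities
```

The point of the sketch is how much machinery sits upstream of any "object": each of the four callables hides exactly the hard work that a model receiving ready-made tokens never has to do.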

Objects, properties, and relations are context- and observer-dependent. Consider DNA: DNA is like a staircase. DNA is like a will. DNA is like a fingerprint. DNA is like a zipper. DNA is like a computer program. DNA is like a tape. DNA is like a train track that is pulled apart by the train. DNA is like a tiny instruction manual for your cells.

How can one molecule acquire the properties and relations of objects as semantically distant as train tracks, computer code, fingerprints, and staircases? What are mental objects made of? I suggest that mental objects are dynamic sets of connotations, and that connotations are potentially interchangeable – two characteristics ignored by most cognitive models, including Leech et al.'s.

Mental objects, I suggest, are dynamically built. Each concept, and each instance of a concept, has a set of connotations attached to it. But this set is not fixed: it changes dynamically under contextual pressure. And what are these connotations like? They are (i) rules for carving waves (sounds into separate spoken words, an image into a set of objects, etc.), (ii) relations between mental objects (a chess queen that pins a king), or (iii) properties of particular objects (redness). Most importantly, connotations are potentially interchangeable between objects. This is why DNA, as a mental object, can acquire so many characteristics found far beyond the realm of chemistry.
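A minimal data-structure sketch of this proposal may help (Python; the class and field names are my own illustrative assumptions, not part of the target article or of any published model):

```python
from dataclasses import dataclass, field

# The three kinds of connotations named above.
@dataclass(frozen=True)
class CarvingRule:        # (i) a rule for carving waves into entities
    name: str

@dataclass(frozen=True)
class Relation:           # (ii) a relation to another mental object
    name: str
    other: str

@dataclass(frozen=True)
class Property:           # (iii) a property of a particular object
    name: str

@dataclass
class MentalObject:
    label: str
    connotations: set = field(default_factory=set)   # dynamic, never fixed

    def reshape(self, contextual_pressure):
        """Contextual pressures rewrite the connotation set in place."""
        self.connotations = contextual_pressure(self.connotations)

# The chess example above: a queen that pins a king.
queen = MentalObject("queen", {Relation("pins", "king"), Property("mobile")})
```

The essential design choice is that `connotations` is a mutable set attached to an instance, not a fixed definition attached to a type: the same label can carry different connotations in different contexts.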

Analogies are mechanisms by which a mental object acquires connotations from different mental objects. This theory stems from Hofstadter and the Fluid Analogies Research Group (1995), is close to the model of Fauconnier and Turner (2002), and is different from a one-to-one mapping.
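Continuing the sketch above (again purely illustrative, and building on the hypothetical `MentalObject` class), connotation transfer could look like this:

```python
def analogize(target, source, relevant):
    """Analogy as connotation transfer rather than one-to-one token
    mapping: the target acquires whichever of the source's connotations
    the current context deems relevant."""
    target.connotations |= {c for c in source.connotations if relevant(c)}

# "DNA is like a zipper": the DNA mental object acquires zipper connotations.
dna = MentalObject("DNA", {Property("double helix")})
zipper = MentalObject("zipper", {Property("interlocking teeth"),
                                 Relation("can be", "unzipped")})
analogize(dna, zipper, relevant=lambda c: True)   # context decides relevance
```

Note what the `relevant` parameter stands in for: the entire unsolved problem of relevance extraction, which the commentary argues cannot be assumed away.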

Leech et al.'s model has promising ideas, but it does not account for this. It assumes that the perfect set of mental objects has been constructed a priori, independently of the analogy in question. Hitler had a mustache. Hussein had a mustache. Why doesn't the model's analogy consider this mapping? Answer: because it is irrelevant. But who is to say so? Why is it irrelevant, even though it would map beautifully? War in Iraq is hard because it is hot; roads have literally melted as American tanks drove by (Pagonis & Cruikshank 1994). War in Germany is hard because it is cold, and night fires attract the enemy (Ambrose 1998). Who is to decide that this is not relevant?

By providing perfectly built objects a priori, this work does not reflect the dynamic nature of mental objects. It succumbs to the Eliza effect: nothing is known about Hitler besides a token. Although the name brings the full imagery of Adolf Hitler to a reader's mind, with the enormous set of powerful connotations it carries (Nazism, the Aryan race, the swastika, the propaganda, WWII, Auschwitz, etc.), the model is satisfied with a single token without any connotations attached. I invite readers to swap all tokens (apple, Hitler, etc.) with randomly chosen Greek letters and reread the target article. However interesting the psychological constraints posed by the authors, and however rigorous their attempt to remain close to biological plausibility, the model never makes, in any significant sense of the word, the analogies humans make effortlessly. A one-to-one mapping has no connotation transfer and does not reflect the dynamic nature of mental objects.
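The Greek-letter test is easy to make concrete (a toy sketch; both dictionaries are invented for illustration): a pure one-to-one token mapping is unchanged by renaming every token, which is exactly why it carries no semantics.

```python
# A one-to-one mapping over bare tokens survives arbitrary renaming:
analogy = {"Hitler": "Hussein", "Germany": "Iraq"}

greek = {"Hitler": "alpha", "Hussein": "beta",
         "Germany": "gamma", "Iraq": "delta"}

renamed = {greek[k]: greek[v] for k, v in analogy.items()}
print(renamed)   # {'alpha': 'beta', 'gamma': 'delta'}

# The mapping's structure is fully preserved, yet nothing of Nazism,
# WWII, or deserts remains: the tokens never carried connotations.
```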

References

Ambrose, S. E. (1998) Citizen soldiers. Simon & Schuster.
Chalmers, D. J., French, R. M. & Hofstadter, D. R. (1992) High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal of Experimental and Theoretical Artificial Intelligence 4:185–211.
Fauconnier, G. & Turner, M. (2002) The way we think: Conceptual blending and the mind's hidden complexities. Basic Books.
French, R. M. (2000) When coffee cups are like old elephants, or why representation modules don't make sense. In: Understanding representation in the cognitive sciences: Does representation need reality? ed. Riegler, A., Peschl, M. & von Stein, A., pp. 93–100. Springer.
Hofstadter, D. R. (2001) Epilogue: Analogy as the core of cognition. In: The analogical mind: Perspectives from cognitive science, ed. Gentner, D., Holyoak, K. J. & Kokinov, B. N., pp. 499–538. MIT Press.
Hofstadter, D. R. & the Fluid Analogies Research Group (1995) Fluid concepts and creative analogies: Computer models of the fundamental mechanisms of thought. Basic Books.
Linhares, A. (2000) A glimpse at the metaphysics of Bongard problems. Artificial Intelligence 121(1–2):251–70.
Linhares, A. (2005) An active symbols theory of chess intuition. Minds and Machines 15(2):131–81.
Linhares, A. (submitted) Decision-making and strategic thinking through analogies.
Linhares, A. & Brum, P. (2007) Understanding our understanding of strategic scenarios: What role do chunks play? Cognitive Science 31(6):989–1007.
Pagonis, W. G. & Cruikshank, J. L. (1994) Moving mountains. Harvard Business School Press.
Putnam, H. (1981) Reason, truth, and history. Cambridge University Press.
Smith, B. C. (1996) On the origin of objects. MIT Press.