Leech et al. provide an ambitious framework with which to view analogy as relational priming. I find the idea full of potential, and I fully agree with a number of points. Having worked over the last decade on the question of analogy and abstractions in chess cognition, I share with the authors the following conclusions: (1) that analogies lie at the core of cognition (Hofstadter 2001); (2) that analogies let us expand our knowledge; and (3) that, as the target article proposes, relations can be viewed as transformations. These are certainly true in a chess player's mind (Linhares 2005; submitted; Linhares & Brum 2007).
There are, however, implicit, unstated assumptions that any computational model of cognition must face seriously; failure to do so generally leads to simplistic and semantically vacuous models.
First, there is the representational module assumption. The model presented by Leech et al. supposedly makes analogies between apples, lemons, Hitler, Hussein, wars, and the like. The representational module assumption presupposes that a separate representation module provides these mental objects (e.g., entities in short-term memory [STM]) as input to Leech et al.'s computational model. The idea is unfeasible, and there is no space here to repeat previous arguments against it (Chalmers et al. 1992; French 2000; Hofstadter & the Fluid Analogies Research Group 1995). Hence, I will consider a related, second assumption: metaphysical realism, the doctrine that objects (bounded entities), their properties, and their relations are independent of observers and hence can have one static description (Linhares 2000; Putnam 1981; Smith 1996).
As anyone who has tried to build a robot that navigates real-world settings knows, such robots cannot see people, animals, chairs, or Coke cans as humans effortlessly do. Robots deal with wave and signal processing. It takes effort to carve waves into mental objects: the process involves setting boundaries on waves (speech recognition, visual segmentation, and so on), finding the properties of these newly bounded entities, finding their relations to other bounded entities, and classifying them into concepts.
Objects, properties, and relations are context- and observer-dependent. Consider DNA: DNA is like a staircase. DNA is like a will. DNA is like a fingerprint. DNA is like a zipper. DNA is like a computer program. DNA is like a tape. DNA is like a train track that is pulled apart by the train. DNA is like a tiny instruction manual for your cells.
How can a molecule acquire the properties and relations of objects as semantically distant as train tracks, computer code, fingerprints, and staircases? What are mental objects made of? I suggest that mental objects are dynamic sets of connotations, and that connotations are potentially interchangeable – two characteristics ignored by most cognitive models (including Leech et al.'s).
Mental objects, I suggest, are dynamically built. Each concept, and each instance of a concept, has a set of connotations attached to it. But this set is not fixed: it changes dynamically under contextual pressures. And what are such connotations like? They are either (i) rules for carving waves (sounds into separate spoken words, an image into a set of objects, etc.); (ii) relations between mental objects (a chess queen that pins a king); or (iii) properties of particular objects (redness). Most importantly, these connotations are potentially interchangeable between objects. This is why DNA, as a mental object, can acquire so many characteristics found far beyond the realm of chemistry.
Analogies are mechanisms by which a mental object acquires connotations from different mental objects. This theory stems from Hofstadter and the Fluid Analogies Research Group (1995), is close to the model of Fauconnier and Turner (2002), and is different from a one-to-one mapping.
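To make the contrast concrete, the following is a minimal sketch, not a cognitive model: it merely illustrates, under assumptions of my own choosing, what "mental objects as dynamic sets of connotations" and "analogy as connotation transfer" could look like as a data structure. All names in it (MentalObject, Connotation, transfer_connotations, the DNA/zipper example) are hypothetical and are not part of Leech et al.'s model.

```python
# Illustrative sketch only: mental objects as dynamic sets of connotations,
# and analogy as transfer of connotations between objects (not a 1-to-1 mapping).
# All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Connotation:
    """A single connotation: a carving rule, a relation, or a property."""
    kind: str        # "carving-rule" | "relation" | "property"
    content: str     # e.g., "can be pulled apart and rejoined"


@dataclass
class MentalObject:
    """A mental object is not a fixed token: its connotation set changes with context."""
    name: str
    connotations: set = field(default_factory=set)


def transfer_connotations(source, target, relevant_in_context):
    """Analogy as connotation transfer: the target acquires those connotations of
    the source that the current context deems relevant. Relevance is decided in
    context, not fixed a priori in the representation."""
    for c in source.connotations:
        if relevant_in_context(c):
            target.connotations.add(c)


# Usage: "DNA is like a zipper" lets DNA acquire some of the zipper's connotations.
zipper = MentalObject("zipper", {
    Connotation("relation", "interlocks two complementary rows"),
    Connotation("relation", "can be pulled apart and rejoined"),
    Connotation("property", "made of metal or plastic"),
})
dna = MentalObject("DNA", {Connotation("property", "a double-helix molecule")})

# In this context only relational connotations are relevant; material properties
# such as "made of metal or plastic" do not transfer.
transfer_connotations(zipper, dna, relevant_in_context=lambda c: c.kind == "relation")

print({c.content for c in dna.connotations})
```

The point of the sketch is only that the object's description is not fixed in advance: which connotations transfer is decided by context, which is precisely what a one-to-one mapping over pre-built tokens cannot express.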
Leech et al.'s model has promising ideas, but does not account for this. It assumes that the perfect set of mental objects has been constructed a priori, independently of the analogy in question. Hitler had a mustache. Hussein had a mustache. Why does their model not consider this mapping? Answer: because it is irrelevant. But who is to say so? Why is it irrelevant, even though it would map beautifully? War in Iraq is hard because it is hot; roads literally melted as American tanks drove by (Pagonis & Cruikshank 1994). War in Germany is hard because it is cold, and night fires attract the enemy (Ambrose 1998). Who is to decide that this is not relevant?
By providing perfectly built objects a priori, this work does not reflect the dynamic nature of mental objects. It succumbs to the Eliza effect: nothing is known about Hitler besides a token. Although the full imagery of Adolf Hitler comes to readers' minds, with the enormous set of powerful connotations that name brings up (Nazism, the Aryan race, the swastika, the propaganda, WWII, Auschwitz, etc.), the model is satisfied with a single token without any connotations attached. I invite readers to swap all tokens (apple, Hitler, etc.) with randomly chosen Greek letters and reread the target article. However interesting the psychological constraints posed by the authors, and however rigorous their attempt to remain biologically plausible, the model never makes, in any significant sense of the word, the analogies that humans make effortlessly. The one-to-one mapping model has no connotation transfer and cannot capture the dynamic nature of mental objects.