
Darwin's last word: How words changed cognition

Published online by Cambridge University Press:  14 May 2008

Derek Bickerton
Affiliation:
Department of Linguistics, University of Hawaii, Honolulu, HI 96822. derbick@hawaii.rr.com, www.derekbickerton.com

Abstract

Although Penn et al. make a good case for the existence of deep cognitive discontinuity between humans and animals, they fail to explain how such a discontinuity could have evolved. It is proposed that until the advent of words, no species had mental representations over which higher-order relations could be computed.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2008

Kudos to Penn et al. for admitting what, if it were not politically incorrect (somewhere between Holocaust denial and rejection of global warming), would be obvious to all: the massive cognitive discontinuity between humans and all other animals. Since "kudos" has apparently become a count noun, how many kudos? I would say two-and-a-half out of a possible four; that is, averaging four for their analysis of the problem and one for their solution.

Penn et al. make clear that there are two quite separate human–nonhuman discontinuities: communicative and cognitive. What are the odds, in a single, otherwise unremarkable lineage of terrestrial apes, against two such dramatic discontinuities evolving independently? Yet Penn et al. dismiss three variants of the notion that language was what enhanced human cognition.

This is not their mistake, however. They are right to reject all three variants for the reasons stated. Their mistake lies in assuming that these proposals have exhausted the ways in which language might have influenced cognition, and in not looking more closely at what language did to the brain – the “rewiring” they admit it caused. Instead, they propose a solution – “relational reinterpretation,” supported by the computational model LISA (Learning and Inference with Schemas and Analogies) – which explains distinctively human cognition in the same way Molière's “dormitive property” explains the narcotic effect of opium.

What does “relational reinterpretation” do, beyond renaming the phenomena it seeks to account for? The term may form a convenient summation of what the mind has somehow to do to achieve the results Penn et al. describe, and LISA may represent one possible way of achieving them. But the real issue is, how and when and why did “relational reinterpretation” evolve? To what selective pressures did it respond? And why didn't those pressures affect other, closely related species?

Penn et al. have no answers, because they share with most linguists and cognitive scientists a reluctance to grapple with what is known about human evolution. The many gaps and ambiguities in that record license extreme caution in handling it, but not, surely, ignoring it altogether. What the record spells out unambiguously are the radical differences in lifestyles, foraging patterns, nutrition, and relations with other species that separated human ancestors from ancestors of modern apes. Whether seeking origins for language or human cognition, it is surely among these differences – and their behavioral consequences – that we must start. Otherwise, we cannot explain why we are not just one out of several “intelligent” species on this planet.

Parsimony and evolutionary principles both suggest that one major discontinuity begat the other; here is how this could have happened.

The capacity to perceive and exploit higher-order relations between mental representations depends crucially on having the right kind of mental representations to begin with, a kind that can be manipulated, concatenated, hierarchically structured, linked at different levels of abstraction, and used to build structured chains of thought. Are nonhuman representations of this kind? If they are not, Penn et al.'s problem disappears: Other animals lack the cognitive powers of humans simply because they have no units over which higher-order mechanisms could operate. The question then becomes how we acquired the right kind of representations.

Suppose all nonhuman representations are distributed. This means, to take a concrete example, that although an animal might have representations corresponding to “what a leopard looks like” (numerous variants), “what a leopard sounds like” (ditto), “how a leopard moves,” “what a leopard smells like,” and so on, there is simply no place in the brain where these all come together to yield a single, comprehensive “leopard.” Instead, each representation would be stored in its appropriate brain area (auditory, visual, etc.) and be directly linked to parts of the motor system so that the firing of any (sufficient subset) of these representations would activate the appropriate leopard-reaction program. If Penn et al. have any evidence – experimental or ethological – inconsistent with this proposal, I hope they include it in their Response to Commentary.
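To make the proposal concrete, here is a minimal sketch, not part of the commentary, of what such purely distributed, stimulus-bound representations might amount to: modality-specific feature stores wired directly to a reaction program, firing whenever a sufficient subset of features is matched, with no unified "leopard" anywhere that higher-order relations could be computed over. All names, thresholds, and features are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class DistributedRepresentation:
    """Features stored separately by modality, wired straight to a motor response."""
    features: dict          # modality -> set of stored feature variants
    reaction: str           # motor program triggered on a match
    threshold: int = 2      # a "sufficient subset" of matching modalities

    def respond(self, stimulus: dict):
        # Count modalities whose incoming percept matches a stored variant.
        hits = sum(
            1 for modality, percept in stimulus.items()
            if percept in self.features.get(modality, set())
        )
        # Enough matching features fire the reaction; nothing is "thought about".
        return self.reaction if hits >= self.threshold else None


leopard = DistributedRepresentation(
    features={
        "visual": {"spotted coat", "feline gait"},
        "auditory": {"sawing growl"},
        "olfactory": {"cat scent"},
    },
    reaction="climb nearest tree",
)

# A partial stimulus (sight plus sound, no smell) still triggers the response...
print(leopard.respond({"visual": "spotted coat", "auditory": "sawing growl"}))
# ...but absent any stimulus there is nothing to operate on: no unified "leopard"
# exists over which higher-order relations could be defined.
print(leopard.respond({}))
```

The point of the sketch is only that each stored feature feeds the reaction directly; there is no intermediate, manipulable unit of the kind the previous paragraphs argue human cognition requires.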

What would an animal need, beyond this? Distributed representations of this kind would still enable categorization of presented stimuli, even ones as exotic as fish are to pigeons (Herrnstein 1985): pigeons, having stored visual features of fish, would simply peck whenever a sufficient subset occurred, without requiring any generalized concept of fish. The only limitation would be that the animal could not think about leopards, or fish, in their absence. (It is perhaps not coincidental that virtually all animal communication relates to the here-and-now.)

References

Herrnstein, R. J. (1985) Riddles of natural categorization. Philosophical Transactions of the Royal Society of London B 308:129–44.