
Language, Perception and Action. How Words are Grounded in the Brain

Published online by Cambridge University Press:  01 October 2008

Marc Jeannerod*
Affiliation:
Institut des Sciences Cognitives, UMR 5230 CNRS/Université Claude Bernard, 67 Boulevard Pinel, 69675, Bron Cedex, France

Abstract

Language processing is grounded in brain function. Words of different semantic categories are processed in different cortical areas. Several examples of this distributed processing are given: colour words are processed in visual areas, whereas action words are processed in motor areas. The processing of action words is described in more detail. A pathological condition, Parkinson's disease, is used as an illustration of a motor impairment that selectively affects the comprehension of action words. This comprehension impairment is attributed to a difficulty in accessing the procedural knowledge carried by this specific class of words.

Focus: The Origin of Language

Copyright © Academia Europaea 2008

This paper reactivates a classical debate about language processing by the brain. Language is thought to be understood through a series of steps that ultimately give access to the meaning of heard or read words and sentences. Accordingly, a word would first be recognized by its morphology (its phonology and its orthography): this is what makes a listener able to repeat a word heard from a speaker, even without understanding its meaning. The same word would then be recognized as an item in our lexicon, a specialized memory store: it is when the word matches a memory trace in the lexicon that it can be recognized and distinguished from unknown phonological items (e.g. a non-word or a word from a foreign language). A traditional view holds that language processing is limited to a set of language-specific neural structures. Accordingly, these successive operations of language decoding would be carried out by the classical anterior and posterior speech areas of the left hemisphere: the Broca and Wernicke areas, respectively, in the terminology used by neurologists. Indeed, this view is supported by clinical observations in patients with language disorders: for example, aphasia of the 'sensory' type, in which a patient fails to understand spoken words and sentences, is generally a consequence of a lesion in the posterior part of the left temporal lobe (the Wernicke area), whereas aphasia of the 'motor' type, characterized by a failure to generate spoken words, is a consequence of a lesion of the inferior part of the left frontal lobe (the Broca area).

Alternatively, a different view holds that language processing must be widely distributed across the cerebral cortex. The effects of lesions that massively affect the language-specific areas can contribute only a little to our understanding of language processing. Instead, the idea of a network that includes areas outside the language-specific regions comes closer to the main function of language, which is to manipulate and to express knowledge. The early proponents of this 'associative' view at the end of the nineteenth century (the so-called 'diagram-makers') were also neurologists studying language disorders.[1] They had remarked that words can be deciphered through the auditory as well as the visual modality, and can be expressed in speaking as well as in writing. Thus, they thought that there should be association pathways between the many brain localizations involved in this multimodal processing (see Figure 1). The associative view, however, remained confined within a purely linguistic system, even though this system now included a broader network. In this paper, it will be proposed that language processing extends far beyond the system classically devoted to the function of language. Specifically, it will be proposed that the meaning of words is grounded in neural processes that are also responsible for perception and action.

Figure 1 A classical diagram after Jean-Martin Charcot. In one of his Tuesday lectures at the Hôpital de la Salpêtrière (Paris), Charcot presented this diagram to explain how the sound or the sight of a bell, as well as the auditory or visual perception of the corresponding word (cloche), was processed in different sensory systems. Once recognized, the bell or the word cloche was transferred into motor outputs for speaking or writing.

The structure of semantic memory

Arguments for the latter view arise from several sources. One of them is that the memory for words (the lexicon) is not an undifferentiated store where words accumulate; instead, it has an internal organization. Words appear to be classified in the lexicon according to the semantic category to which they refer. To illustrate this point, imagine that the words depicting concrete objects are distributed into different categories according to object type: words referring to a putative semantic category of 'manufactured objects' and to a sub-category of 'furniture' would be stored next to each other, whereas words referring to a putative semantic category of 'biological objects' and to a sub-category of 'fruits' would be stored in another part of the memory. Although this description is purely metaphorical (we do not know how and where in the brain this organization is implemented), it is supported by experimental data. Psycholinguists using the now classical 'priming' paradigm found that the brief and unnoticed presentation of a word of a given semantic category facilitates the subsequent recognition of a target word from the same category.[2] In these experiments, real words and 'non-words' (e.g. consonant strings) are presented to the subjects; recognition is measured as the time taken to make a lexical decision about a word ('is this a real word or not?'). For example, the brief presentation of the word table will facilitate the lexical decision (decision time will be shorter) for a word referring to another piece of furniture (e.g. chair), compared with the decision for a word referring to an object from a different category (e.g. apple). The striking fact that words are stored in memory according to their semantic categories suggests a distributed organization of the access to meaning, beyond language-specific mechanisms. Once again, an indication in this direction is given by a priming experiment: facilitation of the lexical decision for a word can be obtained not only by the brief auditory presentation of a word from the same semantic category, but also by the visual presentation of a picture representing an object from that category. The picture of a table will facilitate the lexical decision for words from the semantic field of furniture.
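As a purely illustrative sketch of this logic (and not of any particular study), the priming effect can be thought of as the difference between mean lexical-decision times for related and unrelated prime-target pairs. The Python fragment below uses invented response times, a hypothetical 30 ms facilitation, and arbitrary word pairs; only the structure of the comparison follows the paradigm described above.

```python
import random

random.seed(1)

def simulated_lexical_decision_rt(related_prime: bool) -> float:
    """Toy lexical-decision time in milliseconds.

    The ~30 ms facilitation for related primes is a hypothetical
    effect size, not a value taken from the cited experiments."""
    baseline = random.gauss(600.0, 40.0)   # invented baseline RT
    return baseline - 30.0 if related_prime else baseline

# Hypothetical prime/target pairs: same category (table-chair) vs. different (table-apple)
trials = [("table", "chair", True), ("table", "apple", False)] * 50

related = [simulated_lexical_decision_rt(rel) for _, _, rel in trials if rel]
unrelated = [simulated_lexical_decision_rt(rel) for _, _, rel in trials if not rel]

priming_effect = sum(unrelated) / len(unrelated) - sum(related) / len(related)
print(f"Semantic priming effect: {priming_effect:.0f} ms (positive = facilitation)")
```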

The distinction between different semantic categories is also a neurological reality. Clinical observation has shown that brain-damaged patients may selectively fail to understand words, or to find the names of objects, from a given category. Warrington and Shallice reported in 1984 the cases of four patients with medial temporal lesions who showed this striking dissociation: the patients were able to correctly name pictures representing inanimate objects, such as houses, whereas they failed to name pictures representing living things, such as animals or food. The same patients were found to have very similar difficulties in the auditory modality: they could not comprehend spoken words referring to the names of living things. Warrington and Shallice suggested that such category-specific naming and comprehension impairments might correspond to the alteration of independent modality-specific semantic systems within specialized areas for semantic processing.[3] However, the most definitive arguments for the distributed representation of semantic processing are provided by experiments using neuroimaging techniques in healthy subjects. These experiments have mainly explored the semantic processing of words referring to the domains of perception and action. This will be illustrated in the next section.

Sensorimotor systems and language processing

Thus, the processing of words is intimately linked to the processing of the reality they represent. Indeed, we use words to express what we experience or feel, because the words we use, in addition to their explicit definition, convey a representation of these experiences or feelings. In other words, there is yet another layer of processing, beyond the morphological and lexical recognition of words, which gives access to their 'covert' meaning. Take, for example, hearing or reading the word red. Not only does this word define a particular colour ('some roses are red'), it also refers to the sensation we have when we see something red. The former sense is shared by other people and can be used for communicating with them; the latter corresponds to our private experience of seeing red. This example stresses the fact that the meaning of a word cannot be limited to its dictionary definition. The dictionary definition of the word red only captures our conceptual knowledge about that word; it does not capture other aspects of its meaning, those that are relevant to perceiving and behaving in a coloured world.

The covert meaning of a word such as red is reflected in the way it is processed by the brain. Recent experiments have shown that presenting words indexing different sensory modalities (a colour word indexes the visual modality, a word referring to touch indexes the tactile modality, etc.) activates the brain areas corresponding to the sensory system indexed by each word: colour names were associated with increased activity in a ventral area at the junction of the occipital and temporal cortex, an area previously shown to be involved in colour knowledge, whereas tactile qualities were associated with activation within the somatosensory cortex.[4] Experiments like this one reveal the existence of functional networks in which the information relevant to the semantic attributes of each single word is processed. Further experiments will be needed to determine the precise role of these networks in the comprehension of words. For example, would a patient with a brain lesion affecting his perception of colours (the condition called 'colour blindness', following a bilateral ventral occipito-temporal lesion) show a specific deficit in making lexical decisions for colour words?

The covert meaning of action words

We will develop this idea with another category of words, those that refer to actions. Experiments very similar to those with the sensory modalities have shown that hearing an action word is associated with activation, not only of the language-specific areas in the temporal lobe, but also of areas in the motor system. Action words, and especially action verbs, are frequently associated with a body part: the verb grasp refers to a hand action, whereas the verb kick refers to the foot and the verb bite refers to the mouth. Accordingly, neuroimaging experiments have shown that the region of motor cortex activated while hearing an action word corresponds to the cortical representation of the movements involved in the action indexed by the word: hearing the verb grasp activates the hand motor area, whereas hearing the verb kick activates the foot area.[5] These changes in cortical activity take place very early (ca. 150 ms) after hearing the word, which precludes a possible role of conscious factors, such as forming a mental image of the action.

Action words describe actions, but this description extends beyond the conceptual knowledge of the action to which they refer. In addition to their consciously accessible definition, stored in the appropriate part of semantic memory, action words also carry implicit knowledge about how to perform the action they describe or, in other words, about the procedure involved. This procedural knowledge is distinct from conceptual knowledge, in the sense that it can hardly, if at all, be verbalized, and therefore cannot be consciously accessed by the subject. The storage and retrieval of learned movements and skills pertain to a procedural memory, distinct from the semantic memory where conceptual knowledge is stored. Clinical studies have abundantly demonstrated the dissociable character of procedural memory with respect to other forms of memory. Patients with amnesia following cortical atrophy or lesions of the hippocampus typically show impairment of their declarative memory (semantic as well as episodic), contrasting with a strikingly spared procedural memory.[6]

Procedures stored in procedural memory are automatically accessed whenever an action is to be executed. Consider, for example, a situation where a subject is instructed simply to judge whether a given action (e.g. grasping a big object with a single hand) is feasible or not, without attempting to execute it. This type of experiment has revealed (by measuring response times) that the subject, before giving his response, unconsciously simulates the action by rehearsing the appropriate motor procedures. Response time varies as a function of the difficulty of the motor task, which suggests that kinematic rules and biomechanical constraints are encoded as parts of these procedures.[7] Indeed, and not surprisingly, this process of simulation is associated with physiological changes in the motor cortex: motor cortical regions become activated in correspondence with the simulated action.[8] Nor is it surprising that these effects are seen, not only during simulation of one's own actions, but also during observation of actions performed by other agents. The now familiar concept of 'mirror neurons', developed by the Rizzolatti group since the 1990s,[9, 10] supports the idea that actions are represented in certain brain structures (the premotor cortex and the posterior parietal cortex) irrespective of who executes them: by simulating the action of another agent within one's own brain, one becomes able to understand what this agent is doing.

Thus, there is a close similarity between the neural mechanisms associated with observing an action and those associated with hearing action words. Action verbs like hit, grasp, run or tap each refer to the performance of a different action. Provided these actions are familiar to the listener, the corresponding verbs refer to the way the actions can be performed. Understanding action words is therefore a subclass of understanding actions. In man, the procedures stored in procedural memory might have become associated with the corresponding action words through learning. As a result of this association, hearing the word rehearses the procedure and activates the motor cortex. An interesting case is the category of words defining tools. Tools have a special status, in the sense that they are objects that refer to a potential action. Several experiments have shown that hearing tool names, as well as merely observing tools, is associated with activation of the motor system.[11, 12]

Action and language comprehension

One of the main features of language processing is reciprocity: production and comprehension of words must be parts of the same process in order for interlocutors to understand one another. This argument for the reciprocity of language production and comprehension is indeed very similar to that for the reciprocity of executing and understanding actions, as illustrated by the existence of mirror neurons. The fact that these neurons are activated both during execution and during observation of the same action suggests that they use the same code for signalling the action of the agent and that of the observer. What an observer perceives from the other, he can do, and vice versa. The same holds for language: what a listener hears from a speaker, he can understand, and vice versa. A classical experiment demonstrates that hearing speech sounds produces in the listener an activation of the motor cortex controlling the muscles of the vocal tract. For example, listening to words involving tongue muscles automatically activates the motor commands to these muscles.[13] This result is confirmed by a simple self-observation: if you pronounce the phoneme /ba/ while observing the mouth of another speaker doing the same thing, the task feels much easier than if the speaker you observe pronounces a different phoneme (e.g. /da/).[14] Indeed, these effects, which support the existence of a production/comprehension matching system for language, can also be explained using the concept of mirror neurons. Monkey mirror neurons are located in an area of the ventral premotor cortex which, in man, roughly corresponds to one of the major language areas, Broca's area in the inferior part of the frontal lobe. Interestingly, the human Broca area has also been found to be involved in many aspects of action representation at large: making decisions about the feasibility of an action,[15] observing actions,[16] naming and observing tools,[11, 17] or generating action words.[18]

Our present proposal, however, goes beyond language-specific mechanisms. It is part of a broader scheme in which language processing in general is distributed through sensory and motor areas; more specifically, the processing of action words is distributed across the motor system. Thus, at this point, the key question is the contribution of these motor mechanisms to the comprehension of action words. One way to answer this question is to study word comprehension in patients with a motor disease. The argument goes as follows: if the activation of the motor system contributes to the comprehension of action words, then patients in whom this activation is pathologically weakened should have difficulties in a task testing word comprehension, and these difficulties should be limited to action words.

In a recent study, Véronique Boulenger and her colleagues[19] addressed this question in patients with Parkinson's disease. Parkinson's disease affects the motor cortex indirectly. It is the consequence of the degeneration of a group of neurons located in the midbrain (in the substantia nigra). These neurons, through a complex circuitry involving the basal ganglia, control the activity of the motor cortex whenever a voluntary movement is to be executed. In their absence, the motor cortex remains deactivated, which is paralleled by a difficulty in moving (akinesia), the most typical symptom of the disease. The activity of these deficient substantia nigra neurons can, however, be partially restored by administering to the patient a precursor of dopamine, their normal neurotransmitter. Just prior to administration of the dopamine precursor, when motor cortex activity is weakened (in the OFF period), the patient presents a severe akinesia, whereas shortly afterwards (in the ON period) a more normal motor activity is restored. Boulenger et al. took advantage of this condition in a group of Parkinson patients, compared with a group of healthy subjects. They used the priming paradigm to measure lexical decisions for different types of visually presented words: concrete nouns (e.g. mill), action verbs (e.g. draw), or non-words (strings of consonants). One word was briefly presented as a prime, followed by the target word. The patient's task was to decide (by pressing a key) whether the target word was a real word or a non-word. There was a striking difference in the response times of the same patients between the OFF and the ON periods. In the ON period, patients with Parkinson's disease responded like healthy subjects for both action words and concrete nouns, i.e. their lexical decision was facilitated when action words and concrete nouns were preceded by a prime of the same category. By contrast, during the OFF period, priming was observed only for the concrete nouns, not for the action words. In other words, the status of the patients' motor system determined their ability to recognize words referring to actions.
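The pattern of results can be summarised as a comparison of priming effects (unrelated minus related response time) across word category and medication state. The short Python sketch below uses invented mean response times; only the qualitative pattern (priming for action verbs vanishing in the OFF period) reflects the findings described above, and none of the numbers come from the cited study.

```python
# All mean reaction times below are invented for illustration; only the
# qualitative pattern follows the study described in the text.
mean_rt_ms = {
    # (state, word category, prime type): hypothetical mean lexical-decision time
    ("ON",  "noun", "related"): 640, ("ON",  "noun", "unrelated"): 680,
    ("ON",  "verb", "related"): 650, ("ON",  "verb", "unrelated"): 690,
    ("OFF", "noun", "related"): 720, ("OFF", "noun", "unrelated"): 760,
    ("OFF", "verb", "related"): 755, ("OFF", "verb", "unrelated"): 757,
}

def priming_effect(state: str, category: str) -> int:
    """Priming effect = unrelated minus related RT (positive = facilitation)."""
    return (mean_rt_ms[(state, category, "unrelated")]
            - mean_rt_ms[(state, category, "related")])

for state in ("ON", "OFF"):
    for category in ("noun", "verb"):
        print(f"{state:>3} / {category:<4}: {priming_effect(state, category):>3} ms")
# Expected output pattern: clear facilitation in every cell except OFF/verb (~2 ms).
```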

This is not to say that patients with Parkinson's disease in the OFF period have lost the ability to understand the meaning of words like draw or pick. Their conceptual knowledge about these words is presumably intact, and they should be able to retrieve the correct dictionary definitions from their semantic memory. What they are missing is the covert meaning of action words, i.e. the meaning that would give them access to the procedure of the corresponding action and, ultimately, would allow them to perform that action efficiently. Indeed, patients with Parkinson's disease experience great difficulties in using their procedural knowledge: they are poor at motor learning, particularly at learning motor sequences, and they have difficulty rehearsing motor procedures when they have to simulate an action.[20]

Perspectives

The data we have described about words indexing sensory modalities and about action words can tentatively be generalized to other semantic categories. The hypothesis that part of language comprehension is grounded in sensory and motor mechanisms postulates that language pertains to a system for manipulating knowledge. In that perspective, words can be considered as tools for retrieving, using and communicating our own knowledge about aspects of the external world: our conceptual knowledge about concrete objects or abstract concepts, and our procedural knowledge about the physical forces and interactions involved in reaching goals. There are other types of knowledge as well, such as affective knowledge about emotions, feelings and states of mind in general, as experienced by ourselves and by others. Would it be possible to extend the notion of language grounding in brain structures to these semantic categories? If this proves possible, neuroscience could link up with other disciplines, such as dynamic psychology. This would be an opportunity for a fully interdisciplinary approach to language functions.

Marc Jeannerod is an emeritus professor at the Université Claude Bernard, Lyon, France. He is a member of the Académie des Sciences. His main scientific interests lie in the field of motor cognition, at the interface between cognitive neuroscience, psychology and psychiatry.

References

1. Freud, S. (1891) Zur Auffassung der Aphasien. Eine kritische Studie (Leipzig und Wien: Franz Deuticke).
2. Meyer, D. E. and Schwaneveldt, R. W. (1976) Meaning, memory structure and mental processes. Science, 192, 27–33.
3. Warrington, E. K. and Shallice, T. (1984) Category specific semantic impairments. Brain, 107, 829–853.
4. Goldberg, R. F., Perfetti, C. A. and Schneider, W. (2006) Perceptual knowledge retrieval activates sensory brain regions. Journal of Neuroscience, 26, 4917–4921.
5. Hauk, O., Johnsrude, I. and Pulvermüller, F. (2004) Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41, 301–307.
6. Milner, B., Corkin, S. and Teuber, H. L. (1968) Further analysis of the hippocampal amnesic syndrome: 14-year follow-up of H.M. Neuropsychologia, 6, 215–234.
7. Frak, V. G., Paulignan, Y. and Jeannerod, M. (2001) Orientation of the opposition axis in mentally simulated grasping. Experimental Brain Research, 136, 120–127.
8. Ehrsson, H., Geyer, S. and Naito, E. (2003) Imagery of voluntary movements of fingers, toes and tongue activates corresponding body-part specific motor representations. Journal of Neurophysiology, 90, 3304–3316.
9. Rizzolatti, G., Fadiga, L., Gallese, V. and Fogassi, L. (1996) Premotor cortex and the recognition of motor action. Cognitive Brain Research, 3, 131–141.
10. Rizzolatti, G. and Fabbri-Destro, M. (2007) Understanding actions and the intentions of others: the basic neural mechanism. European Review, 15, 209–222.
11. Martin, A., Wiggs, C. L., Ungerleider, L. G. and Haxby, J. V. (1996) Neural correlates of category-specific knowledge. Nature, 379, 649–652.
12. Chao, L. L. and Martin, A. (2000) Representation of manipulable man-made objects in the dorsal stream. NeuroImage, 12, 478–494.
13. Fadiga, L., Craighero, L., Buccino, G. and Rizzolatti, G. (2002) Speech listening specifically modulates the excitability of tongue muscles: a TMS study. European Journal of Neuroscience, 15, 399–402.
14. Kerzel, D. and Bekkering, H. (2000) Motor activation from visible speech: evidence from stimulus-response compatibility. Journal of Experimental Psychology: Human Perception and Performance, 26, 634–647.
15. Schubotz, R. J. and von Cramon, Y. (2004) Sequences of abstract non-biological stimuli share ventral premotor cortex with action observation and imagery. Journal of Neuroscience, 24, 5467–5474.
16. Grafton, S. T., Arbib, M. A., Fadiga, L. and Rizzolatti, G. (1996) Localization of grasp representations in humans by positron emission tomography. 2. Observation compared with imagination. Experimental Brain Research, 112, 103–111.
17. Perani, D., Cappa, S., Bettinardi, V., Bressi, S., Gorno-Tempini, M., Matarrese, M. and Fazio, F. (1995) Different neural systems for the recognition of animals and man-made tools. NeuroReport, 6, 1637–1641.
18. Tettamenti, M., Buccino, G., Saccuman, M. C., Gallese, V., Danna, M., Scifo, P., Fazio, F., Rizzolatti, G., Cappa, S. F. and Perani, D. (2005) Listening to action related sentences activates fronto-parietal motor circuits. Journal of Cognitive Neuroscience, 17, 273–281.
19. Boulenger, V., Mechtouff, L., Thobois, S., Broussolle, E., Jeannerod, M. and Nazir, T. A. (2008) Word processing in Parkinson's disease is impaired for action verbs but not for concrete nouns. Neuropsychologia, 46, 743–756.
20. Dominey, P. F. and Jeannerod, M. (1997) Contribution of frontostriatal function to sequence learning in Parkinson's disease. Evidence for dissociable systems. NeuroReport, 8, 3–9.