
The Action-Sentence Compatibility Effect in ASL: the role of semantics vs. perception*

Published online by Cambridge University Press:  21 November 2014

KRISTEN SECORA*
San Diego State University and University of California San Diego

KAREN EMMOREY
San Diego State University

*Address for correspondence: Kristen Secora, Lab for Language and Cognitive Neuroscience, 6495 Alvarado Road, Suite 200, San Diego, CA 92120. tel: (619) 594-8049; fax: (619) 594-8056; e-mail: ksecora@ucsd.edu

Abstract

Embodied theories of cognition propose that humans use sensorimotor systems in processing language. The Action-Sentence Compatibility Effect (ACE) refers to the finding that motor responses are facilitated after comprehending sentences that imply movement in the same direction. In sign languages there is a potential conflict between sensorimotor systems and linguistic semantics: movement away from the signer is perceived as motion toward the comprehender. We examined whether perceptual processing of sign movement or verb semantics modulate the ACE. Deaf ASL signers performed a semantic judgment task while viewing signed sentences expressing toward or away motion. We found a significant congruency effect relative to the verb’s semantics rather than to the perceived motion. This result indicates that (a) the motor system is involved in the comprehension of a visual–manual language, and (b) motor simulations for sign language are modulated by verb semantics rather than by the perceived visual motion of the hands.

Research Article

Copyright © UK Cognitive Linguistics Association 2014

1. Introduction

Embodied theories of cognition suggest that the functioning of the mind is inextricably bound to how the body interacts with the world. For example, cognition can be grounded in bodily experiences through the formation of multimodal (i.e., sensory and motor) representations of events or actions. These representations can be reactivated as mental simulations when thinking about such events or actions (e.g., Barsalou, 1999, 2008). Sensorimotor simulations are thought to work immediately and without conscious thought (Barsalou, 2008). Many studies have now shown that spoken and written language comprehension can activate perceptual and motor simulations (e.g., Borghi, Glenberg, & Kaschak, 2004; Borreggine & Kaschak, 2006; Glenberg & Kaschak, 2002; Kaschak & Borreggine, 2008; Kaschak et al., 2005; Kaschak, Zwaan, Aveyard, & Yaxley, 2006; Scorolli & Borghi, 2007; Yaxley & Zwaan, 2007; Zwaan, Madden, Yaxley, & Aveyard, 2004; Zwaan & Taylor, 2006). While this phenomenon appears to be robust and replicable in different spoken languages, motor system involvement in the comprehension of sign languages has been less studied. The role of mental simulation of movement may differ for languages that use movement itself to linguistically express motion.

Evidence from compatibility effects suggests that motor and visual brain areas can be activated during language comprehension. Compatible (or congruent) responses between two modalities (e.g., auditory language and a visual stimulus) result in faster reaction times than incompatible responses. For example, participants responded faster to spoken sentences expressing rotation in one direction while simultaneously viewing a visual percept spinning in a congruent direction (Kaschak et al., 2005). Motor facilitation following language comprehension has been called the Action-Sentence Compatibility Effect (ACE; Glenberg & Kaschak, 2002). Comprehending sentences denoting direction-specific movement (e.g., a ‘toward’ movement as in “Courtney handed you the notebook”) facilitated motor responses that required arm movement in the same direction as that implied by the linguistic stimulus (i.e., a ‘toward’ arm movement; Glenberg & Kaschak, 2002). Evidence from this and other compatibility effects suggests that simulations can encode precise information relevant to the linguistic content, such as rotational direction (e.g., Zwaan & Taylor, 2006), handshape (e.g., Bergen & Wheeler, 2005; Klatzky, Pellegrino, McCloskey, & Doherty, 1989), and direction of movement (e.g., Glenberg & Kaschak, 2002).

Many studies have examined the role of the motor system during the execution of motor simulations (e.g., Borreggine & Kaschak, 2006; Glenberg & Kaschak, 2002), but the role of movement perception has been less well studied. Further, it remains unclear whether perceptual processing of linguistic movement plays a role in constructing mental simulations. Specifically, it is not clear whether perceptual systems or semantic systems guide the linguistic simulation in cases when conflicting information is received from lower-level perceptual systems and higher-order semantic systems. The potentially different roles of semantic and perceptual systems in the construction of mental simulations cannot be determined solely by studies of spoken language because spoken languages do not express linguistic information via highly visible movements. Compatibility effects with spoken languages attest to the role of semantics in the execution of motor simulations, but these results cannot provide evidence regarding the role of linguistic movement in the execution of such simulations. In order to investigate this question, languages in the visual–gestural modality must be examined.

Sign languages, as visuospatial languages, make use of the body (hands, face, and arms) as the articulators that convey linguistic information. Because movement is a phonological parameter of the language – itself potentially conveying meaning – there is the possibility of conflict between the movement that the comprehender perceives and the movement that is conveyed by the semantics of the sentence being signed. For example, if the signer expresses the sentence “You open the drawer”, the comprehender views the signer’s hands moving away from the comprehender’s own body (‘away’ perceptually), while the semantics of the sentence mean that the drawer is being moved toward the comprehender (‘toward’ semantically; see Figure 1). Therefore, because of the modality, sign languages allow examination of how a perceptual and semantic conflict can affect the execution of mental simulations. This perceptual–semantic conflict likely holds for most (if not all) sign languages, including American Sign Language (ASL).

Fig. 1. Potential perceptual and semantic conflict in the ASL verb OPEN-DRAWER. A) The verb OPEN-DRAWER as viewed by the participant on a computer screen during the experiment. B) Cartoon depicting the direction that the comprehender understands the movement of the verb to mean, i.e., movement toward one’s own body (therefore the semantic direction is toward). C) Cartoon depicting the direction that the comprehender views the signer’s hands moving with reference to the comprehender’s own body (therefore the perceptual direction is away).

One way in which perceiving movement could affect the ACE is via bottom-up interference from the visually perceived signed signal. If viewing movement prevents sensorimotor activation from the linguistic stimuli while constructing the motor simulation, the ACE should not be found when signers comprehend signed sentences depicting directional motion events. Sign language is perceived visually, and hand and arm movements are part of the visual signal that must be parsed to recognize signs (e.g., movement is a required phonological feature of signs). In order to comprehend the signed signal, the brain must process the visual movements of the signer’s hands and arms. Previous research suggests that when the visual system is engaged by a stimulus, it is less available to be used in constructing a language-induced simulation (Kaschak et al., 2005; Meteyard, Zokaei, Bahrami, & Vigliocco, 2008). For example, Kaschak and colleagues (2005) had participants view simple visual illusions depicting motion either toward or away from the participant while simultaneously listening to sentences expressing movement either toward or away from the body. Sensibility judgements to the sentences were faster when the visual percept was incongruent with the physical arm movement participants executed to make their response. This result suggests that concurrently viewing a non-linguistic moving stimulus interferes with the priming effect hypothesized to underlie the ACE (Kaschak et al., 2005). Similarly, Meteyard et al. (2008) showed that simple moving dot displays (depicting either upward or downward motion) superimposed over a word denoting upward or downward motion (e.g., ‘rise’ or ‘fall’) also resulted in a response advantage for incongruent responses. Thus, there is evidence that low-level perceptual processes can interfere with the simulation executed during comprehension of linguistic stimuli, creating a ‘mismatch advantage’ rather than the canonical congruency effect (or ‘match advantage’) seen in the ACE. If processing perceptual movement affects the ACE, then mental simulations might be constructed relative to the perceptual movement direction rather than the movement direction implied by the semantics.

If, however, the simulation is constructed only with respect to the direction encoded by the semantics of the verb, there should be significant facilitation when the semantic direction matches the response direction. This pattern is similar to what is canonically found with the ACE for spoken languages, as opposed to the mismatch advantage outlined above. For example, the semantics of a sentence such as “You open the drawer” potentially activates features in the mental simulation associated with the ‘toward’ movement and not the perceived ‘away’ movement as the signer’s hands move away from the comprehender (who is facing the signer). In this case, mental simulations would be constructed relative to the direction implied by the semantics of the sentence, irrespective of the movement of the signer’s hands in articulating the sentence. Thus, if the ACE is modulated by semantics alone and not affected by the movement that is perceptually visible, then there should be no effect of congruency when the perceptual direction matches the response direction. We utilized a sensibility judgement task with ASL sentences to examine the ACE in ASL and to assess the potentially different roles of perceptual processing of movement and comprehension of semantically implied movement in the construction of mental simulations.

2. Method

2.1. Participants

Forty-two deaf ASL users (24 female; mean age = 34 years, SD = 11) were recruited from the southern California community and participated in the experiment at San Diego State University. Thirty were native signers (exposed to ASL from birth) and twelve were early signers (i.e., they learned sign language prior to age 8).

2.2. Materials

The stimuli consisted of signed sentences denoting physical movement either toward or away from the body (e.g., “You throw a ball” or “You put on glasses”). All the sentences involved the participant, with the sign model producing the sign YOU (a point directly toward the addressee/camera) to indicate that the participant was involved in the action. The verb was always produced last in the signed sentence (e.g., the order of the signs would be: BALL YOU THROW). There were eighty critical stimuli: forty sentences depicted a movement semantically away from the subject (e.g., “You throw a ball”) and the other forty depicted a movement semantically toward the subject (e.g., “You put on glasses”). Equal numbers of sentences contained only ‘you’ as the agent (1-person sentences, e.g., “You put on glasses”) and both ‘you’ and the signer (‘me’) as agent and recipient of the action (2-person sentences; e.g., “You give me the coffee cup”). Two-person sentences express the same direction of movement both semantically and perceptually (Figure 2A, B), but 1-person sentences express opposite directions of movement semantically and perceptually (Figure 2C, D). The ‘me’ in the 2-person sentences always referred to the sign model on a computer screen directly across from the comprehender, who sat facing the computer screen. This set-up resulted in straight-path forward or backward movement of the signer’s hand.

Fig. 2. Semantic and perceptual movement in sample 1-person and 2-person sentences. A) This 2-person sentence expresses the same movement semantically as well as perceptually: away from the comprehender (‘you’ in the sentence). B) The movement perceived by the comprehender in the verb is also away from the comprehender’s body. C) In this 1-person sentence, the semantics of the sentence expresses movement toward the body. D) The movement of the verb as perceived by the comprehender, however, is an away movement relative to the comprehender’s own body.
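
The relation illustrated in Figure 2 can be summarized compactly. The following minimal Python sketch is ours, not part of the study's materials; it simply encodes the rule that 2-person sentences have identical semantic and perceptual directions (relative to the comprehender's body), whereas 1-person sentences have opposite directions because the comprehender faces the signer.

```python
# Illustrative sketch (not the study's materials): deriving the perceptual
# movement direction from a sentence's semantic direction and person type.
# Directions are coded relative to the comprehender's own body.

def perceptual_direction(semantic_direction: str, n_persons: int) -> str:
    """semantic_direction is 'toward' or 'away'; n_persons is 1 or 2."""
    if n_persons == 2:
        # 2-person sentences: semantic and perceptual directions coincide.
        return semantic_direction
    # 1-person sentences: the comprehender, facing the signer, sees the opposite.
    return "away" if semantic_direction == "toward" else "toward"

# Example items using the sentences cited above (English translations).
stimuli = [
    {"sentence": "You put on glasses",         "semantic": "toward", "persons": 1},
    {"sentence": "You throw a ball",           "semantic": "away",   "persons": 1},
    {"sentence": "You give me the coffee cup", "semantic": "away",   "persons": 2},
]
for item in stimuli:
    item["perceptual"] = perceptual_direction(item["semantic"], item["persons"])
    print(item["sentence"], "| semantic:", item["semantic"],
          "| perceptual:", item["perceptual"])
```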

The original Glenberg and Kaschak (2002) stimuli included a third person noun as the recipient of the action (e.g., “You handed Courtney the notebook”); however, directly translating this construction into ASL would result in signer hand movements that are diagonal to the body (i.e., toward a side location associated with the non-present referent ‘Courtney’), rather than movements directly toward or away from the signer’s body. Since the goal of the current study was to examine the role of perceiving movement directly toward and away from the signer, we did not include any third person referent nouns. Rather, the agent and recipient of the action were always either the signer (the signer points to herself) or the comprehender (signer points toward the addressee/camera).

Six native ASL signers translated the ASL sentences into English to ensure they were being comprehended as intended. Six of the sentences containing both ‘you’ and ‘me’ were not translated consistently among the native signers (one with away movement and five with toward movement). These 2-person sentences were sometimes translated as 1-person sentences because the recipient was left unspecified; e.g., the 2-person target sentence “I throw a paper airplane to you” was translated as the 1-person sentence “I throw a paper airplane”. Therefore, 2-person sentences that were translated as 1-person sentences by three or more of the six signers were not included in the final analysis.

Finally, eighty nonsensical sentences were constructed using similar content and structures as the critical sentences (e.g., “You ask the apple tree”). There were thirty-two nonsense sentences depicting away movement, thirty-three depicting toward movement, and fifteen with neutral, non-directional movement (e.g., “You stir a foreign language”).

2.3. Procedure

Participants viewed digital video clips of ASL sentences presented on a computer screen and indicated whether the sentence made sense by pressing one of two keys on the keyboard. The keyboard was turned 90° from the normal orientation such that the long axis extended toward the participant. To begin viewing the sentence, the participant pressed and held the h key in the center of the keyboard. When they were ready to make their response, they lifted their finger from the h key and moved it to press the intended key at the edge of the keyboard (either apostrophe [’] or a). After each response, the participant again pressed the h key, which initiated the presentation of the next sentence video. Videos disappeared from the screen as soon as the h key was released even if the sentence had not been completed. Participants viewed one of two pseudo-randomized lists of sentences. Half of the participants responded ‘yes’ with a movement away from their body, while the other half responded ‘yes’ with a movement toward their body.

The dependent variable was release time (RT), recorded from the time the center h key was pressed until it was released, and thus time spent viewing the sentence was included in the raw measure of release time. To account for differences in sentence duration between stimuli, the duration of each sentence was subtracted from the release time for that stimulus (the mean sentence duration was 2439 ms; SD = 592 ms). Therefore, a negative RT indicates a release before the end of the sentence, and a positive RT represents how long after the end of the sentence participants waited before they lifted their finger to respond. The majority (95%) of RTs were positive, indicating that the sensibility judgement was initiated after viewing the critical verb, which was always the last sign in the sentence.
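
As a concrete illustration of this adjustment, here is a minimal Python sketch (variable names are ours, not from the study's analysis code) that subtracts the sentence duration from the raw release time.

```python
# Sketch of the duration adjustment described above. A negative adjusted RT
# means the participant released the key before the end of the sentence video.

def adjusted_rt(raw_release_ms: float, sentence_duration_ms: float) -> float:
    return raw_release_ms - sentence_duration_ms

print(adjusted_rt(3100.0, 2439.0))  # 661.0 ms after sentence offset
print(adjusted_rt(2200.0, 2439.0))  # -239.0 ms: released before the sentence ended
```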

Responses were coded as either ‘congruent’ or ‘incongruent’ depending on the direction of the participant’s response and the direction of the stimulus. When the response direction matched the direction of the stimulus, the trial was labeled ‘congruent’; when they did not match, the trial was labeled ‘incongruent’. Since we were interested in whether the ACE is affected by the direction implied by the semantics or by the direction of the movement the comprehender perceives, each stimulus was classified as congruent or incongruent twice – once based on the direction implied by the semantics (‘semantic analysis’) and once based on the perceived motion of the stimulus (‘perceptual analysis’). The potential conflict between semantics and perception is illustrated by a stimulus such as “You open the drawer” (see Figure 1 and Figure 2C, D). The direction implied by the semantics of the sentence is toward the participant (‘toward’ semantically), but the movement the comprehender actually views when watching the sign model express this concept is away from the participant’s own body (‘away’ perceptually).
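
The dual coding can be sketched as follows; this is an illustrative Python snippet with names of our choosing, not the study's analysis code.

```python
# Sketch of the dual congruency coding: each trial is classified once against
# the direction implied by the verb's semantics and once against the visually
# perceived direction of the signer's hands.

def code_trial(response_direction: str,
               semantic_direction: str,
               perceptual_direction: str) -> dict:
    return {
        "semantic_congruency": "congruent"
            if response_direction == semantic_direction else "incongruent",
        "perceptual_congruency": "congruent"
            if response_direction == perceptual_direction else "incongruent",
    }

# "You open the drawer": semantically toward the participant, perceptually away.
print(code_trial("toward", semantic_direction="toward", perceptual_direction="away"))
# -> semantic analysis: congruent; perceptual analysis: incongruent
```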

3. Results

Only correct responses to critical sentences were analyzed. Overall, accuracy was high (94% of responses were correct). Release times shorter than 250 ms were eliminated (too short for the participant to have actually viewed the sentence), and RTs greater than 5000 ms were also removed (0.3% of the data). In addition, release times more than 2 SDs above or below each participant’s mean were removed (4.6% of the data). Mean reaction times for each condition in the semantic analysis and the perceptual analysis are presented in Table 1.

Table 1. Mean reaction times (ms) for the semantic and perceptual analyses for congruent and incongruent response directions

Note: Standard errors are indicated in parentheses.
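
The trimming steps described above could be implemented as in the following sketch (column names and the exact order of the cutoffs are our assumptions, not taken from the study's scripts).

```python
import pandas as pd

# Sketch of the data trimming: drop incorrect trials, drop RTs < 250 ms or
# > 5000 ms, then drop RTs beyond +/- 2 SD of each participant's own mean.

def trim(df: pd.DataFrame) -> pd.DataFrame:
    df = df[df["correct"]]                                  # correct responses only
    df = df[(df["rt"] >= 250) & (df["rt"] <= 5000)]          # absolute cutoffs
    stats = df.groupby("participant")["rt"].agg(["mean", "std"])
    df = df.join(stats, on="participant")
    df = df[(df["rt"] - df["mean"]).abs() <= 2 * df["std"]]  # per-participant +/- 2 SD
    return df.drop(columns=["mean", "std"])
```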

A 2 (response direction: away, toward) × 2 (congruency with semantics: congruent, incongruent) mixed ANOVA was conducted. Analyses were done by participants (F1) and by items (F2). Response direction was a between-subjects factor, and congruency was a within-subjects factor. The main effect of congruency was significant (congruent faster than incongruent responses; F1(1,40) = 4.257, p = .046, $\eta_p^2$ = .096; F2(1,73) = 6.631, p = .012, $\eta_p^2$ = .083). There was no effect of response direction (F1(1,40) = 0.076, p = .784, $\eta_p^2$ = .002; see Footnote 1) and no interaction between response direction and congruency (F1(1,40) = 1.463, p = .234, $\eta_p^2$ = .035). When the stimuli were analyzed relative to the movement perceived by the comprehender, there was no significant effect of congruency (F1(1,40) = 0.059, $\eta_p^2$ = .001; F2(1,73) = 0.316, $\eta_p^2$ = .004). Note that the perceptual analysis is not simply the opposite of the semantic analysis because 1-person and 2-person sentences differ with respect to semantic and perceptual movement. Although 1-person sentences involve opposite movement directions for the semantic and perceptual analyses, 2-person sentences express the same direction of movement for both analyses (see Figure 2). Thus, the coded direction of 2-person sentences does not change between the semantic and perceptual analyses, whereas 1-person sentences are coded with different directions in the two analyses. Given that the semantic and perceptual analyses are only dissociable for 1-person sentences, we also analyzed these sentences separately. A congruency effect was observed for the semantic analysis, although the by-participants effect was marginal due to reduced power (F1(1,40) = 3.224, p = .080, $\eta_p^2$ = .075; F2(1,39) = 4.177, p = .048, $\eta_p^2$ = .097). Nonetheless, this finding suggests that when the perceptual and semantic analyses are directly pitted against each other, only the semantic analysis yields a stimulus–response compatibility effect.
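
For readers who want to run this style of analysis on their own data, here is a minimal sketch of the by-participants (F1) 2 × 2 mixed ANOVA using the pingouin package; the data frame below is randomly generated placeholder data, not the study's results, and the by-items (F2) analysis, which treats items as the random factor, is not shown.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Placeholder data: one mean RT per participant x congruency cell, with
# response direction (away vs. toward) as the between-subjects factor.
rng = np.random.default_rng(0)
rows = []
for subj in range(42):
    direction = "away" if subj < 21 else "toward"
    for congruency in ("congruent", "incongruent"):
        rows.append({"participant": subj,
                     "response_direction": direction,
                     "congruency": congruency,
                     "rt": rng.normal(800, 150)})
df = pd.DataFrame(rows)

# 2 (response direction, between) x 2 (congruency, within) mixed ANOVA.
aov = pg.mixed_anova(data=df, dv="rt",
                     within="congruency",
                     between="response_direction",
                     subject="participant",
                     effsize="np2")  # partial eta squared, as reported in the text
print(aov[["Source", "F", "p-unc", "np2"]])
```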

In summary, the ACE was only observed when responses were analyzed with respect to the semantics of the movement. In other words, the ACE was found only for the movement direction implied by the semantics of the sentence, rather than for the movement direction that the comprehender perceived.

4. Discussion

We examined the Action-Sentence Compatibility Effect in ASL to determine whether perceptually processing linguistic movement affects execution of motor simulations. We found the ACE in ASL was present only with respect to the direction of the movement conveyed by the semantics of the verbs in the signed sentences, and it was not affected by the direction of movement the comprehender perceived in the signs themselves. The presence of the ACE in a signed language suggests that the mechanisms for comprehending movement-related language are similar regardless of modality. Simulations seem to be driven by the semantics encoded by the language, and when direction of the perceptual movement conflicts with the semantic direction, the simulation follows the semantics.

In a 2005 publication in the Proceedings of the Cognitive Science Society, Tseng and Bergen also reported finding an ACE in ASL using a sign matching task. Signers (with varying proficiency in ASL) and non-signers were asked to indicate whether two brief videos showed the same ASL sign or not. A significant ACE (i.e., faster RTs when response direction matched the directional movement of the signs) was found only for participants who knew ASL and only when the direction of movement conveyed meaning, as in the signs THROW or EAT. No ACE was observed when movement direction was simply phonological, as in the signs GIRL or MISTAKE. These results indicate that familiarity with ASL is critical for motor facilitation to occur and that the ACE requires access to semantics. However, there was considerable variability in the types of movement contained in the signs and in which signs were assigned to the three movement categories used in that study: metaphorical movement (as in YESTERDAY), ‘concrete’ semantic movement, and phonological movement. This variability occurred in part because the stimuli were limited to items available from the Michigan State University online ASL dictionary. Additionally, the videos from this site are of relatively low quality, and each sign was presented to participants as 4 still frames combined to create a 450 ms video. In an attempt to replicate the Tseng and Bergen (2005) findings with more controlled stimuli and more fluent signers, we presented twenty deaf native and early signers (who acquired ASL before age 8) and twenty-nine hearing non-signers with high-quality full video recordings of signs (mean length = 593 ms at 29 frames per second). To reduce variability in the stimuli, we selected signs that minimized movement other than forward/backward movement (e.g., diagonal movement), ensured that the type of movement in the concrete movement condition was semantically literal (rather than metaphorical), and made sure the signs presented were common variants. The procedure was the same as in Tseng and Bergen (2005), including the same sign matching task and the same three categories of movement stimuli. However, we failed to find a significant ACE in any condition. This null finding suggests that the sign matching task may be inconsistent with respect to whether the semantic system is automatically recruited to perform the task, particularly when the task is not perceptually challenging (i.e., identifying signs from the full video is much easier than from 4 video frames). In contrast, the sentence sensibility judgement task used in the current study requires semantic processing, but it cannot be performed by non-signers.

Recent neuroimaging data also support the idea that perceptual processing of movement within the linguistic signal does not prevent linguistic semantics from modulating, in a top-down fashion, activation in motion-sensitive visual cortex (MT+; McCullough, Saygin, Korpics, & Emmorey, 2012). In this fMRI study, native signers viewed ASL motion sentences that described an event with a high amount of motion (e.g., “The deer walked along the hillside”) or static sentences that described an event with little or no movement (e.g., “The deer slept along the hillside”). The amount of perceived sign movement in the videos did not differ between the two sentence conditions. McCullough et al. (2012) found that neural activity in MT+ was greater for motion sentences than for static sentences, suggesting that MT+ is not only involved in processing the visually perceived movements of signs but is also sensitive to the motion semantics of those signs. These results are consistent with our findings and support the hypothesis that perceptual processing of sign movement does not block the mental simulation of motion features when comprehending motion events expressed in sign language.

While the current study and Tseng and Bergen (2005) found evidence of the ACE for ASL, Perniss, Vinson, Fox, and Vigliocco (2013) failed to find an ACE for sentence processing in BSL (British Sign Language). However, they did find an ACE when the same participants read English sentences. There are several potential reasons that Perniss et al. (2013) were unable to detect the ACE. First, the BSL stimuli were exact translations of Glenberg and Kaschak’s (2002) original English sentences, whereas our stimuli were created de novo by a deaf native ASL signer. It is possible that sentences created in sign language itself may feel more natural in ASL (or may be easier to understand) than signed sentences created through English translation. Second, the dependent variable in the Perniss et al. (2013) study was total release time, which included the duration of the BSL sentence, whereas, in the current study, sentence length was subtracted from the total RT. It is possible that not taking sentence durations into account made it harder to detect the subtle effects of response congruency. Another possibility is that with only sixteen deaf participants, the Perniss et al. (2013) study did not have enough power to detect an ACE for sign language, under the assumption that the ACE may be more variable for signed than for written sentences.

The observed ACE in ASL suggests that when comprehenders view a signer executing linguistic motor movements, they simulate the action expressed by the signer as if they themselves were performing the motor action (i.e., from the signer’s point of view) and not from the visual perspective from which they actually see the movements. By convention, spatial descriptions in ASL, as well as in many other signed languages, are also interpreted from the signer’s point of view (Pyers, Perniss, & Emmorey, 2008). An interesting consequence of this conventionalization is that comprehenders must ‘ignore’ the perceived relative location of the signer’s hands and may engage in motor embodiment (imagining their own body in the signer’s location) in order to understand the spatial description (Pyers, Perniss, & Emmorey, unpublished observations). For example, to indicate that a bed is on the left (as one enters a room), an ASL signer would position a classifier handshape for a flat rectangular object (a ‘B’ handshape, palm down) to his or her left. However, the comprehender, who canonically faces the signer, perceives the position of the bed (the classifier handshape) on his or her right. To understand ASL spatial descriptions, comprehenders must interpret the description from the signer’s perspective, not from their own view of the signer’s hands. Thus, sign language comprehenders may routinely engage in motor embodiment when comprehending locative, motion, and action events.

Speakers’ gestures that accompany verbal descriptions might be interpreted using similar embodied mechanisms. Skipper, Goldin-Meadow, Nusbaum, and Small (2009) suggest that comprehenders use their knowledge of how to produce hand and arm movements to “extract semantic information from the [gesturer’s] hands” (p. 661). They found that brain areas involved in the production and perception of hand and arm movements are strongly connected to language areas during comprehension of descriptions that include meaningful co-speech gestures. Thus, comprehenders seem to simulate the action of the gesture as if they themselves were performing the motor action, similar to what we have suggested for sign comprehenders. Furthermore, comprehenders’ brains seem to differentiate between meaningful co-speech gestures and similar-looking self-adapting movements (e.g., touching hair, adjusting clothing; Skipper, Goldin-Meadow, Nusbaum, & Small, 2007). Simply perceiving movement does not seem to be enough to engage language comprehension brain areas. Rather, the movements are interpreted based on their semantic communicative content. Together these findings further support the idea that semantic processing drives the link between sensorimotor areas and language comprehension areas during the interpretation of both co-speech gestures and signed sentences depicting concrete movement.

While this study provided evidence for the presence of the ACE in ASL, the precise nature of this effect in a visual language still remains unexplored. Future research examining the compatibility effect when verb position is varied would indicate whether the timing of simulation is tied specifically to the verb, or if it is relative to the end of the sentence. Additional research should also examine whether simulation is restricted to sentences that directly involve the comprehender as a participant, or if it also occurs when sentences involve a third party (as in “He is giving Courtney the notebook”). It is also unclear what effect the specific perspective taken by the comprehender has on mental simulations. That is, are mental simulations altered depending on whether the comprehender simulates from an egocentric perspective, versus adopting the perspective of the signer? Understanding the specific effects of language modality and perspective taking on mental simulations will expand our understanding of the processes involved in language comprehension and how motor embodiment relates to these processes.

Footnotes

* This work was supported by a grant from the National Institute on Deafness and Other Communicative Disorders (NIDCD) (R01 DC010997) awarded to Karen Emmorey and San Diego State University. Kristen Secora was supported by a training grant from NIDCD (T32 DC0007361). We thank the members of the Laboratory for Language and Cognitive Neuroscience at SDSU for their help with the study, Michael Secora for help with the illustrations, and all of the deaf participants who made this research possible.

1 The F2 analysis cannot contain response direction as a factor because direction is identical to congruency for each item.

References


Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22(4), 577–660.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
Bergen, B., & Wheeler, K. (2005). Sentence understanding engages motor processes. Proceedings of the Twenty-Seventh Annual Conference of the Cognitive Science Society, 238–243.
Borghi, A. M., Glenberg, A. M., & Kaschak, M. P. (2004). Putting words in perspective. Memory & Cognition, 32(6), 863–873.
Borreggine, K. L., & Kaschak, M. P. (2006). The Action-Sentence Compatibility Effect: it’s all in the timing. Cognitive Science, 30, 1097–1112.
Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9(3), 558–565.
Kaschak, M. P., & Borreggine, K. L. (2008). Temporal dynamics of the action-sentence compatibility effect. Quarterly Journal of Experimental Psychology, 61(6), 883–895.
Kaschak, M. P., Madden, C. J., Therriault, D. J., Yaxley, R. H., Aveyard, M., Blanchard, A. A., & Zwaan, R. A. (2005). Perception of motion affects language processing. Cognition, 94, B79–B89.
Kaschak, M. P., Zwaan, R. A., Aveyard, M., & Yaxley, R. H. (2006). Perception of auditory motion affects language processing. Cognitive Science, 30, 733–744.
Klatzky, R. L., Pellegrino, J. W., McCloskey, B. P., & Doherty, S. (1989). Can you squeeze a tomato? The role of motor representations in semantic sensibility judgments. Journal of Memory and Language, 28(1), 56–77.
McCullough, S., Saygin, A. P., Korpics, F., & Emmorey, K. (2012). Motion-sensitive cortex and motion semantics in American Sign Language. NeuroImage, 63, 111–118.
Meteyard, L., Zokaei, N., Bahrami, B., & Vigliocco, G. (2008). Visual motion interferes with lexical decision on motion words. Current Biology, 18(17), R732–R733.
Perniss, P., Vinson, D., Fox, N., & Vigliocco, G. (2013). Comprehending with the body: action compatibility in sign language? Proceedings of the Thirty-Fifth Annual Conference of the Cognitive Science Society, 1133–1138.
Pyers, J., Perniss, P., & Emmorey, K. (2008). Viewpoint in the visual-spatial modality. Paper presented at the 30th annual convention of the German Society of Linguistics, workshop on Gestures: A comparison of signed and spoken languages.
Scorolli, C., & Borghi, A. M. (2007). Sentence comprehension and action: effector specific modulation of the motor system. Brain Research, 1130, 119–124.
Skipper, J. I., Goldin-Meadow, S., Nusbaum, H. C., & Small, S. L. (2007). Speech-associated gestures, Broca’s area, and the human mirror system. Brain and Language, 101(3), 260–277.
Skipper, J. I., Goldin-Meadow, S., Nusbaum, H. C., & Small, S. L. (2009). Gestures orchestrate brain networks for language understanding. Current Biology, 19(8), 661–667.
Tseng, M., & Bergen, B. (2005). Lexical processing drives motor simulation. Proceedings of the Twenty-Seventh Annual Conference of the Cognitive Science Society, 2206–2211.
Yaxley, R. H., & Zwaan, R. A. (2007). Simulating visibility during language comprehension. Cognition, 105, 229–236.
Zwaan, R. A., Madden, C. J., Yaxley, R. H., & Aveyard, M. E. (2004). Moving words: dynamic representations in language comprehension. Cognitive Science, 28, 611–619.
Zwaan, R. A., & Taylor, L. J. (2006). Seeing, acting, understanding: motor resonance in language comprehension. Journal of Experimental Psychology: General, 135(1), 1–11.