
Look both ways before crossing the street: Perspectives on the intersection of bimodality and bilingualism

Published online by Cambridge University Press:  09 July 2015

BENJAMIN ANIBLE, Department of Linguistics, University of New Mexico, USA
JILL P. MORFORD, Department of Linguistics, University of New Mexico, USA

Address for correspondence: Benjamin Anible, Department of Linguistics, MSC03 2130, Albuquerque, NM 87131. banible@unm.edu

Type: Peer Commentaries
Copyright © Cambridge University Press 2015

In 1939, Murat Roberts, NYU Professor of German, warned readers about the potentially harmful effects of societal bilingualism: “When two languages come to be spoken by the same society for the same purposes, both of these languages are certain to deteriorate. The sense of conflict disturbs in both of them the basis of articulation, deranges the procedure of grammar, and imperils the integrity of thought. The representation of the mind is divided into incongruous halves; and the average speaker, being no linguistic expert, finds it difficult to keep the two media apart. Confusion follows. The contours of language grow dim as the two systems collide and intermingle” (p. 23). Roberts’ warnings about the threat of bilingualism are a thin cloak over the assumption that monolingualism is the norm. But even without dire predictions, conceptions of language representation and use derived from the study of bilinguals have been slow to enter the mainstream. The relation between language representation and control (Abutalebi & Green, 2007) and the dynamic nature of grammatical knowledge across the lifespan (Linck, Kroll & Sunderman, 2009) should change the way we conceptualize all language processing, whether monolingual, bilingual or multilingual.

Emmorey, Giezen & Gollan contribute to growing evidence that bilinguals are spared derangement, and in some respects even benefit from their knowledge of multiple languages. Further, these authors remind us once again to check our assumptions. The recent upsurge of research on bilingualism is situated primarily on just one corner of the intersection of bilingualism and bimodality – most studies of bilinguals to date explore the interaction of two spoken languages. Emmorey and colleagues highlight what research ‘on the other side of the street’ can reveal about the cognitive architecture of this crossroads.

One of several unanticipated results to emerge from the study of bimodal bilinguals is that activating two languages is not always costly in terms of cognitive resources. Emmorey, Petrich & Gollan (2012), for example, found that bimodal bilinguals named pictures with ASL-English code-blends just as quickly as in ASL alone, and further, that code-blending was helpful to participants trying to retrieve low-frequency ASL signs. These and related findings are a reminder that competition and inhibition in bilingual language processing reflect a more general property of how the mind manages diverse representations, not a property unique to bilingualism per se.

What is less clear is the degree to which the visual nature of signed languages is shaping the pattern of effects found for hearing bimodal bilinguals. Cognitive Linguistic (Croft & Cruse, 2004) and Grounded Cognition (Barsalou, 2008) theories reject the notion of amodal symbolic representations. While we don't dispute that the bimodality of bimodal bilinguals impacts their language processing, some phenomena might be explained better in terms of the modality, and specifically the visuality, of signed languages. Signed languages provide the ideal articulatory landscape for anchoring embodied cognitive construals (Dudis, 2004; Janzen, 2006; P. Wilcox, 2000, 2004; S. Wilcox, 2004). The fact that dual lexical retrieval is cost-free could reflect a processing advantage for linguistic forms that are richly embedded in their cognitive construals as much as a lack of inhibition between signed and spoken forms. As another example, consider the translation direction asymmetry effect. Hearing unimodal interpreters prefer to interpret from their L2 into their L1 (Seleskovich, 1978) – a pattern that can be partially explained by the Revised Hierarchical Model (RHM) prediction that L2-L1 translation typically engages direct lexical links, while L1-L2 translation relies on conceptual mediation (Kroll & Stewart, 1994). Why then is it common for bimodal bilingual interpreters to report a preference to interpret into their L2 (Nicodemus & Emmorey, 2013)? Nicodemus & Emmorey outline several sensible performance factors that may contribute to the reversed direction asymmetry in bimodal bilinguals, and most recently suggest a disconnect between novice interpreters’ perception and actual performance during L1-L2 interpretation (Nicodemus & Emmorey, in press). Could it be that this preference is also motivated by the interrelatedness of form and meaning in signed languages? Translating from L2 Spanish árbol to L1 English tree may be easy to complete through direct lexical links, but foregoing conceptual mediation when viewing the ASL sign for TREE may be nearly impossible. A surprising result from Baus, Carreiras & Emmorey (2013) would be consistent with this interpretation. They found that highly proficient bimodal bilinguals were slower to translate iconic than non-iconic ASL signs into English. Bimodality alone cannot account for this effect.

Kitty-corner to the hearing signing bilinguals who are the focus of the Emmorey et al. review are deaf signing bilinguals, who provide another reason to be cautious with our assumptions about bimodality. Deaf bilinguals who use two signed languages are clearly not ‘bimodal’ bilinguals. Even deaf signing bilinguals who are proficient in a spoken language may not feel comfortable under the ‘bimodal’ mantle. Multi-modal representations in deaf signing bilinguals engage vision, movement, spatial schemas and affect, but are less likely to rely on auditory experiences. So the unique patterns of code-blending and dual lexical retrieval observed for ‘bimodal bilinguals’ hold little relevance for this corner of bilingual experience. Deaf bilinguals exhibit some widely documented bilingual behaviors, such as cross-language activation during lexical access of printed words (Kubuş, Villwock, Morford & Rathmann, 2014; Morford, Wilkinson, Villwock, Piñar & Kroll, 2011; Ormel, Hermans, Knoors & Verhoeven, 2012) and signs (Hosemann, Altvater-Mackensen, Herrmann & Mani, 2013), as well as sensitivity to implicit L2 language patterns (Anible, Twitchell, Waters, Dussias, Piñar & Morford, 2015). However, without understanding how deaf bilinguals use and manage multiple languages, we cannot hope to distinguish effects of general cognitive processing (phonological forms grounded in the same sensory-motor systems compete for activation and engage inhibition and control processes) from effects of bimodality (phonological forms grounded in different sensory-motor systems become integrated into multi-modal representations).

In one of the only studies to date investigating bilingual processing in deaf unimodal bilinguals, Adam (2013) asked deaf Irish Sign Language (ISL)–British Sign Language (BSL) bilinguals to name pictures in ISL and BSL in order to investigate language switching costs. Signers were slower to name pictures on switch than on stay trials, but switch costs were not asymmetric; that is, there was no greater cost for switching into the dominant language, as has been reported for hearing unimodal bilinguals (Costa & Santesteban, 2004). These results are particularly interesting given Emmorey, Petrich and Gollan's (2014) finding that hearing bimodal bilinguals incur no processing cost for releasing a language from inhibition. While inhibiting the dominant language may be particularly burdensome for hearing unimodal bilinguals, it is possible that high levels of lexical/conceptual overlap between signed languages are at least partially responsible for the lack of L1-L2/L2-L1 asymmetry in switch trials for signed language unimodal bilinguals. Unrelated signed languages use similar forms to express related concepts at a rate higher than chance (Padden, Hwang, Lepic & Seegers, 2015), and Adam (2013) reports that a “pseudo-cognate” subset in his data had shorter response latencies in both signed languages, indicating facilitation of picture naming times for shared conceptual construals.

Are effects of bimodality really effects of ‘bimodality’ or are they effects of richly detailed embodied representations? There is substantial evidence that simulation mechanisms are activated by linguistic input (Barsalou, 2008). A phrase such as “The ranger saw the eagle in the sky” evokes an image of extended wings, causing participants to name a picture of an eagle with wings extended faster than one with wings folded (Zwaan & Madden, 2005). Equivalent effects are evoked in hearing and deaf signers with a single lexical item (Grote & Linz, 2003; Thompson, Vinson & Vigliocco, 2009). But differing results across tasks contribute to ongoing debate about the effects of modality/visuality on signed language processing (Anible, Occhino-Kehoe & Kammann, 2013; Baus et al., 2013; Bosworth & Emmorey, 2010; Emmorey, 2014; Occhino-Kehoe, Anible, Wilkinson & Morford, 2015; Thompson, Emmorey & Gollan, 2005; Thompson, Vinson & Vigliocco, 2010). Regardless of where the discussion eventually leads and what theoretical framework it engenders, we would be wise to step back for a moment and take a look around to see what we may have missed just across the street, or maybe even kitty-corner.

References

Abutalebi, J., & Green, D. (2007). Bilingual language production: The neurocognition of language representation and control. Journal of Neurolinguistics, 20, 242–275.
Adam, R. (2013). Cognate facilitation and switching costs in unimodal bilingualism: British Sign Language and Irish Sign Language. Poster presented at Theoretical Issues in Sign Language Research (TISLR) 11, London, England.
Anible, B., Occhino-Kehoe, C., & Kammann, J. (2013). The interface of phonology and semantics in ASL: An online-processing study. Presented at Theoretical Issues in Sign Language Research (TISLR) 11, London, England.
Anible, B., Twitchell, P., Waters, G. S., Dussias, P. E., Piñar, P., & Morford, J. P. (Published online April 1, 2015). Sensitivity to verb bias in American Sign Language-English bilinguals. The Journal of Deaf Studies and Deaf Education, doi:10.1093/deafed/env007.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
Baus, C., Carreiras, M., & Emmorey, K. (2013). When does iconicity in sign language matter? Language and Cognitive Processes, 28, 261–271.
Costa, A., & Santesteban, M. (2004). Lexical access in bilingual speech production: Evidence from language switching in highly proficient bilinguals and L2 learners. Journal of Memory and Language, 50, 491–511.
Croft, W., & Cruse, D. A. (2004). Cognitive linguistics. Cambridge: Cambridge University Press.
Dudis, P. (2004). Body partitioning and real-space blends. Cognitive Linguistics, 15, 223–238.
Emmorey, K. (2014). Iconicity as structure mapping. Philosophical Transactions of the Royal Society B, 369, 20130301.
Emmorey, K., Giezen, M. R., & Gollan, T. H. Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism: Language and Cognition. doi:10.1017/S1366728915000085.
Emmorey, K., Petrich, J. A. F., & Gollan, T. H. (2012). Bilingual processing of ASL–English code-blends: The consequences of accessing two lexical representations simultaneously. Journal of Memory and Language, 67, 199–210.
Emmorey, K., Petrich, J. A. F., & Gollan, T. H. (2014). Cost free switches but not where you expect them: Evidence from ASL-English bilinguals. Presented at the 54th Annual Meeting of the Psychonomic Society, Toronto.
Grote, K., & Linz, E. (2003). The influence of sign language iconicity on semantic conceptualization. In Müller, W. G., & Fischer, O. (Eds.), From sign to signing, pp. 23–40. Amsterdam: John Benjamins.
Hosemann, J., Altvater-Mackensen, N., Herrmann, A., & Mani, N. (2013). Cross-modal language activation. Does processing a sign (L1) also activate its corresponding written translations (L2)? Presented at the 11th Theoretical Issues in Sign Language Research Conference, London.
Janzen, T. (2006). Visual communication: Signed language and cognition. In Kristiansen, G., Achard, M., Dirven, R., & Ruiz de Mendoza Ibáñez, F. J. (Eds.), Cognitive Linguistics: Current Applications and Future Perspectives, pp. 359–377. Berlin/New York: Mouton de Gruyter.
Kroll, J. F., & Stewart, E. (1994). Category interference in translation and picture naming: Evidence for asymmetric connections between bilingual memory representations. Journal of Memory and Language, 33, 149–174.
Kubuş, O., Villwock, A., Morford, J. P., & Rathmann, C. (Published online January 27, 2014). Word recognition in deaf readers: Cross-language activation of German Sign Language and German. Applied Psycholinguistics, doi:10.1017/S0142716413000520.
Linck, J. A., Kroll, J. F., & Sunderman, G. (2009). Losing access to the native language while immersed in a second language: Evidence for the role of inhibition in second language learning. Psychological Science, 20, 1507–1515.
Morford, J. P., Wilkinson, E., Villwock, A., Piñar, P., & Kroll, J. F. (2011). When deaf signers read English: Do written words activate their sign translations? Cognition, 118, 286–292.
Nicodemus, B., & Emmorey, K. (in press). Directionality in ASL-English interpreting: Quality and accuracy in L1 and L2. Interpreting, 17.
Nicodemus, B., & Emmorey, K. (2013). Direction asymmetries in spoken and signed language interpreting. Bilingualism: Language and Cognition, 16, 624–636.
Occhino-Kehoe, C., Anible, B., Wilkinson, E., & Morford, J. P. (2015). Iconicity is in the eye of the beholder: How language experience affects perceived iconicity. Unpublished manuscript.
Ormel, E., Hermans, D., Knoors, H., & Verhoeven, L. (2012). Cross-language effects in written word recognition: The case of bilingual deaf children. Bilingualism: Language and Cognition, 15, 288–303.
Padden, C., Hwang, S. O., Lepic, R., & Seegers, S. (2015). Tools for language: Patterned iconicity in sign language nouns and verbs. Topics in Cognitive Science, 7, 81–94.
Roberts, M. H. (1939). The problem of the hybrid language. The Journal of English and Germanic Philology, 38, 23–41.
Seleskovich, D. (1978). Interpreting for international conferences. Washington, DC: Pen and Booth.
Thompson, R. L., Emmorey, K., & Gollan, T. (2005). Tip-of-the-fingers experiences by ASL signers: Insights into the organization of a sign-based lexicon. Psychological Science, 16, 856–860.
Thompson, R. L., Vinson, D. P., & Vigliocco, G. (2009). The link between form and meaning in American Sign Language: Lexical processing effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 550–557.
Thompson, R. L., Vinson, D. P., & Vigliocco, G. (2010). The link between form and meaning in British Sign Language: Effects of iconicity for phonological decisions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36, 1017–1027.
Wilcox, P. P. (2000). Metaphor in American Sign Language. Washington, DC: Gallaudet University Press.
Wilcox, P. P. (2004). A cognitive key: Metonymic and metaphorical mappings in ASL. Cognitive Linguistics, 15, 197–222.
Wilcox, S. (2004). Cognitive iconicity: Conceptual spaces, meaning, and gesture in signed languages. Cognitive Linguistics, 15, 119–147.
Zwaan, R. A., & Madden, C. J. (2005). Embodied sentence comprehension. In Pecher, D., & Zwaan, R. (Eds.), Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thought, pp. 224–245. New York: Cambridge University Press.