Emmorey, Giezen and Gollan address the fascinating question of what can be learnt about language, cognition and the brain from the unique group of people who have grown up learning both a signed and a spoken language. The focus of their review is hearing individuals – referred to as hearing bimodal bilinguals. The review presents an excellent overview of research in this field and highlights the unique insights that this population can provide.
However, the review leaves open an important question: to what extent can research with hearing bimodal bilinguals inform our understanding of the consequences of sign–speech bilingualism in deaf people? The answer is probably less than we would wish.
The first issue to consider is one of terminology. Emmorey et al. refer to deaf bilinguals as ‘deaf bimodal bilinguals’, just as the term is applied to ‘hearing bimodal bilinguals’. However, the use of the term ‘bimodal’ here is confusing. Hearing individuals who are bilingual in sign and speech do indeed access their languages via two different modalities (here meaning senses): primarily auditory for speech and visual for sign. For those born deaf, however, access to both languages is through the visual modality. Thus, the application of the term ‘bimodal’ to this group appears misleading. For the sake of clarity in the field, it is perhaps more appropriate to refer to these individuals as ‘deaf unimodal sign–speech bilinguals’ or, for precision, ‘deaf sign language and spoken/written language bilinguals’.
The use of these terms may be rejected by some in the field who prefer the term sign–print bilinguals (Piñar, Dussias & Morford, 2011; Kubus, Villwock, Morford & Rathmann, 2014). This term, however, makes the strong assumption that written text exists in isolation from what it actually represents – speech. Yet there is ample evidence that deaf signers make use of elements that are recognisably derived from spoken language.
As Emmorey et al. comment in the final section of their paper, “. . . mouthings from spoken language words . . . are often produced silently and simultaneously with signs.” The mouthings referred to are mouth actions (silent – not whispered) produced by deaf signers which represent words from the surrounding dominant spoken language. For the most part, the semantics of the mouthing and the sign are the same (Bank, Crasborn & van Hout, 2011). This phenomenon has been observed and studied in a large number of sign languages, beginning with the work of Vogt-Svendsen on Norwegian Sign Language (Vogt-Svendsen, 1981, 2001), and including studies of non-Western sign languages such as Adamorobe Sign Language (Ghana) (Nyst, 2007) and Inuit Sign Language (Schuit, 2013), as well as studies of British Sign Language (Sutton-Spence & Day, 2001; Sutton-Spence, 2007), Irish Sign Language (Mohr, 2012), German Sign Language (Hohenberger & Happ, 2001), and Sign Language of the Netherlands (Schermer, 1990). Although there has been very little study of mouthing in ASL, Nadolske and Rosenstock (2007) have demonstrated that the use of mouthing in ASL is comparable to that found in other sign languages. Indeed, the only sign language which has been reported not to make use of mouthings is Kata Kolok, a sign language used by a village community on Bali (de Vos & Zeshan, 2012).
There is ongoing debate about the linguistic status of mouthings. Many studies describe mouthings as part of the sign language lexicon – i.e., the lexical representations of signs include both oral and manual information (Vogt-Svendsen, 2001; van de Sande & Crasborn, 2009); other researchers have argued the opposite: mouthings and signs are represented and accessed independently and reflect knowledge of two languages (Ebbinghaus & Hessmann, 2001; Vinson, Thompson, Skinner, Fox & Vigliocco, 2010). Using fMRI with deaf native signers, we have reported that BSL signs with speech-like mouth actions showed greater superior temporal activation, whereas signs made with nonspeech-like mouth actions showed more activation in posterior and inferior temporal regions (Capek et al., 2008). Thus, the brain does appear to care about the status of mouthings used in signed languages. This finding suggests that consideration of mouthings, vis-à-vis code-blends, is crucial to any discussion of bilingualism in a signed and a spoken language, especially in relation to deaf signers.
Evidence for the influence of elements of speech on signed languages also comes from fingerspelling. The pattern of ‘reductions’ in fingerspelling by skilled signers indicates that they do not reduce fingerspellings randomly or arbitrarily. Instead, their reductions reflect speech rather than simply orthography. For example, the fingerspelled name CHARLES is more likely to be reduced to -C-H-, preserving the letters that together represent the initial speech sound /tʃ/, than to the single initial letter -C- (Sutton-Spence, 1994).
Research which considers deaf individuals as bilinguals is much rarer than studies of their hearing siblings, which reflects the relative sizes of the two populations. An additional factor making research in this field difficult is, as Emmorey et al. point out, the great variability in spoken (and signed) language proficiency within the deaf population: in spoken language comprehension (lipreading) (e.g., Mohammed, Campbell, MacSweeney, Barry & Coleman, 2006) and production, and indeed in accessing a spoken language via text – reading (e.g., Mayberry, del Giudice & Lieberman, 2011). However, this variability should not be a reason to ignore the role of spoken language when considering deaf sign–speech bilinguals.
Although the literature is mixed, at least some studies with deaf adults and children indicate a role for speech phonology when deaf people read text (see Mayberry et al., 2011). This mixed picture suggests that there may be many routes to successful reading for a deaf person. That some deaf people do not appear to make use of speech phonology in their skilled reading is not, we argue, cause to ignore the role that awareness of speech structure may play in the development of reading skills, or indeed the enduring role it may play in reading and reading-related skills for some deaf people (e.g., MacSweeney et al., 2009; Emmorey, Weisberg, McCullough & Petrich, 2013).
Whilst studies of hearing bimodal bilinguals can provide great insights into language, cognition and the brain, future research which considers their deaf siblings as ‘sign language–spoken language’ bilinguals may prove richer still.