
The time course of cross-language activation in deaf ASL–English bilinguals*

Published online by Cambridge University Press:  21 October 2015

JILL P. MORFORD*
Affiliation:
Department of Linguistics, University of New Mexico, USA; NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
CORRINE OCCHINO-KEHOE
Affiliation:
Department of Linguistics, University of New Mexico, USA; NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
PILAR PIÑAR
Affiliation:
Department of World Languages and Cultures, Gallaudet University, USA; NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
ERIN WILKINSON
Affiliation:
Department of Linguistics, University of Manitoba, Canada; NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
JUDITH F. KROLL
Affiliation:
Department of Psychology, Pennsylvania State University, USA; NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
*
Address for correspondence: Jill P. Morford, Department of Linguistics, MSC03 2130, University of New Mexico, Albuquerque, NM 87131-0001, USA. morford@unm.edu

Abstract

What is the time course of cross-language activation in deaf sign–print bilinguals? Prior studies demonstrating cross-language activation in deaf bilinguals used paradigms that would allow strategic or conscious translation. This study investigates whether cross-language activation can be eliminated by reducing the time available for lexical processing. Deaf ASL–English bilinguals and hearing English monolinguals viewed pairs of English words and judged their semantic similarity. Half of the stimuli had phonologically related translations in ASL, but participants saw only English words. We replicated prior findings of cross-language activation despite the introduction of a much faster rate of presentation. Further, the deaf bilinguals were as fast as or faster than hearing monolinguals despite the fact that the task was in their second language. The results allow us to rule out the possibility that deaf ASL–English bilinguals only activate ASL phonological forms when given ample time for strategic or conscious translation across their two languages.

Type
Research Article
Copyright
Copyright © Cambridge University Press 2015 

One of our teenagers recently walked in from a hard workout and remarked off-hand, “Hey, Mom, when you say you have sore muscles in German, you’re really saying your muscles have a hangover.” Muskelkater in German is indeed a compound word, consisting of the base forms for muscle and hangover. This reflection by a German–English bilingual reveals the complex relationship between the two languages of a bilingual. Generally, bilinguals are not conscious of words from their two languages competing for recognition in a monolingual context, but converging evidence indicates that bilingual lexical processing is typically language non-selective (Dijkstra & Van Heuven, 2002; Jared & Kroll, 2001; Marian & Spivey, 2003; Thierry & Wu, 2007; Van Wijnendaele & Brysbaert, 2002). In other words, whether speaking or listening, reading or writing, bilinguals activate word forms in both languages. However, the degree of activation of words across the two languages depends on both lexical form and meaning similarities. Cognates, or words that have similar form and meaning, as in Muskel and muscle, show more robust patterns of cross-language activation than translation equivalents that don't share a similar form, such as Kater and hangover (De Groot & Nas, 1991; Lemhöfer, Dijkstra, Schriefers, Baayen, Grainger & Zwitserlood, 2008).

Evidence for non-selective activation comes from a variety of sources. For the purposes of this article, we focus on studies that investigate second language (L2) visual word recognition in adult bilinguals. In one of the earliest studies to investigate non-selective access, Nas (1983) asked Dutch–English bilinguals to perform a lexical decision task in their second language, English. Participants were slower and made more errors rejecting non-words if they were pseudohomophones of Dutch words. In this particular instance, the non-target language was activated due to shared orthographic and phonological lexical forms across the two languages. In other studies, translation equivalents sharing little form similarity, such as Dutch paard and English horse, can also be activated cross-linguistically (De Groot & Nas, 1991), but these effects disappear in a masked priming protocol. The most robust effects of cross-language activation are found when both form and meaning align in the two languages, as in the case of cognates. Lemhöfer and colleagues (2008) compared within-language and cross-language effects on English word recognition in a progressive demasking paradigm with native speakers of French, German and Dutch. Of the cross-language factors, they found that cognate status, but not L1 orthographic neighborhood (number of high- and low-frequency neighbors, total number and summed frequency of neighbors), accounted for a significant amount of variation in response time.

Several theoretical models have been proposed to represent the structure of the bilingual lexicon and to make predictions about how bilinguals process and produce language. These models are, for the most part, based on bilinguals who use two spoken languages, and built on the assumption that there is form overlap between the two languages of a bilingual. A more stringent test of whether cross-language activation is typical of bilingual language processing can be made by investigating bilinguals whose languages have little or no form similarity because they are produced in two different modalities, as is the case for spoken and signed languages. While signed languages have phonological forms (see footnote 1), they do not share sensory or motor expressions with spoken language phonological forms. Writing systems for signed languages are not widespread, so orthographic similarities between signed and spoken languages are also lacking. Several recent studies have found that cross-language activation occurs even in the absence of phonological or orthographic overlap in the signed and spoken languages used by hearing and deaf signing bilinguals (Emmorey, Borinstein, Thompson & Gollan, 2008; Kubuş, Villwock, Morford & Rathmann, 2015; Morford, Kroll, Piñar & Wilkinson, 2014; Morford, Wilkinson, Villwock, Piñar & Kroll, 2011; Ormel, Hermans, Knoors & Verhoeven, 2012; Shook & Marian, 2012). In light of this new evidence, current bilingual models need to be re-evaluated.
In this article, we address how orthographic word forms activate phonological word forms cross-linguistically in bilinguals whose languages have no phonological or orthographic overlap because one of their languages is a written language and the other a signed language. Specifically, we investigate how the time course of cross-language activation in these bilinguals can inform current bilingual lexical processing models.

We have coined the term sign–print bilinguals to refer to deaf bilinguals who use a signed language as the primary language for face-to-face communication, and the written form of a spoken language for reading and writing (Morford et al., 2011, 2014; Piñar, Dussias & Morford, 2011). This term highlights several unique characteristics of this population. First, it distinguishes deaf signing bilinguals from hearing signing bilinguals, referred to as bimodal bilinguals (Bishop & Hicks, 2005; Johnson, Watkins & Rice, 1992). The term bimodal emphasizes the relationship of auditory-vocal and visual-manual representations, which is critical to understanding how hearing bilinguals relate their knowledge of speech and signs, but is less relevant for deaf bilinguals. Second, the term sign–print bilinguals emphasizes language dominance in the signing modality by placing the reference to sign in first position. The term print, as opposed to spoken language, is selected to highlight the fact that the visual form of the spoken language is often the primary access to the second language (Hoffmeister & Caldwell-Harris, 2014; Kuntze, 2004; Supalla, Wix & McKee, 2001). As with any bilingual population, sign–print bilinguals are heterogeneous with respect to the age of first exposure to, and the fluency attained in, both the dominant and non-dominant languages, and language dominance is likely to fluctuate across the lifespan. The term sign–print bilinguals is not intended to rule out the possibility of effects of residual auditory experience; indeed, sign–print bilinguals may rely on knowledge of articulatory and even acoustic patterns in the spoken language.
In sum, this term is selected to highlight important characteristics of the population under study, but not to make theoretical claims that these characteristics are the only factors that influence language representation and use for deaf bilingual signers who have knowledge of a spoken language.

For sign–print bilinguals, evidence of non-selective access comes from tasks demonstrating activation of phonological forms in a signed language while participants are processing orthographic word forms from a spoken language. Even in a monolingual written task that can be completed without reference to the participant's signed language, bilingual signers are influenced by the phonological form of the sign language translation equivalents of the written word stimuli. The relationship between print and sign cannot be explained by mappings between shared or overlapping phonological or orthographic systems, as is the case for hearing bilinguals who know two spoken languages. Thus, this result raises important questions about what precisely is activated when deaf bilinguals read written words. More specifically, how are written forms from a spoken language related to sign language knowledge?

If we take hearing bilinguals as a starting point for predicting the relationship between print and signs, it would be logical to expect orthographic word forms to activate the associated sub-lexical and lexical phonological forms of the spoken language, since the orthography is designed to capture these phonological regularities. Subsequently, those phonological forms, particularly the lexical phonological forms, would activate lexical phonological forms in the signed language through lateral connections, as well as through top-down activation from shared semantics. This path of activation would be consistent with the BIA+ model of bilingual lexical access (Dijkstra & Van Heuven, 2002; see Figure 1). If this model is correct, then deaf bilinguals, like hearing bilinguals, should be somewhat slower than monolinguals during lexical processing tasks (Bijeljac Babic, Biardeau & Grainger, 1997), in part due to the additional processing costs incurred by activation of the non-target language. Further, Dijkstra & Van Heuven (2002: 183–4) predict that “an absence of cross-linguistic phonological and semantic effects for different words could occur if task demands allow responding to faster codes (for instance, orthographic L1 codes), giving slower codes no chance to affect the response times.” In other words, tasks allowing faster processing of the target stimulus should be less likely to show an influence of cross-language activation than tasks requiring more protracted processing.

Figure 1. Three alternative models of bilingual word recognition in deaf sign–print bilinguals based on the BIA+ model (Dijkstra & Van Heuven, 2002).

One of the unique characteristics of sign–print bilinguals concerns the relationship between orthographic and phonological representations. Orthographic representations are typically restricted to the spoken language, while phonological representations are much richer for the signed language. Further, the L2 orthography will not have a regular and predictable mapping to the L1 signed language phonology, particularly at a sublexical level. Traditional studies of reading in the deaf population have assumed that orthographic forms activate spoken phonological forms just as in hearing readers (e.g., Colin, Magnan, Ecalle & Leybaert, 2007; Leybaert, 1993; Perfetti & Sandak, 2000). Following this assumption, we might modify the BIA+ model to specify that signed phonological forms are activated only subsequent to spoken phonological forms (see Figure 1: Alternative 1). An even more extreme interpretation of this view would hold that signed languages are so different from spoken languages in their phonological representations that only conscious translation of English orthographic forms into ASL could explain the activation of ASL phonological forms, with no lateral connections between the phonological lexical forms of the two languages.

Alternative 1 could be criticized on the grounds that deaf bilinguals are unlikely to have rich spoken language phonological representations due to their limited auditory experience. If spoken language phonological forms are eliminated from this model, then the possibility arises that deaf bilinguals could map orthographic lexical forms directly to semantics without activating phonological word forms at all (see Figure 1: Alternative 2). This alternative model predicts that deaf bilinguals might be significantly faster to process written words than hearing monolinguals or bilinguals, due to less diffusion of activation through the lexicon and less competition among lexical alternatives during the course of word recognition. However, if direct mappings between orthographic word forms and semantics are the only associations impacting lexical access, then any facilitation or inhibition due to activation of phonological forms from the signed language would be post-lexical in nature. For instance, Morford and colleagues (2011, 2014) have shown that phonological similarity between the sign translations of word pairs presented in print affects the reaction times of sign–print bilinguals. Specifically, these bilinguals are slower at making semantic similarity judgments of English word pairs that are semantically unrelated but whose sign translations are phonologically related than when the sign translations are also phonologically unrelated. Correspondingly, reaction times to semantically related pairs are faster when the sign translations are also phonologically related. A model that posits direct mappings between orthographic and semantic forms, bypassing phonology completely, would only predict this type of evidence if participants were able to activate the ASL lexical forms subsequent to accessing shared semantic representations.

An alternative to either of these views has been put forward by Ormel (2008; cf. Ormel et al., 2012), who proposes vertical connections between lexical orthographic form representations and semantics, and lateral connections between lexical orthographic forms and lexical phonological forms in the signed language (see Figure 1: Alternative 3). In a separate study, Hermans, Knoors, Ormel & Verhoeven (2008) propose a model of vocabulary development in deaf bilingual signers in which L2 orthographic lexical forms are initially mapped directly to L1 phonological lexical forms (signed phonological forms). In order to comprehend written word forms, signers in early stages of development mediate their comprehension of written words by activating signed phonological forms. Subsequently, deaf bilingual signers acquire L2 spoken phonological forms and learn to associate these forms with meaning. They also develop direct access to meaning from L2 orthographic forms. Not until a third and final stage in development do deaf children create associations between L2 phonological and L2 orthographic lexical forms. The implication of this model for adult sign–print bilinguals is that although L2 orthographic lexical forms will activate L2 phonological lexical forms, the most entrenched associations are between L2 orthographic lexical forms and semantics, as well as L1 signed phonological forms (Kubuş et al., 2015; Morford et al., 2014).
If adult sign–print bilinguals have weaker associations from print to spoken phonological forms (i.e., L2 phonological forms) than to signed phonological forms (i.e., L1 phonological forms) as a result of their acquisition history, this slightly modified version of Ormel's model makes yet another set of predictions about patterns of activation from print to sign. Deaf bilinguals should be as fast at responding to L2 orthographic forms as hearing monolinguals are at responding to L1 orthographic forms, by virtue of dampened activation of spoken phonological forms and reduced competition among lexical alternatives. However, cross-language phonological priming and inhibition effects should be quite robust, occurring soon after uptake of the input, since on this model L2 orthographic forms became associated with L1 phonological forms earlier in the development of the lexicon than with L2 phonological forms.

The current study explores these possible configurations of bilingual lexical access for deaf sign–print bilinguals. While the predictions are illustrated as variations on the BIA+ model, our claims are not tied to a PDP architecture per se. Similar alternatives could be illustrated with a model that uses a connectionist architecture, such as Shook & Marian's (2013) Bilingual Language Interaction Network for Comprehension of Speech (BLINCS). In BLINCS, the relative strengths of L2 orthography to L1 phonology vs. L2 phonology mappings would emerge across training epochs, due to sparse L2 phonological representations during L2 orthographic exposure.

Our study compares the overall speed and accuracy of decisions to English written word forms between hearing English monolinguals and deaf ASL–English bilinguals, as a diagnostic for the activation of phonological forms. Additionally, by manipulating the time course of a semantic similarity judgment task in which participants see pairs of English words, we explore whether deaf bilinguals exhibit cross-language phonological priming effects at both shorter and longer time courses. Guo, Misra, Tam & Kroll (2012) have pointed out that in a variety of bilingual lexical processing tasks, evidence of activation of the L1 phonological form is associated with a long SOA, while studies using a shorter SOA have not found evidence of L1 phonological form activation, particularly in highly skilled bilinguals (Sunderman & Kroll, 2006; Talamas, Kroll & Dufour, 1999), suggesting that, at least at higher levels of L2 proficiency, word recognition in the L2 need not be mediated by L1 lexical form activation.

Most studies documenting cross-language activation, including Guo et al. (2012), present word forms from both languages (De Groot & Nas, 1991; Nas, 1983). With explicit activation of both languages, even masked priming has been shown to elicit cross-language activation (Bijeljac Babic et al., 1997; Brysbaert, Van Dyck & Van de Poel, 1999). But studies that have not presented any stimuli in the non-target language, as in our study, have always used long SOAs. Wu & Thierry (2010) used SOAs of 1000 to 1200 ms with a monolingual semantic similarity judgment task (cf. Thierry & Wu, 2004, 2007). Similarly long SOAs were used in studies requiring bilinguals to complete a monolingual letter-counting task (Martin, Costa, Dering, Hoshino, Wu & Thierry, 2012) and to decide whether a target was a geometric figure or a word (Wu, Cristino, Leek & Thierry, 2013). Prior studies specifically looking at cross-language activation in deaf sign–print bilinguals have also used an SOA of 1000 ms (Kubuş et al., 2015; Morford et al., 2011, 2014). In the current study, we shortened this SOA to 750 ms in the long SOA condition, and, following Guo et al. (2012), we selected 300 ms for the short SOA condition. Our prediction is that at the long SOA we will replicate previous lexical co-activation effects for sign–print bilinguals, since there would be sufficient time for the ASL L1 translations to become activated regardless of the architecture of the bilingual lexicon.
In the short SOA condition, if English orthographic forms are not directly associated with ASL phonological forms (Alternatives 1 & 2), we predict no L1 phonology co-activation effects. By contrast, a direct mapping of L2 orthographic forms to L1 phonological forms (Alternative 3) predicts that effects of cross-language activation would be detected even with a much shorter SOA. No prior studies have attempted to uncover effects of cross-language activation using a monolingual task with such a short SOA. If deaf sign–print bilinguals complete the task as quickly as monolinguals and nevertheless show the cross-language activation effects, this will provide initial evidence for direct mappings between L2 orthography and L1 phonology in sign–print bilinguals.

Method

Participants

There were three groups of participants. The first group consisted of 29 Deaf Balanced Bilinguals, a group of deaf signers who were highly proficient in ASL and English. The second group consisted of 24 Deaf ASL-Dominant Bilinguals, a group of deaf signers who were more proficient in ASL than in English. The third group consisted of 43 Hearing Monolinguals, who had acquired English from birth, and had no knowledge of ASL.

The deaf bilingual participants were recruited from Gallaudet University, and were paid $20/hour for their participation in the experiment. Ninety participants were recruited through fliers targeting students who consider themselves bilingual in ASL and English. Data from 37 participants were eliminated: 5 due to equipment failure or experimenter error, 9 due to failure to complete the protocol or follow directions, 4 due to onset of deafness after age 2, 9 due to low accuracy (less than 80% correct) on the experimental task, and 10 due to low accuracy (less than 45% correct; see footnote 2) on the ASL assessment task. ASL proficiency was assessed with the American Sign Language Sentence Repetition Task (ASL-SRT; Hauser, Paludnevičienė, Supalla & Bavelier, 2008). The remaining 53 participants were grouped into Balanced and ASL-dominant bilingual groups based on their performance on the Passage Comprehension subtest of the Woodcock–Johnson III Tests of Achievement (WJ), which was used to assess English proficiency. Twenty-nine participants scored 35 (grade equivalent 8.9) or above on the WJ and were assigned to the Balanced Bilingual group. Twenty-four participants scored 34 or below and were assigned to the ASL-dominant Bilingual group. Table 1 lists the number of females, the mean age, and the mean ASL-SRT and WJ scores for each group.

Table 1. Mean (sd) and Range of Background Characteristics of Deaf Bilinguals

The Hearing Monolingual group was recruited from the undergraduate psychology pool at Penn State University, and completed the experiment for course credit. The average age of the 43 participants (37 female) was 19 (range 18–21). Criteria for inclusion in the study were being a native speaker of English, having no prior knowledge of ASL or any degree of proficiency in another signed language, and having no history of hearing or speech disorders. All hearing participants also reported being born in the US and speaking English as the primary language in the home. They rated their reading ability in English on a scale of 1 to 10; the average self-rating was at ceiling at 9.79 (SD = .51).

Materials

The materials included 440 English word pairs with phonologically related (n = 220) and phonologically unrelated (n = 220) translation equivalents in ASL. Hearing native speakers of English rated the word pairs on a scale from 1 (semantically unrelated) to 7 (semantically related). Word pairs with mean ratings below 2.8 were classified as semantically unrelated (n = 170), and pairs with mean ratings above 4.2 were classified as semantically related (n = 196). Phonological similarity was defined as sharing a minimum of two formational parameters: handshape and location, handshape and movement, or location and movement. English word pairs with phonologically related and phonologically unrelated ASL translations did not differ in frequency or length (see Table 2).
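The rating-based classification described above can be sketched as follows. This is an illustrative reconstruction, not the authors' stimulus-preparation code; the function name and the sample ratings are ours, and only the 2.8 and 4.2 thresholds come from the text.

```python
def classify_pair(mean_rating):
    """Classify a word pair by its mean semantic-relatedness rating (1-7 scale).

    Thresholds follow the text: below 2.8 -> unrelated, above 4.2 -> related.
    Pairs in the intermediate band were evidently not assigned to either
    condition (170 + 196 of the 440 pairs were classified)."""
    if mean_rating < 2.8:
        return "semantically unrelated"
    elif mean_rating > 4.2:
        return "semantically related"
    else:
        return "excluded"

# Hypothetical mean ratings for illustration; candy-bored is an unrelated
# pair mentioned later in the Procedure section.
ratings = {("candy", "bored"): 1.4, ("doctor", "nurse"): 6.1, ("bread", "tree"): 3.5}
labels = {pair: classify_pair(r) for pair, r in ratings.items()}
```

Phonological relatedness of the ASL translations was coded separately, by the two-parameter overlap criterion given above.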

Table 2. Lexical characteristics of the English stimuli by condition

Procedure

Participants first completed a background questionnaire. Deaf participants were then given the language assessment tasks. The experimental task was programmed in E-Prime Professional 2.0 (Psychology Software Tools, Inc., Sharpsburg, PA). Ten practice trials with feedback on accuracy preceded the experiment. Participants had to obtain 80% accuracy before proceeding to the experimental trials. Two blocks of experimental trials were presented. One block used a short stimulus onset asynchrony (SOA, 300 ms) and one block used a long SOA (750 ms). The order of blocks and the assignment of stimulus lists to blocks were counterbalanced across participants such that half of the participants completed the short SOA block prior to the long SOA block, and vice versa. The practice trials were programmed to match the SOA of the first block that participants completed. Each participant responded only once to a target word pair, either in the short or the long SOA condition. Experimental trials began with a 500 ms fixation cross. In the short SOA condition, the first stimulus word was presented for 250 ms followed by a 50 ms interstimulus interval (ISI). The second stimulus word was presented and remained on the screen until participants responded. In the long SOA condition, the first stimulus word was presented for 250 ms followed by a 500 ms ISI. The second word remained on the screen until participants responded. Participants were asked to determine whether the English word pairs were semantically related or unrelated. Participants responded by selecting a keyboard button labeled ‘yes’ with their dominant hand, or ‘no’ with their non-dominant hand. Because semantically related and unrelated trials were analyzed separately, we did not counterbalance responses across hands.
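The trial timing just described can be summarized in a short sketch. This is an illustrative reconstruction in Python, not the actual E-Prime script; the function name and event labels are ours. Note that the SOA is the first word's duration plus the ISI, so the two SOA conditions differ only in the blank interval.

```python
FIXATION_MS = 500  # fixation cross duration (from the text)
WORD1_MS = 250     # first stimulus word duration (same in both conditions)

def trial_schedule(soa_ms):
    """Return the event sequence for one trial as (event, duration_ms) pairs.

    SOA = word1 duration + ISI, so ISI = soa_ms - 250:
    300 ms SOA -> 50 ms ISI; 750 ms SOA -> 500 ms ISI."""
    isi_ms = soa_ms - WORD1_MS
    return [
        ("fixation", FIXATION_MS),
        ("word1", WORD1_MS),
        ("blank_isi", isi_ms),
        ("word2_until_response", None),  # word 2 stays up until the key press
    ]

short_trial = trial_schedule(300)
long_trial = trial_schedule(750)
```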

To minimize the influence of lexical variation in ASL on the experimental results, deaf participants translated the English targets into ASL after completing the experimental task. The ASL translations were used to eliminate trials on which participants’ signs did not conform to the condition criteria of phonological relatedness. For example, for the English target candy, most signers produced a sign in which the index finger contacts the cheek and then is rotated. This sign is closely phonologically related to the ASL translation of the English word bored, produced with the same handshape and movement but located at the side of the nose. This pair of targets was included in the semantically unrelated but phonologically related condition (candy-bored). However, several signers produced the polysemous ASL sign commonly glossed sugar in response to the English target candy. This sign is produced at the chin with all fingers extended instead of just the index, and thus is not phonologically related to the ASL translation of the English word bored. For the specific participants who produced the sugar variant in response to candy in the translation task, we eliminated the candy-bored trial. Response times more than two standard deviations above or below each participant's mean were also excluded. This resulted in the exclusion of 4.9% of the responses of the Balanced Bilingual group, 8.7% of the responses of the ASL-dominant Bilingual group, and 4.6% of the responses of the Hearing Monolingual group.
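The per-participant trimming step can be sketched as follows; this is a minimal illustration of the mean ± 2 SD criterion, not the authors' analysis code, and the data are hypothetical.

```python
from statistics import mean, stdev

def trim_rts(rts, n_sd=2.0):
    """Keep only RTs within n_sd sample standard deviations of the mean.

    Applied separately to each participant's response times, per the text."""
    m, s = mean(rts), stdev(rts)
    lower, upper = m - n_sd * s, m + n_sd * s
    return [rt for rt in rts if lower <= rt <= upper]

# Hypothetical RTs (ms) for one participant, with one very slow response.
rts = [700] * 10 + [750] * 9 + [2000]
trimmed = trim_rts(rts)  # the 2000 ms response falls outside mean +/- 2 SD
```

Note that with a long-tailed RT distribution the trimming bounds themselves are pulled upward by the outliers, which is one reason such criteria are applied per participant rather than over the pooled data.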

Results

Response time and accuracy data were analyzed using 3-way mixed ANOVAs across participants (F1) and items (F2), with repeated measures on the within-subjects variables Phonology (related vs. unrelated translation equivalents in ASL) and Stimulus Onset Asynchrony (SOA: 300 ms vs. 750 ms). The between-subjects variable was Group (Balanced bilinguals, ASL-dominant bilinguals, Hearing monolinguals). An analysis of the effect of the type of phonological relationship on semantic similarity judgments, i.e., whether the translation equivalents overlapped in handshape, location or movement, is reported in Occhino-Kehoe et al. (in preparation). We analyzed the semantically unrelated and semantically related conditions separately, since the former required a ‘no’ response and the latter a ‘yes’ response.
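The F1/F2 distinction above amounts to aggregating the same trial-level RTs two different ways before running the ANOVA: once to one mean per participant per design cell (F1), and once to one mean per item per cell (F2). A minimal sketch of that aggregation step, with hypothetical trial records and field names of our own choosing:

```python
from collections import defaultdict

def cell_means(trials, unit_key):
    """Average RT per (unit, phonology, soa) cell.

    unit_key is "subject" for the by-participants (F1) analysis or
    "item" for the by-items (F2) analysis."""
    sums = defaultdict(lambda: [0.0, 0])
    for t in trials:
        cell = (t[unit_key], t["phonology"], t["soa"])
        sums[cell][0] += t["rt"]
        sums[cell][1] += 1
    return {cell: total / n for cell, (total, n) in sums.items()}

# Two hypothetical trials from one participant (item names illustrative).
trials = [
    {"subject": "s1", "item": "candy-bored", "phonology": "related", "soa": 300, "rt": 800},
    {"subject": "s1", "item": "move-glass", "phonology": "related", "soa": 300, "rt": 820},
]
f1_means = cell_means(trials, "subject")  # one mean per participant per cell
f2_means = cell_means(trials, "item")     # one mean per item per cell
```

The ANOVAs reported below are then computed over these cell means, which is why the degrees of freedom differ between the F1 and F2 analyses.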

Semantically unrelated condition

In the semantically unrelated condition, there was a significant main effect of Group on response time, F1 (2, 93) = 4.04, p < .05, ηp² = .08; F2 (2, 336) = 237.99, p < .001, ηp² = .586. Deaf balanced bilinguals responded significantly faster (779 ms) than the hearing monolinguals (892 ms; p < .01). Deaf ASL-dominant bilinguals (848 ms) were slower than the balanced bilinguals but faster than the monolinguals; however, they did not differ significantly from either of the other groups. The effect of Group was modified by an interaction of Group and Phonology, F1 (2, 93) = 6.07, p < .01, ηp² = .12; F2 (2, 336) = 3.22, p < .05, ηp² = .019. Replicating the effects of Morford et al. (2011, 2014), balanced and ASL-dominant bilinguals were significantly slower when responding to semantically unrelated English word pairs with phonologically related (793 ms and 863 ms, respectively) than with phonologically unrelated (764 ms and 833 ms, respectively) ASL translations. The monolingual group showed no effect of Phonology (892 ms vs. 891 ms in the related and unrelated conditions, respectively). Note that the two bilingual groups were faster in both phonology conditions than the monolinguals, suggesting that they are faster even when a conflict between semantic and phonological relatedness is slowing their performance.

There was no significant main effect of SOA. To test the hypothesis that English print initially activates only English phonological forms, and only subsequently activates ASL phonological forms, we conducted paired comparisons to determine whether the effect of Phonology for the deaf bilinguals was present at the long but not the short SOA. In the subject analysis, the effect of Phonology was significant at both SOAs, but in the item analysis it was significant only at the long SOA (see Tables 3a and 3b).

Table 3a. Mean RT (sd) in ms by Group at Short and Long SOAs in the Semantically Unrelated Condition

Table 3b. Effect of Phonology at Short and Long SOAs in the Semantically Unrelated Condition

For the semantically unrelated items, there was no significant effect of Group on accuracy. There was, however, a significant main effect of Phonology on accuracy, F1 (1, 93) = 37.25, p < .001, ηp² = .29; F2 (1, 168) = 4.67, p < .05, ηp² = .03. Performance on English word pairs with phonologically unrelated ASL translations was more accurate than on pairs with phonologically related ASL translations. This effect was modulated by an interaction of Phonology and Group, F1 (2, 93) = 5.38, p < .01, ηp² = .10; F2 (2, 336) = 4.54, p < .02, ηp² = .03. As with response time, only the deaf participants were affected by the Phonology manipulation. Deaf participants made more errors when the English word pairs had phonologically related ASL translations (Balanced 12.2%, ASL-dominant 14.9%) than unrelated translations (Balanced 7.5%, ASL-dominant 10.7%). The monolinguals had similar error rates in the two conditions (11.7% phonologically related in ASL, 10.7% phonologically unrelated in ASL). In short, the groups were comparably accurate in the phonologically unrelated condition, but the bilingual participants made more errors than the monolingual group in the phonologically related condition. No other effects or interactions reached significance (see Table 3c).

Table 3c. Mean Error Rate (sd) by Group at Short and Long SOAs in the Semantically Unrelated Condition

Semantically related condition

Turning to the semantically related condition, the main effect of Group was significant by items, F2 (2, 388) = 65.00, p < .001, ηp² = .251, but only approached significance in the analysis by participants, F1 (2, 93) = 2.52, p = .09, ηp² = .051. Replicating prior findings that cross-language phonological similarity can enhance performance in sign–print bilinguals (Morford et al., 2011, 2014), there was a significant interaction of Phonology and Group on reaction time, F1 (2, 93) = 14.14, p < .001, ηp² = .23; F2 (2, 388) = 9.65, p < .001, ηp² = .05. Balanced bilinguals and ASL-dominant bilinguals were faster when responding to semantically related English word pairs with phonologically related ASL translations than to English word pairs with unrelated ASL translations (708 vs. 758 ms for balanced bilinguals, p < .05; 746 vs. 805 ms for ASL-dominant bilinguals, p < .01). Monolinguals, by contrast, did not differ in the two conditions (791 ms vs. 801 ms). There was also a main effect of SOA on reaction time, F1 (2, 93) = 7.23, p < .01, ηp² = .07; F2 (2, 388) = 16.06, p < .001, ηp² = .08. Responses were faster at the shorter SOA (758 ms) than at the longer SOA (777 ms). SOA did not interact with Group or Phonology. Paired comparisons evaluating the effect of Phonology at the short and long SOAs for the two bilingual groups produced mixed results. For the balanced bilinguals, the effect of Phonology at the short SOA was significant in the subject analysis but not in the item analysis; at the long SOA, it was significant in both analyses. For the ASL-dominant bilinguals, the effect of Phonology was significant in both subject and item analyses at the short SOA; at the long SOA, it was significant in the subject analysis but only approached significance in the item analysis (see Tables 4a and 4b).

Table 4a. Mean RT (sd) in ms by Group at Short and Long SOAs in the Semantically Related Condition

Table 4b. Effect of Phonology at Short and Long SOAs in the Semantically Related Condition

The only effect to reach significance in the analysis of the accuracy data in the semantically related condition was a main effect of Group, F1 (2, 93) = 5.37, p < .01, ηp² = .10; F2 (2, 388) = 20.44, p < .001, ηp² = .10. The hearing monolinguals made fewer errors (14.5%) than either the balanced bilinguals (19.2%) or the ASL-dominant bilinguals (18.6%). This group difference in accuracy may reflect the fact that hearing monolinguals encounter English words in a broader range of contexts, given their extensive exposure to both spoken and written English, whereas the deaf bilinguals' exposure to English is more restricted. The two bilingual groups did not differ in accuracy (see Table 4c).

Table 4c. Mean Error Rate (sd) by Group at Short and Long SOAs in the Semantically Related Condition

Discussion

We investigated how orthographic word forms activate phonological word forms in deaf sign–print bilinguals and hearing English monolinguals. Study participants completed an English semantic similarity judgment task. Half of the stimuli had phonologically related translations in ASL, allowing us to evaluate whether ASL phonological forms were active during the processing of English orthographic word forms. Critically, we manipulated the time course of the experiment to determine whether the activation of ASL phonological forms by English orthographic forms could be eliminated if participants did not have sufficient time to engage in post-lexical activation of ASL phonological forms, or in conscious translation. We replicated prior findings of cross-language activation between English print and ASL signs despite the introduction of a much faster rate of presentation of the stimuli. The results allow us to rule out the possibility that deaf ASL–English bilinguals only activate ASL phonological forms when given ample time for strategic or conscious translation across their two languages.

Further, we found that deaf bilinguals who are highly proficient in both ASL and English performed the experimental task much faster than hearing monolinguals, with no cost to accuracy. The difference between these populations was particularly pronounced when participants were rejecting word pairs that were semantically unrelated. In this condition, deaf balanced bilinguals were, on average, more than 100 ms faster than hearing monolinguals. This result is particularly interesting because the phonological manipulation had no impact on the monolinguals but actually slowed the performance of the deaf bilinguals when the English word pairs had phonologically similar translations in ASL. This result should allay any concerns that bilingualism may have negative consequences for language processing in deaf individuals; indeed, the deaf ASL signers responded to English words faster than the English monolinguals did. Further, the timing differences between the deaf and hearing participants make it highly unlikely that ASL phonological forms are only activated after both English phonological forms and semantics have been activated (Figure 1: Alternative 1). The deaf balanced bilinguals were significantly faster than the hearing monolinguals, and the deaf ASL-dominant bilinguals were comparable in reaction time to the hearing monolinguals. While the faster performance of the balanced bilinguals could be argued to reflect a selection effect of including only highly proficient bilinguals in this group, the fact that the ASL-dominant group performed as quickly in their non-dominant language as the monolingual group did in their dominant (and only) language makes the relative response times of the two groups more compelling. If the translation equivalents were activated post-lexically, then the inhibition introduced through their activation should be apparent in protracted response times of the experimental groups relative to the monolingual control group. This was not the case.

Two characteristics of language processing in deaf bilinguals may be relevant to understanding why deaf bilinguals are so fast at processing English orthographic forms. First, despite many claims that deaf readers must recode orthographic forms into spoken phonological forms in order to become good readers (Perfetti & Sandak, 2000; Wang et al., 2008), recent studies indicate that activation of spoken phonological codes does not distinguish good from poor deaf readers (Allen, Clark, del Giudice, Koo, Lieberman, Mayberry & Miller, 2009; Chamberlain & Mayberry, 2008; Mayberry, del Giudice & Lieberman, 2011). Indeed, Bélanger, Baum & Mayberry (2012) found that deaf readers relied on orthographic but not spoken-language phonological representations in both pre-lexical (masked priming) and post-lexical (recall) visual word processing tasks. Second, a recent ERP study of lexical access of ASL by deaf signers indicates that semantics may be accessed earlier in signers than in speakers, and that phonological form processing of signs may be linked with semantic processing more closely than has been found for lexical access of spoken words (Gutierrez, Williams, Grosvald & Corina, 2012). Our finding that deaf bilinguals are as fast as or faster than monolingual speakers of English at processing English print words, while also exhibiting a cross-language activation effect, makes untenable the proposal that deaf bilinguals' ASL knowledge is merely an appendage to their representation of English orthography and phonology. The question is how signed phonological representations are integrated with the orthographic and phonological representations of a spoken language, and whether this integration changes the time course of processing of orthographic word forms.

We investigated this question by manipulating the time course of the experiment. Importantly, the results replicate prior studies showing that ASL phonological forms are activated in a monolingual English task (Morford et al., 2011, 2014). The evidence for non-selective lexical access in deaf sign–print bilinguals is thus quite robust. In past studies using the semantic similarity judgment paradigm in participants' L2, the stimulus onset asynchrony (SOA) was comparatively long, about 1000 ms (Thierry & Wu, 2007; Morford et al., 2011). Guo et al. (2012) investigated whether effects of L1 form interference in the translation recognition task can be eliminated by reducing the SOA from 750 ms to 300 ms in an ERP study of Chinese–English bilinguals. At both SOAs, they found behavioral evidence of form and meaning interference in translation recognition performance, but the ERP record differed across the two conditions: in the short SOA condition, there was no significant effect of the form distractor on the P200. Guo et al. interpreted the results as evidence that bilinguals were not accessing the L1 translations of the L2 stimuli in order to complete the task. Their behavioral results do show effects of cross-language activation at the short SOA, but unlike the current study, their participants were presented with lexical forms in both languages: a target word and a translation or distractor from the non-target language.

We chose to implement a similar manipulation of SOA, this time in a monolingual task, comparing performance on English semantic similarity judgments when the second word was presented 300 ms or 750 ms after the first. At the long SOA, even a serial model positing initial activation of an English phonological form and/or semantics prior to an ASL phonological form would be consistent with the co-activation results that we found, since there would be sufficient time for the ASL translation of the first stimulus to become activated, along with all of the phonological neighbors of that sign, which would presumably include the ASL translation of the second English stimulus. However, even at the shorter SOA, deaf bilinguals' responses were influenced by the activation of L1 ASL phonological forms. These results are more consistent with a model of bilingual lexical access in which English orthographic forms directly activate ASL phonological forms (Figure 1: Alternative 3) than with indirect activation of ASL phonological forms subsequent to the activation of English phonological forms (Figure 1: Alternatives 1 or 2).
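The timing logic of this argument can be made concrete with a toy sketch. The latency constants below are hypothetical illustrations, not measured values: under a strictly serial model (Alternative 1), the ASL translation of the first word would not yet be active when the second word appears at the short SOA, but would be at the long SOA.

```python
# Toy sketch of the serial-activation argument. The stage latencies are
# hypothetical placeholders chosen only to illustrate the logic.
ENGLISH_PHON_MS = 250      # assumed latency to activate English phonology
SEMANTICS_MS = 150         # assumed additional latency to reach semantics
ASL_TRANSLATION_MS = 100   # assumed additional latency to the ASL form

# Under a serial model, the ASL translation becomes available only after
# English phonology and semantics have both been activated.
serial_asl_onset = ENGLISH_PHON_MS + SEMANTICS_MS + ASL_TRANSLATION_MS

for soa in (300, 750):
    active = serial_asl_onset <= soa
    print(f"SOA {soa} ms: ASL translation active under serial model? {active}")
```

With any latencies of roughly this magnitude, a serial model predicts a Phonology effect at the long SOA but not the short one; the observed effect at 300 ms therefore favors direct activation of ASL forms.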

Studies of hearing monolinguals and bilinguals have generally proposed earlier activation of phonological than of semantic representations on the basis of orthographic word forms. Perfetti & Tan (1998), for example, argue that orthographic–phonological relationships are privileged over orthographic–semantic relationships because the former are more reliable and the latter are more context-dependent. In other words, a single orthographic form is more likely to have multiple meanings than multiple pronunciations, so readers can be more confident in selecting the phonological form associated with an orthographic form than in selecting its meaning. For deaf readers, this assumption should be questioned. If deaf readers are activating phonological forms from the spoken language, these forms may be as variable and unreliable as semantic representations, due to restricted auditory experience or to the ontogenetic time course of the development of these representations (i.e., orthographic words are acquired prior to spoken phonological words). Alternatively, if deaf readers are activating phonological forms from the signed language, it is the relationship between sign phonology and semantics that is more predictable than the relationship between a spoken language orthography and a signed language phonology. Given the close relationship of form and meaning in signed languages (Wilcox, 2004), it may be rash to assume earlier activation of spoken phonological representations than of semantic representations for deaf bilinguals during written word processing. One approach that could bring new insights to this question would be masked priming (cf. Bélanger et al., 2012) in which the orthographic prime is selected to activate a cross-language competitor to the target's translation equivalent.

Studies of hearing bilinguals who acquire spoken languages with different orthographies, such as Japanese–English (Hoshino & Kroll, 2008) and Chinese–English (Wu & Thierry, 2010) bilinguals, have found that overlap in the written form of words in two spoken languages is not a requirement for cross-language activation: the phonology of word forms in both languages is activated by orthographic forms even when the orthography is not shared. Our study demonstrates that overlap in the phonological forms is also not a prerequisite for cross-language activation. It may be precisely because the cross-language phonological forms and/or articulatory routines activated by the orthographic string do not overlap that cross-language activation is so robust in sign–print bilinguals. With little need to inhibit lexical alternatives from multiple languages activated by print, deaf bilinguals may experience reinforcement from the simultaneous activation of signed and spoken phonological forms during print word processing. This interpretation is consistent with studies demonstrating faster processing of polysemous words and cognates relative to control words (Eddington & Tokowicz, 2015; Lemhöfer et al., 2008) and with studies demonstrating lexical consolidation across modalities (Bakker, Takashima, van Hell, Janzen & McQueen, 2014). Shook & Marian (2012) proposed a similar explanation for early parallel activation of signed and spoken phonological representations in hearing bimodal bilinguals. They used a visual world paradigm to explore whether hearing bimodal bilinguals would engage in parallel activation of ASL while listening to spoken English. They found very early activation of the ASL labels of the objects pictured in the display, and drew a parallel between the cross-modal activation of visual phonological representations in ASL and spoken phonological representations in English, on the one hand, and the activation of orthographic and phonological representations in monolinguals, on the other. Our results are not cross-modal, since both English orthographic and ASL phonological forms are visual. Nevertheless, what all of these studies share is very early and robust parallel activation when forms are not in competition.

Finally, these results suggest that a model of the bilingual lexicon will not be able to account for lexical processing in all bilinguals unless it can accommodate direct mappings between L2 orthographic word forms and L1 phonological word forms. Such mappings may be inconsequential for bilinguals who have access to the phonological system that an orthography was designed to capture, but in the absence of those phonological representations, bilinguals appear able to build robust associations between phonological forms from a different language and the target orthographic word forms. In other words, associations between phonological and orthographic representations need not be only those that the orthographic system was designed to capture. Presumably, this aspect of the configuration of the mental lexicon could apply to hearing bilinguals as well, but only in contexts in which there is sufficient exposure to meaningful orthographic word forms without exposure to the phonological word forms captured by the orthography.

In sum, the results replicate and extend previous studies demonstrating cross-language activation in deaf sign–print bilinguals (Kubuş et al., 2015; Morford et al., 2011, 2014; Ormel et al., 2012). Deaf sign–print bilinguals, but not hearing English monolinguals, respond faster to semantically related English word pairs with phonologically related ASL translations, and slower to semantically unrelated English word pairs with phonologically related ASL translations, than to pairs with phonologically unrelated ASL translations. The current study replicated this pattern even though the time course of presentation of the English stimuli was much shorter (300 ms) than in prior studies (1000 ms). The results contribute to the growing support for language-nonselective lexical access in deaf sign–print bilinguals. Further, the study found evidence that deaf sign–print bilinguals are considerably faster than hearing monolinguals at making decisions about English words. This novel finding indicates that deaf bilinguals may benefit from differences in lexical access unique to signed languages that extend to the processing of orthographic words. Namely, stronger, more direct connections between semantics and L1 phonology in signers may, in turn, result in faster connections between L2 orthography and semantics. Whether activation of L1 phonological forms is necessary for this pattern of activation to become established, or whether it is independent of L1 phonological word form processing, requires further investigation.

Together, the combined results of the co-activation effects and the speed of lexical processing help to distinguish among the models of bilingual word recognition proposed in the introduction to this article. Specifically, the pattern of results is not consistent with a model of word recognition in which orthographic strings activate sublexical phonological units in the spoken language prior to activating lexical-level representations in the signed language. Of the proposed models, these data are most consistent with a model in which orthographic word forms activate semantics as well as signed phonological forms (Ormel, 2008; Ormel et al., 2012), and in which spoken phonological forms are less entrenched and less likely to shape lexical processing in a deterministic manner. The fact that phonological representations in the two languages have different underlying motor and sensory representations could allow for a unique configuration of the bilingual lexicon that is specific to sign–print bilinguals.

Footnotes

*

We would like to thank the participants of our research, as well as Selina Agyen, Benjamin Anible, Richard Bailey, Brian Burns, Yunjae Hwang, Teri Jaquez, Carla Ring, and Paul Twitchell for help in programming, data collection, coding and analysis. Portions of this study were presented at the 11th Theoretical Issues in Sign Language Research Conference in London, England. This research was supported by the National Science Foundation Science of Learning Center Program, under cooperative agreement numbers SBE-0541953 and SBE-1041725. The writing of this article was also supported in part by NIH Grant HD053146 and NSF Grants BCS-0955090 and OISE-0968369 to Judith F. Kroll. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the National Institutes of Health or the National Science Foundation.

1 The sublexical structure of signs is traditionally described with four formational parameters: handshape, location, movement and orientation (Battison, 1978; Stokoe, Croneberg & Casterline, 1965).

2 The ASL-SRT is undergoing standardization and item analysis. A sample (n = 23) of native signers scored 66% correct on the initial version of the test, with a standard deviation of 10.2%. The minimum accuracy score on the ASL-SRT for the current study was selected by calculating two s.d. below the mean for native signers.
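The inclusion cutoff described in this footnote follows directly from the reported native-signer statistics; a minimal check of the arithmetic:

```python
# Recomputing the ASL-SRT inclusion cutoff described above:
# two standard deviations below the native-signer mean.
native_mean = 66.0  # percent correct, sample of 23 native signers
native_sd = 10.2    # percent

cutoff = native_mean - 2 * native_sd
print(f"Minimum ASL-SRT accuracy for inclusion: {cutoff:.1f}%")
```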

References

Allen, T. E., Clark, M. D., del Giudice, A., Koo, D., Lieberman, A., Mayberry, R., & Miller, P. (2009). Phonology and reading: A response to Wang, Trezek, Luckner, and Paul. American Annals of the Deaf, 154, 338–345.
Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Competition from unseen or unheard novel words: Lexical consolidation across modalities. Journal of Memory and Language, 73, 116–130.
Battison, R. (1978). Lexical borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Bélanger, N. N., Baum, S. R., & Mayberry, R. I. (2012). Reading difficulties in adult deaf readers of French: Phonological codes, not guilty! Scientific Studies of Reading, 16, 263–285.
Bijeljac-Babic, R., Biardeau, A., & Grainger, J. (1997). Masked orthographic priming in bilingual word recognition. Memory & Cognition, 25, 447–457.
Bishop, M., & Hicks, S. (2005). Orange Eyes: Bimodal bilingualism in hearing adults from Deaf families. Sign Language Studies, 5, 188–230.
Brysbaert, M., Van Dyck, G., & Van de Poel, M. (1999). Visual word recognition in bilinguals: Evidence from masked phonological priming. Journal of Experimental Psychology: Human Perception and Performance, 25, 137–148.
Chamberlain, C., & Mayberry, R. I. (2008). ASL syntactic and narrative comprehension in skilled and less skilled adult readers: Bilingual-bimodal evidence for the linguistic basis of reading. Applied Psycholinguistics, 28, 537–549.
Colin, S., Magnan, A., Ecalle, J., & Leybaert, J. (2007). Relation between deaf children's phonological skills in kindergarten and word recognition performance in first grade. Journal of Child Psychology and Psychiatry, 48, 139–146.
De Groot, A. M., & Nas, G. L. (1991). Lexical representation of cognates and noncognates in compound bilinguals. Journal of Memory and Language, 30, 90–123.
Dijkstra, T., & van Heuven, W. J. B. (2002). The architecture of the bilingual word recognition system: From identification to decision. Bilingualism: Language and Cognition, 5, 175–197.
Eddington, C. M., & Tokowicz, N. (2015). How meaning similarity influences ambiguous word processing: The current state of the literature. Psychonomic Bulletin & Review, 22, 13–37.
Emmorey, K., Borinstein, H. B., Thompson, R., & Gollan, T. H. (2008). Bimodal bilingualism. Bilingualism: Language and Cognition, 11, 43–61.
Guo, T., Misra, M., Tam, J. W., & Kroll, J. F. (2012). On the time course of accessing meaning in a second language: An electrophysiological and behavioral investigation of translation recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38, 1165–1186.
Gutierrez, E., Williams, D., Grosvald, M., & Corina, D. (2012). Lexical access in American Sign Language: An ERP investigation of effects of semantics and phonology. Brain Research, 1468, 63–83.
Hauser, P. C., Paludnevičienė, R., Supalla, T., & Bavelier, D. (2008). American Sign Language-Sentence Reproduction Test. In de Quadros, R. M. (ed.), Sign languages: Spinning and unraveling the past, present and future. Papers from the 9th Theoretical Issues in Sign Language Research Conference, Florianópolis, Brazil, December 2006, pp. 160–172. Petrópolis/RJ, Brazil: Editora Arara Azul.
Hermans, D., Knoors, H., Ormel, E., & Verhoeven, L. (2008). The relationship between the reading and signing skills of deaf children in bilingual education programs. Journal of Deaf Studies and Deaf Education, 13, 518–530.
Hoffmeister, R. J., & Caldwell-Harris, C. L. (2014). Acquiring English as a second language via print: The task for deaf children. Cognition, 132, 229–242.
Hoshino, N., & Kroll, J. F. (2008). Cognate effects in picture naming: Does cross-language activation survive a change of script? Cognition, 106, 501–511.
Jared, D., & Kroll, J. F. (2001). Do bilinguals activate phonological representations in one or both of their languages when naming words? Journal of Memory and Language, 44, 2–31.
Johnson, J. M., Watkins, R. V., & Rice, M. L. (1992). Bimodal bilingual language development in a hearing child of deaf parents. Applied Psycholinguistics, 13, 31–52.
Kubuş, O., Villwock, A., Morford, J. P., & Rathmann, C. (2015). Word recognition in deaf readers: Cross-language activation of German Sign Language and German. Applied Psycholinguistics, 36, 831–854. doi:10.1017/S0142716413000520.
Kuntze, M. (2004). Literacy acquisition and deaf children: A study of the interaction of ASL and written English. PhD dissertation, Stanford University.
Lemhöfer, K., Dijkstra, T., Schriefers, H., Baayen, R. H., Grainger, J., & Zwitserlood, P. (2008). Native language influences on word recognition in a second language: A megastudy. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 12–31.
Leybaert, J. (1993). Reading in the deaf: The roles of phonological codes. In Marschark, M. & Clark, M. D. (eds.), Psychological perspectives on deafness, pp. 269–309. Hillsdale, NJ: Erlbaum.
Marian, V., & Spivey, M. J. (2003). Competing activation in bilingual language processing: Within- and between-language competition. Bilingualism: Language and Cognition, 6, 97–115.
Martin, C. D., Costa, A., Dering, B., Hoshino, N., Wu, Y. J., & Thierry, G. (2012). Effects of speed of word processing on semantic access: The case of bilingualism. Brain and Language, 120, 61–65.
Mayberry, R. I., del Giudice, A. A., & Lieberman, A. M. (2011). Reading achievement in relation to phonological coding and awareness in deaf readers: A meta-analysis. Journal of Deaf Studies and Deaf Education, 16, 164–188.
Morford, J. P., Kroll, J. F., Piñar, P., & Wilkinson, E. (2014). Bilingual word recognition in deaf and hearing signers: Effects of proficiency and language dominance on cross-language activation. Second Language Research, 30, 251–271.
Morford, J. P., Wilkinson, E., Villwock, A., Piñar, P., & Kroll, J. F. (2011). When deaf signers read English: Do written words activate their sign translations? Cognition, 118, 286–292.
Nas, G. (1983). Visual word recognition in bilinguals: Evidence for a cooperation between visual and sound based codes during access to a common lexical store. Journal of Verbal Learning & Verbal Behavior, 22, 526–534.
Ormel, E. (2008). Visual word recognition in bilingual deaf children. PhD dissertation, Radboud University Nijmegen.
Ormel, E., Hermans, D., Knoors, H., & Verhoeven, L. (2012). Cross-language effects in visual word recognition: The case of bilingual deaf children. Bilingualism: Language and Cognition, 15, 288–303.
Perfetti, C. A., & Sandak, R. (2000). Reading optimally builds on spoken language: Implications for deaf readers. Journal of Deaf Studies and Deaf Education, 5, 32–50.
Perfetti, C. A., & Tan, L. H. (1998). The time course of graphic, phonological, and semantic activation in Chinese character identification. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 101–118.
Piñar, P., Dussias, P. E., & Morford, J. P. (2011). Deaf readers as bilinguals: An examination of deaf readers' print comprehension in light of current advances in bilingualism and second language processing. Language and Linguistics Compass, 5, 691–704.
Shook, A., & Marian, V. (2012). Bimodal bilinguals co-activate both languages during spoken comprehension. Cognition, 124, 314–324.
Shook, A., & Marian, V. (2013). The Bilingual Language Interaction Network for Comprehension of Speech. Bilingualism: Language and Cognition, 16, 304–324.
Stokoe, W., Croneberg, C., & Casterline, D. (1965). A dictionary of American Sign Language on linguistic principles. Washington, DC: Gallaudet College Press.
Sunderman, G., & Kroll, J. F. (2006). First language activation during second language lexical processing: An investigation of lexical form, meaning, and grammatical class. Studies in Second Language Acquisition, 28, 387–422.
Supalla, S. J., Wix, T. R., & McKee, C. (2001). Print as a primary source of English for deaf learners. In Nicol, J. & Langendoen, D. T. (eds.), One Mind, Two Languages: Studies in Bilingual Language Processing, pp. 177–190. Oxford: Blackwell Publishing.
Talamas, A., Kroll, J. F., & Dufour, R. (1999). From form to meaning: Stages in the acquisition of second-language vocabulary. Bilingualism: Language and Cognition, 2, 45–58.
Thierry, G., & Wu, Y. J. (2004). Electrophysiological evidence for language interference in late bilinguals. NeuroReport, 15, 1555–1558.
Thierry, G., & Wu, Y. J. (2007). Brain potentials reveal unconscious translation during foreign-language comprehension. Proceedings of the National Academy of Sciences, 104, 12530–12535.
Van Wijnendaele, I., & Brysbaert, M. (2002). Visual word recognition in bilinguals: Phonological priming from the second to the first language. Journal of Experimental Psychology: Human Perception and Performance, 28, 616–627.
Wilcox, S. (2004). Cognitive iconicity: Conceptual spaces, meaning, and gesture in signed languages. Cognitive Linguistics, 15, 119–147.
Wu, Y. J., Cristino, F., Leek, C., & Thierry, G. (2013). Non-selective lexical access in bilinguals is spontaneous and independent of input monitoring: Evidence from eye tracking. Cognition, 129, 418–425.
Wu, Y., & Thierry, G. (2010). Chinese–English bilinguals reading English hear Chinese. The Journal of Neuroscience, 30, 7646–7651.
Figure 1. Three alternative models of bilingual word recognition in deaf sign–print bilinguals based on the BIA+ model (Dijkstra & van Heuven, 2002)

Table 1. Mean (sd) and Range of Background Characteristics of Deaf Bilinguals

Table 2. Lexical characteristics of the English stimuli by condition