
Neural substrates of sign language vocabulary processing in less-skilled hearing M2L2 signers: Evidence for difficult phonological movement perception

Published online by Cambridge University Press:  06 July 2017

JOSHUA T. WILLIAMS*
Affiliation:
Department of Psychological and Brain Sciences, Indiana University Program in Cognitive Science, Indiana University Speech and Hearing Sciences, Indiana University
ISABELLE DARCY
Affiliation:
Program in Cognitive Science, Indiana University Second Language Studies, Indiana University
SHARLENE D. NEWMAN
Affiliation:
Department of Psychological and Brain Sciences, Indiana University Program in Cognitive Science, Indiana University Program in Neuroscience, Indiana University
*
Address for correspondence: Joshua Williams, Cognitive Neuroimaging Laboratory, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405. willjota@indiana.edu

Abstract

No previous research has investigated the neural correlates of vocabulary acquisition in second language learners of sign language. The present study investigated whether poor vocabulary knowledge engaged similar prefrontal lexico-semantic regions as seen in unimodal L2 learners. Behavioral improvements in vocabulary knowledge in a cohort of M2L2 learners were quantified. Results indicated a significant increase in vocabulary knowledge after one semester, which stabilized during the second semester. A longitudinal fMRI analysis was implemented for a subset of learners who were followed for the entire 10 months of initial sign language acquisition. The results indicated that learners who had poor sign vocabulary knowledge consistently showed greater activation in regions involved in motor simulation, salience, biological motion and spatial processing, and lexico-semantic retrieval. In conclusion, poor vocabulary knowledge requires greater engagement of modality-independent and modality-dependent regions, which could account for behavioral evidence of difficulty in visual phonology processing.

Type
Research Article
Copyright
Copyright © Cambridge University Press 2017 

1.1. Introduction

Many recent studies have explored how adults acquire a second language (for review see Kroll, Dussias, Bice & Perrotti, Reference Kroll, Dussias, Bice and Perrotti2015). These studies typically include hearing adults whose first language is a spoken language and who are subsequently acquiring another spoken language. In comparison, relatively few studies have investigated individuals who have one spoken language and one sign language (i.e., bimodal bilinguals or M2L2 learners). Furthermore, the study of the bimodal bilingual brain has been mostly restricted to native bimodal bilinguals who acquired their languages simultaneously (Emmorey & McCullough, Reference Emmorey and McCullough2009; Emmorey, Giezen & Gollan, Reference Emmorey, Giezen and Gollan2015; Emmorey, Grabowski, McCullough, Ponto, Hichwa & Damasio, Reference Emmorey, Grabowski, McCullough, Ponto, Hichwa and Damasio2005). Thus, it is important to expand our knowledge of sign language acquisition after a spoken language has already been established, because substantial neuroplastic reorganization and adaptation may be needed for a new visual language that diverges so greatly from learners’ native spoken language (Li, Abutalebi, Zou, Yan, Liu, Feng, Guo & Ding, Reference Li, Abutalebi, Zou, Yan, Liu, Feng, Guo and Ding2015; Zou, Ding, Abutalebi, Shu & Peng, Reference Zou, Ding, Abutalebi, Shu and Peng2012). Given that learning sign languages as a second language (L2) requires processing in different sensorimotor modalities (M2), M2L2 learners of sign languages are the perfect test case to explore modality-independent and modality-dependent neural mechanisms for L2 acquisition.

There have been several previous neuroimaging studies that have investigated the neural substrates of L2 vocabulary processing as well as functional and structural changes due to increasing proficiency (Grant, Fang & Li, Reference Grant, Fang and Li2015; Perani & Abutalebi, Reference Perani and Abutalebi2005; Saidi, Perlbarg, Marrelec, Pélégrini-Issac, Benali & Ansaldo, Reference Saidi, Perlbarg, Marrelec, Pélégrini-Issac, Benali and Ansaldo2013; Stein, Dierks, Brandeis, Wirth, Strik & Koenig, Reference Stein, Dierks, Brandeis, Wirth, Strik and Koenig2006; Stein, Federspiel, Koenig, Wirth, Strik, Wiest & Dierks, Reference Stein, Federspiel, Koenig, Wirth, Strik, Wiest and Dierks2012). For instance, bilinguals demonstrate prefrontal engagement during L2 lexico-semantic processing, which diminishes with increased L2 proficiency (Grant et al., Reference Grant, Fang and Li2015). Similar structural plasticity has been observed in left prefrontal areas (e.g., inferior frontal gyrus, IFG), such that higher proficiency learners have greater grey matter volume in the left IFG than low proficiency learners. Decreased functional activity and increased grey matter volume are thought to represent enhanced automaticity of lexico-semantic processing, which is due to reduced neural load and more efficient pathways, respectively, and is also a function of proficiency (Grant et al., Reference Grant, Fang and Li2015; Ishikawa & Wei, Reference Ishikawa and Wei2009; Stein et al., Reference Stein, Federspiel, Koenig, Wirth, Strik, Wiest and Dierks2012). Also, there is evidence that greater activation of cognitive control regions is required due to parallel competition between lexical items at initial stages of lexical acquisition (Abutalebi, Reference Abutalebi2008; Grant et al., Reference Grant, Fang and Li2015; Van Hell & Tanner, Reference van Hell and Tanner2012). 
Despite our growing knowledge of the neural substrates of L2 (within-modality) lexical acquisition, we still know relatively little about the acquisition of lexical signs.

The study of second language acquisition of sign language in adulthood is important to both our understanding of neuroplasticity and second language theory. Sign languages differ substantially from spoken languages in their primary articulators. Sign languages rely on arbitrary manual-visual phonetic codes to convey messages. Additionally, sign languages exploit the use of spatial dependencies for grammatical and discourse purposes (Sandler & Lillo-Martin, Reference Sandler and Lillo-Martin2006). The phonology of sign languages, as such, includes sublexical features that relate to the hands in space. The primary sublexical features of manual signs are hand configuration, palm orientation, place of articulation, and movement (Baker, van den Bogaerde, Pfau & Schermer, Reference Baker, van den Bogaerde, Pfau and Schermer2016; Brentari, Reference Brentari1998). Therefore, tracking of hands through space is important for the lexico-semantic processing of sign languages. Given that bimodal bilinguals and M2L2 learners are hearing adults who have experience with both spoken language (oral-auditory modality) and sign language (manual-visual modality), it is of prime interest to understand both the modality-specific and amodal aspects of language acquisition and neuroplasticity.

Studies on simultaneous bimodal bilingualism have delineated differences and similarities in speech and sign processing. Bimodal bilinguals (who acquired their two languages simultaneously) show greater activation in the bilateral parietal cortex and bilateral occipitotemporal cortex during single sign comprehension (Emmorey, McCullough, Mehta & Grabowski, Reference Emmorey, McCullough, Mehta and Grabowski2014; Söderfeldt, Ingvar, Rönnberg, Eriksson, Serrander & Stone-Elander, Reference Söderfeldt, Ingvar, Rönnberg, Eriksson, Serrander and Stone-Elander1997). These areas are thought to be unique to the processing of sign languages, and have been activated by spatial classifiers and verbs in American Sign Language (ASL) in addition to other demanding visual processing tasks. Sequential bimodal bilinguals show additional involvement of regions associated with language control, such as the anterior cingulate and basal ganglia (Li et al., Reference Li, Abutalebi, Zou, Yan, Liu, Feng, Guo and Ding2015; Zou et al., Reference Zou, Ding, Abutalebi, Shu and Peng2012). However, the time course for developing such activation patterns in late learners is unclear. Moreover, little is known about the changes in processing that occur in the bimodal bilingual or M2L2 learner brain between the initial, naïve state of learning and a more proficient level of language knowledge. Perhaps learners attempt to process lexical signs as gestures, but move towards lexico-semantic processing with time (see Williams, Darcy & Newman, Reference Williams, Darcy and Newman2016). Additionally, there is behavioral evidence that late M2L2 learners of sign language often have difficulty acquiring the visual-manual characteristics of the target sign language.
Specifically, not only do learners have difficulty acquiring sign language movement, but they also show an overreliance on the handshape of the sign (Bochner, Christie, Hauser & Searls, Reference Bochner, Christie, Hauser and Searls2011; Pichler, Reference Pichler, Mathur and Napoli2011; Grosvald, Lachaud & Corina, Reference Grosvald, Lachaud and Corina2012; Morford, Grieve-Smith, MacFarlane, Staley & Waters, Reference Morford, Grieve-Smith, MacFarlane, Staley and Waters2008; Morford & Carlson, Reference Morford and Carlson2011; Rosen, Reference Rosen2012; Schlehofer & Tyler, Reference Schlehofer and Tyler2016; Williams & Newman, Reference Williams and Newman2015, Reference Williams and Newman2016). Therefore, it may be that the ability to acquire an L2 sign language is based on the ability to have automatized processing in regions involved in spatial processing, biological motion, and hand processing, which subserve visuomotor phonetic processing.

A previous longitudinal neuroimaging study showed that M2L2 learners of ASL progressed stepwise through stages of lexical processing during approximately one year of university-level ASL instruction (Williams, Darcy & Newman, Reference Williams, Darcy and Newman2016). Before exposure to sign language, the learners processed ASL lexical signs using non-linguistic (or, at most, phonetic) regions implicated in visuospatial and motor processing, with significant activation in occipital (e.g., calcarine sulcus) and parietal (e.g., superior parietal lobule) lobes. After one semester of exposure, the learners showed greater cortical recruitment in the supramarginal gyrus and putamen, suggesting that they had transitioned into a phonological processing stage (Booth, Wood, Lu, Houk & Bitan, Reference Booth, Wood, Lu, Houk and Bitan2007). Subsequently, the learners transitioned into a lexico-semantic processing stage after 10 months of exposure, where they recruited greater activation in the left inferior frontal gyrus. It is possible that the progression of lexical processing is different for those who are poor learners, which was not investigated in the aforementioned study. It could be posited that poor vocabulary learners perseverate on the modality-specific aspects of sign language (e.g., visuospatial and visuomotoric features), or do not fully automatize such processing routines, which may block or delay successful sign language acquisition.

1.2. Aims

Given the gap in knowledge about M2L2 sign language acquisition, the present study aimed to characterize the neural substrates of vocabulary acquisition and lexical sign processing in hearing M2L2 learners of American Sign Language (ASL) at early stages of acquisition. This study used the same set of learners in Williams et al. (Reference Williams, Darcy and Newman2016) to investigate this question more directly. Specifically, there were three main aims of the present study:

The first aim was to characterize the pattern of vocabulary acquisition across the first year of instruction. No prior study has examined the lexical acquisition trajectory of M2L2 learners in foreign language classrooms; such a characterization is not only theoretically motivated, but also meaningful from an applied perspective, as it can show whether acquisition rates change during certain semesters. The expectation was that students would acquire new lexical knowledge in a relatively equal stepwise fashion.

Second, since learners have little knowledge of sign language at the start of their L2 instruction, we aimed to characterize how limited vocabulary knowledge affects M2L2 sign language processing. The individual variability in brain response prior to learning may be predictive of future attainment. For example, we hypothesized that students who later develop a smaller sign vocabulary process lexical signs as holistic visual objects, whereas those who later develop a larger sign vocabulary process them as decomposable linguistic objects. This difference in initial processing is expected to be observed in the brain activation patterns and to impact later learning.

Our third aim was to characterize how gains in vocabulary knowledge affect neural processing after 10 months of L2 instruction. Specifically, we examined whether the poor vocabulary learners, while controlling for gender differences, persisted with the holistic processing strategy after one year of instruction, or whether their strategy had shifted. We hypothesized that perhaps continued processing of signs using areas dedicated to biological motion (e.g., temporoparietal and occipitotemporal) and hand (e.g., intraparietal sulcus, putamen) processing would suggest the use of relatively less efficient strategies that focus on decoding the movement. Furthermore, given the year-long exposure to lexical signs, we might expect greater phonological and/or lexico-semantic processing (e.g., inferior frontal gyrus (IFG); “Broca's area”) at T2 because previous studies have shown lower proficiency learners recruit greater activation in IFG when viewing lexical items (Grant et al., Reference Grant, Fang and Li2015).

By examining these questions, the present study was able to elucidate how vocabulary knowledge affects the neural processing of ASL, especially in poor learners. Theoretically, it aimed to corroborate behavioral studies that have characterized deficits in M2L2 acquisition. From a practical perspective, if the neurocognitive profiles of poor vocabulary learners can be identified, which may be indicative of a global sign language deficit, then classroom interventions that aim to improve M2L2 acquisition can be developed to address those deficiencies. We aimed to answer these questions using a longitudinal design. First, changes in vocabulary knowledge over 10 months, or two semesters, of ASL exposure were tracked; this addressed our first aim and determined whether our subset of learners was representative of a larger sample. The pattern of neural activation in response to viewing ASL lexical signs was examined before and during L2 instruction in a subset of 12 hearing adult learners of ASL. The use of a longitudinal neuroimaging design affords us the opportunity to examine vocabulary acquisition without the confounding factors that plague cross-sectional designs (e.g., different brains, language and scholastic experiences, instructors, coursework, etc.).

2.1. Materials and Methods

2.1.1. Participants

Thirty-four (male = 10) hearing English-speaking college students participated. All participants were right-handed according to the Edinburgh Handedness scale (M = 85.6, SD = 19.1). All participants reported English as their first language. The participants in this study were recruited from introductory American Sign Language (ASL) courses at Indiana University. The participants had little to no exposure to ASL (or any other spoken second language) before enrollment in the ASL course. They were tested at three different time points: before ASL instruction (T0); at the end of one semester of instruction (T1); and at the end of a second semester of instruction (T2). All participants gave written consent to perform the experimental tasks, and the protocol was approved by the Indiana University Institutional Review Board.

At baseline (T0) the 34 participants had a mean age of 20.6 years (SD = 2.5). Participants were recruited during their first week of Beginning ASL I enrollment. On average, they had received 1.06 hours of instruction (range = 0–5, SD = 1.49). According to course instructors, the instruction in the first week of classes included an introduction to the course, the target language, and the target culture, but little linguistic instruction. Furthermore, most, if not all, instruction was conducted in English during the first week. That is, these participants had minimal to no linguistic knowledge of the target language. Additionally, the participants rated their proficiency in both English and ASL on a scale from 1 to 7 (1 = “Almost None”, 2 = “Very Poor”, 3 = “Fair”, 4 = “Functional”, 5 = “Good”, 6 = “Very Good”, 7 = “Like Native”) for their understanding and fluency abilities. Their average scores across both categories were 7 (SD = 0) for English and 1.15 (SD = 0.31) for ASL.

After one semester (approximately 13 weeks later), the participants were brought back. Twenty-five participants (male = 7) returned (i.e., attrition of nine participants) for their first post-exposure follow-up (T1). On average they had 43.98 (1.12) hours of instruction. They rated their ASL ability as a 3.32 (1.05) and their English ability as 7 (0). Course grades were also recorded and they received an average of 91.12% (4.87) in their ASL course.

After a second semester of ASL training, 12 participants (male = 5) returned (i.e., attrition of 13 participants) for their second post-exposure session (T2). On average they had 89.5 (1.95) hours of instruction and rated their ASL ability as a 3.92 (0.63). At T2, participants had an average of 91.7% (4.64) in their second ASL course.

2.1.2 Procedure

2.1.2.1 Vocabulary Test

Participants took a speeded vocabulary-translation test to obtain a gross measure of their vocabulary knowledge over time. The test was constructed by taking the signs from the current ASL textbooks across all four semesters of the current ASL curriculum (Smith et al., Reference Smith, Lentz and Mikos1988a, Reference Smith, Lentz and Mikos1988b, Reference Smith, Lentz and Mikos2008). Based on data retrieved from ASL-LEX (Caselli et al., Reference Caselli, Sehyr Sevcikova, Cohen-Goldberg and Emmorey2016), the test included 142 signs in total, which were relatively high frequency (M = 4.59, SD = 1.16; very frequent = 7, infrequent = 1) and non-iconic (M = 2.29, SD = 1.69; very iconic = 7, arbitrary = 1). During the speeded vocabulary-translation task, participants viewed video clips of the signed words produced by a native signer only once (Figure 1). Participants were required to type in the English translation (or a guess) within five seconds. An automated procedure was used to score their translations for correct answers, including any synonyms (e.g., bathroom or restroom would be accepted for bathroom). The total score, correct out of 142, was used as the participant's proficiency score.
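A minimal sketch of this automated scoring step is shown below; the synonym sets and trial responses are hypothetical examples, since the study's actual item list and scoring script are not reproduced here.

```python
# Hypothetical synonym sets; the study accepted synonyms such as
# "restroom" for the target "bathroom".
SYNONYMS = {
    "bathroom": {"bathroom", "restroom"},
    "film": {"film", "movie"},
}

def score_response(target, response):
    """Return True if the typed translation matches the target or a synonym."""
    accepted = SYNONYMS.get(target, {target})
    return response.strip().lower() in accepted

def total_score(answers):
    """answers: list of (target, typed_response) pairs; returns the correct count."""
    return sum(score_response(t, r) for t, r in answers)

trials = [("bathroom", "Restroom"), ("film", "movie"), ("film", "cinema")]
print(total_score(trials))  # 2 of the 3 responses are accepted
```

In the study, the analogous total (correct out of 142) served as the proficiency score.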

Figure 1. Vocabulary task design.

2.1.2.2. Phoneme Categorization Task

The impact of vocabulary knowledge on L2 sign language processing was also tracked using functional magnetic resonance imaging (fMRI). The participants performed a phoneme categorization task, which lasted nine minutes and included 30 categorization trials in total. Participants viewed a native signer signing words with the signer's full face and torso shown in front of a blue-gray backdrop. All stimuli (see Appendix) were high frequency monomorphemic signs from various word classes (i.e., nouns, verbs, adjectives; Caselli et al., Reference Caselli, Sehyr Sevcikova, Cohen-Goldberg and Emmorey2016). Signs were split into two groups: signs with place of articulation (i.e., location) on the head or face, and signs with the location on the body, non-dominant hand, or neutral space (i.e., not on the face). Recall that place of articulation (or location) is a sublexical feature of sign language and is considered similar to a phoneme (Baker et al., Reference Baker, van den Bogaerde, Pfau and Schermer2016; Brentari, Reference Brentari1998; Sandler & Lillo-Martin, Reference Sandler and Lillo-Martin2006). As such, this task is classified as a phoneme categorization task. The face and not-face locations were selected to match the bilabial (visible on the face) and non-bilabial (not visible on the face) task that was implemented in Williams et al. (Reference Williams, Darcy and Newman2015, Reference Williams, Darcy and Newman2016), while still being phonologically valid in ASL. Although the task itself was meaningless, it required participants to phonetically process the signs by sorting them into these two conditions, and the viewing of signs was thought to engage automatic lexical processing (similarly for audiovisual speech, see Campbell et al., Reference Campbell, MacSweeney, Surguladze, Calvert, McGuire, Brammer, David and Suckling2001).

The functional task was presented in an event-related design. For each trial, a 500-millisecond fixation point was presented before the video appeared. Each stimulus video varied in duration (M = 1593.33 ms, SD = 2.53 ms) and was followed by a jittered interstimulus interval (ISI range = 4000–8000 ms, M = 6000 ms). Participants were told to respond with the right index finger for signs that were produced on the face, and with the left index finger for signs that were produced on the body. They were instructed to make their responses as quickly and accurately as possible. In addition to the ISI, a 30-second fixation period was presented at the beginning of the task and was used as a baseline.
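The trial structure above can be sketched as a simple timeline generator; this is a hypothetical reconstruction, as the actual stimulus-presentation software is not specified in the text.

```python
import random

def build_timeline(video_durations_ms, lead_fixation_ms=30_000, seed=0):
    """Return video-onset times (ms) for an event-related run with a
    30-s lead-in fixation, 500-ms pre-stimulus fixation, and a jittered
    4000-8000 ms interstimulus interval, as described in the task design."""
    rng = random.Random(seed)
    t = lead_fixation_ms              # 30-second baseline fixation at task start
    onsets = []
    for dur in video_durations_ms:
        t += 500                      # pre-stimulus fixation point
        onsets.append(t)              # video onset
        t += dur                      # stimulus video
        t += rng.uniform(4000, 8000)  # jittered ISI (mean ~6000 ms)
    return onsets

onsets = build_timeline([1593] * 30)  # 30 trials of roughly mean duration
print(len(onsets), onsets[0])         # 30 trials; first video at 30500 ms
```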

2.1.3. Imaging Parameters

Participants underwent two scans using a 32-channel head coil and a Siemens 3 Tesla TIM Trio MRI scanner. The first scan was an anatomical T1-weighted scan used to co-register functional images. An MPRAGE sequence (160 sagittal slices; FOV = 256 mm, matrix = 256x256, TR = 2300 ms, TE = 2.91 ms, TI = 900 ms, flip angle = 9°, slice thickness = 1 mm, resulting in 1-mm × 1-mm × 1-mm voxels) was used. The remaining scans were the experimental functional multiband EPI scans (59 axial slices using the following protocol: field of view = 220 mm, matrix = 128x128, iPAT factor = 2, TR = 2000 ms, TE = 30 ms, flip angle = 60°, slice thickness = 3.8 mm, 0 gap).

2.1.4. Analysis Approach

Two different analysis methods (R software) were used to analyze the changes in vocabulary knowledge across the three time points. Due to a significant amount of attrition, only 12 subjects attended all three sessions.Footnote 1 Given that there was attrition in the number of subjects over time, the statistical methods needed to account for missing data. A typical analysis of variance would not be appropriate for missing data, or different sample sizes, across each time point. Therefore, two methods that are robust to missing data were used. First, a predictive mean matching method (k = 5) was used, in which missing values from the attrited participants were imputed and then assessed using pooled estimates across multiple regressions (Landerman, Land & Pieper, Reference Landerman, Land and Pieper1997). Including T1 was advantageous not only because it improves the imputation by contributing more data, but also because it provides a clearer understanding of vocabulary growth over time. Given some downsides to imputation (see Landerman et al., Reference Landerman, Land and Pieper1997), a linear mixed-effects model was also fit (Verbeke & Molenberghs, Reference Verbeke and Molenberghs2009). Both methods have their respective downsides when it comes to missing data; thus, any convergence of results between the two models was taken as an indicator of confidence. A correlation was performed between T0 and T2 scores and vocabulary growth (VocabT2 – VocabT0) for the 12 returning subjects to investigate whether the change was due to performance at baseline or after sign language exposure.
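As a toy illustration of predictive mean matching (the original analysis was run in R), missing T2 scores can be imputed by regressing T2 on T0 among completers and then borrowing an observed T2 value from one of the k = 5 donors whose predicted scores are closest. The data below are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
t0 = rng.normal(7, 2, 30)                  # baseline scores, all observed
t2 = 40 + 1.5 * t0 + rng.normal(0, 3, 30)  # follow-up scores
t2[20:] = np.nan                           # last 10 participants attrited

obs = ~np.isnan(t2)
X = np.column_stack([np.ones(obs.sum()), t0[obs]])
beta, *_ = np.linalg.lstsq(X, t2[obs], rcond=None)
pred_obs = X @ beta                        # predicted T2 for completers
pred_mis = beta[0] + beta[1] * t0[~obs]    # predicted T2 for attriters

k = 5
imputed = np.empty(pred_mis.size)
for i, p in enumerate(pred_mis):
    donors = np.argsort(np.abs(pred_obs - p))[:k]  # k nearest completers
    imputed[i] = t2[obs][rng.choice(donors)]       # borrow an observed value

# A key property of PMM: every imputed value is a genuinely observed score.
assert set(np.round(imputed, 6)) <= set(np.round(t2[obs], 6))
```

Because imputed values are drawn from the observed distribution, PMM avoids implausible predictions (e.g., fractional or out-of-range vocabulary scores).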

Functional images were analyzed using SPM8 (Wellcome Imaging Department, University College, London, UK; freely available at http://fil.ion.ucl.ac.uk/spm). As mentioned previously, only 12 learners participated in all three sessions; thus, only 12 subjects were included in the multiple linear regression analysis of the BOLD responses to ASL lexical signs. Given that vocabulary scores were relatively the same between T1 and T2, we decided to include only the T0 and T2 scanning sessions in the current analysis; additionally, T2 was thought to maximize the potential of finding neuroplastic effects, given that it was approximately 10 months post-baseline, compared to 3 months at T1, and had the highest amount of variability in vocabulary scores. During preprocessing, images were corrected for slice acquisition timing, resampled to 2-mm isotropic voxels, and spatially smoothed with a Gaussian filter with a 4-mm FWHM kernel. All data were high-pass filtered at 1/128 Hz to remove low-frequency signals (e.g., linear drifts). Six-parameter rigid body motion correction was performed, and motion parameters were incorporated into the design matrix as nuisance regressors in the General Linear Model (GLM). Each participant's anatomical scan was aligned and normalized to the standardized SPM8 T1 template, and the fMRI data were then co-registered to the high-resolution anatomical images.

At the individual (first) level, statistical analysis was performed using the standard GLM with Gaussian random fields in SPM8. The ASL stimulus onsets and durations were entered as our main regressors in the GLM in order to model the hemodynamic response function with stimulus events (Friston et al., 1995). BOLD signal from a common fixation baseline was subtracted from BOLD signal related to viewing ASL signs, and this served as our estimated contrast. For the second-level analysis on group data, multiple regression analyses were performed at each time point (T0 and T2) using each subject's ASL-Fixation contrast, with each learner's respective vocabulary score at that time point entered as a covariate. Vocabulary score was the only covariate added to the model. Both positive and negative correlations were tested. To correct for multiple comparisons, the image dimensions and the smoothing parameter of the processed data were entered into AFNI's 3dClustSim program in order to determine cluster sizes that would be significant given our voxel-wise p-value. Given the results of 5000 Monte Carlo simulations, both positive and negative regressions were performed using a voxel-wise p < 0.005, which was corrected to alpha < 0.05 with a cluster extent threshold of 62 voxels or moreFootnote 2.
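Conceptually, the second-level analysis fits, at every voxel, a regression of the 12 learners' ASL-minus-fixation contrast values on their vocabulary scores, yielding one t-statistic per voxel. The sketch below uses synthetic data to illustrate that idea; the real analysis was run in SPM8 with 3dClustSim cluster correction, not with this code.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_vox = 12, 1000
vocab = rng.normal(50, 10, n_subj)            # hypothetical vocabulary scores
contrast = rng.normal(0, 1, (n_subj, n_vox))  # per-subject contrast values
contrast[:, 0] -= 0.3 * vocab                 # build in one true negative slope

# Design matrix: intercept plus the vocabulary covariate.
X = np.column_stack([np.ones(n_subj), vocab])
beta, res, *_ = np.linalg.lstsq(X, contrast, rcond=None)
df = n_subj - 2
sigma2 = res / df                             # residual variance per voxel
xtx_inv = np.linalg.inv(X.T @ X)
se = np.sqrt(sigma2 * xtx_inv[1, 1])          # standard error of the slope
t_vals = beta[1] / se                         # negative t: more activation
print(t_vals[0])                              #   with poorer vocabulary
```

Cluster-extent correction then asks how large a contiguous set of supra-threshold voxels must be before it is unlikely under noise alone.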

Since vocabulary score was the main predictor in our multiple regression model, we wanted to ensure that there were no confounding factors, such as gender or task performance, that could explain our results. It is possible that task difficulty could explain the results; however, a correlation between vocabulary score and the learner's ability to classify sublexical features of sign language is also to be expected. If vocabulary score and task performance are correlated, then, following common procedures for dealing with collinearity in predictor variables, one will be excluded. In this case, we will exclude task performance because it does not directly address the current aims. If they are not correlated, then vocabulary will remain in the model.

Previous studies have also shown that women are better at acquiring second languages than men (van der Slik et al., Reference van der Slik, van Hout and Schepens2015). Therefore, we also analyzed whether our vocabulary scores differed based on gender using a point-biserial correlation at each time point separatelyFootnote 3. If there is a significant correlation with gender, then parameter estimates of the BOLD signal will be extracted from the significant clusters in our whole-brain analysis by defining a 5-mm sphere at the center-of-mass of each significant cluster using Marsbar, an SPM toolbox (Brett et al., Reference Brett, Anton, Valabregue and Poline2002). Parameter estimates will be correlated with vocabulary scores while controlling for gender in a post-hoc partial correlation. Only the clusters that survive the gender correction will be considered significant in our analyses. This method allows us to control for confounding factors while maintaining the integrity of our whole-brain analysis.
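Both checks are straightforward: a point-biserial correlation is simply Pearson's r computed with a binary (0/1) gender code, and the partial correlation removes gender from both variables before correlating them. The sketch below uses illustrative data; the study's own analyses were carried out in R and SPM/Marsbar.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient of two 1-D arrays."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z from each."""
    def residual(v):
        Z = np.column_stack([np.ones(z.size), z])
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta
    return pearson(residual(x), residual(y))

rng = np.random.default_rng(3)
gender = np.repeat([0.0, 1.0], 6)                # 0 = male, 1 = female (n = 12)
vocab = 50 + 8 * gender + rng.normal(0, 4, 12)   # females score higher here
bold = 10 - 0.5 * vocab + rng.normal(0, 1, 12)   # BOLD falls with vocabulary

print(pearson(vocab, gender))                    # point-biserial r
print(partial_corr(bold, vocab, gender))         # gender-corrected r
```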

The results from the Checking Multicollinearity section (2.2.2) will determine how the multiple regression analysis will be carried out with respect to confounding predictors.

2.2. Results

2.2.1. Vocabulary Growth

Figure 2 illustrates the average vocabulary score for the ASL learners at each time point. The regression results showing the size of vocabulary increases over time are presented in Table 1. Results from the pooled regressions, after missing data were estimated with ten imputations, indicated an intercept coefficient of 7.03 (SE = 2.11), a T1 coefficient of 35.9 (SE = 4.78), and a T2 coefficient of 40.4 (SE = 12.80). This indicates that, relative to baseline, there was a significantly larger increase in vocabulary scores at T1 (t = 7.52, p < 0.001), while T2 further increased the outcome by only around five lexical signs (t = 3.16, p < 0.05); both differences from baseline were statistically significant. Results from the linear mixed-effects model corroborated these findings. There was a significant increase relative to baseline for both T1 (t = 17.58, p < 0.001) and T2 (t = 17.09, p < 0.001).

Figure 2. Mean vocabulary scores at each time point averaged across all subjects.

Table 1. Vocabulary scores at each time point

No significant correlation was found between vocabulary growth and T0 gross vocabulary score [r = 0.218, p = 0.496], but there was a significant correlation between vocabulary growth and T2 gross vocabulary score [r = 0.918, p < 0.0001]. Therefore, those who had the largest increases in their score also had the largest T2 vocabulary, but the increase was not dependent on their baseline vocabulary score at T0.

2.2.2. Checking Multicollinearity

Task performance on the in-scanner phoneme categorization task was arcsine transformed and correlated with vocabulary score and gender at T0 and T2 separately. The analysis revealed trending or significant correlations between vocabulary score and performance at both T0 (r = 0.563, p = 0.057) and T2 (r = 0.989, p < 0.001). However, there was no significant effect of gender on task performance at either T0 (r = 0.083, p = 0.797) or T2 (r = −0.486, p = 0.055).
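The arcsine(sqrt(p)) transform applied above stabilizes the variance of proportion data (such as in-scanner accuracy) before Pearson correlations are computed. A minimal sketch, with illustrative accuracy values:

```python
import numpy as np

def arcsine_transform(p):
    """Variance-stabilizing arcsine(sqrt(p)) transform for proportions in [0, 1]."""
    return np.arcsin(np.sqrt(np.asarray(p, dtype=float)))

acc = np.array([0.70, 0.85, 0.90, 0.95, 1.00])  # hypothetical proportion correct
print(np.round(arcsine_transform(acc), 3))      # maps [0, 1] onto [0, pi/2]
```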

Point-biserial correlations between T0 vocabulary score and gender showed moderate negative correlation (r = −0.541) that was trending towards significance (p = 0.070), which means that those participants who reported as female tended to have higher vocabulary scores than those who reported as male. Similar correlations between T2 vocabulary and gender revealed a significant negative correlation (r = −0.714, p = 0.009).

These results revealed that phoneme categorization and vocabulary knowledge are significantly correlated, especially after one year of sign language instruction; however, no correction for task performance will be used in the multiple linear regression analysis (section 2.2.3), since the two measures are highly collinear and statistically account for much the same variability. On the other hand, gender did significantly influence vocabulary scores, with female learners outperforming male learners. As such, activation from significant clusters in our whole-brain multiple linear regression will be corrected for gender in our post-hoc analysis.

3.2.3. Multiple Linear Regression

Figure 3 shows the increased activation in response to viewing ASL signs that is correlated with poor vocabulary knowledge. Results from the multiple regression analysis (see Table 2) indicated a negative correlation such that subjects with lower sign vocabulary knowledge at baseline (T0) showed significantly greater activation in bilateral inferior frontal gyrus, left insula, and left middle frontal gyrus. All of these clusters survived correction for gender differences in vocabulary knowledge. There was also increased activation in left anterior cingulate gyrus and the anterior portion of the right superior frontal gyrus; however, these clusters only trended towards significance once gender was controlled. There were no significant positive correlations with vocabulary score; in other words, no regions showed increased activation to viewing ASL signs that was positively correlated with vocabulary knowledge.

Figure 3. Activation shown here represents BOLD signal which is negatively correlated with vocabulary scores at baseline (T0; top) and at the second post-exposure scan (T2; bottom) for the subset (n = 12) of learners. Therefore, activation seen here is representative of increased processing in these regions in response to viewing ASL signs for those with poor vocabulary knowledge. Note: there were no significant correlations with vocabulary score at T1.

Table 2. Multiple regression analysis (p-corrected < 0.05; k = 62)

Note: †p<0.1, *p < 0.05, **p < 0.01, ***p < 0.001

Results from the multiple regression analysis at T2 showed significant negative correlations in bilateral prefrontal cortex (including superior frontal, middle frontal, and inferior frontal gyri), the right posterior temporoparietal junction (around the superior temporal cortex), the right middle temporal gyrus, and the left temporal pole. All of these correlations survived correction for gender except for that in the right inferior frontal gyrus, which only trended towards significance. There were no significant positive correlations with vocabulary score at T2.

4. General discussion

The overall objective of the present longitudinal study was to broadly capture how second language (L2) signed vocabulary knowledge affects neural activation during the processing of ASL. Specifically, there were three main research aims. First, we wanted to characterize vocabulary acquisition over two semesters of instruction. Second, given that these learners had little knowledge of sign language at the start of their L2 instruction, one aim was to characterize how limited vocabulary knowledge affects processing of a novel second language. The last aim was to characterize how vocabulary knowledge affects neural processing after 10 months of L2 instruction; specifically, we were interested in how vocabulary knowledge modulated the difference in ASL processing between T0 and after two semesters of instruction (T2), in order to capture potentially diagnostic neurocognitive profiles of learners who struggle with acquiring sign. The results indicated that poor ASL vocabulary knowledge was associated with increased activation in regions commonly involved in lexico-semantic and phonological processing and decision making, as well as in modality-specific processing of multimodal integration, salience, and biological motion. This pattern fits our overall theoretical framework that nonnative signers struggle with the visuomotoric properties of signs, namely movement, and that this impedes their ability to acquire sign language.

In order to characterize how very limited knowledge may impact vocabulary acquisition and processing, ASL learners who had relatively little knowledge of ASL at the beginning of L2 instruction were studied. Despite their limited knowledge, these learners varied in their lexicon size. It was hypothesized that learners with poor incoming vocabulary knowledge would show greater activation in regions associated with modality-specific visuomotoric phonetic processing and, perhaps, lexico-semantic processing for known signs that are phonologically related to the target sign (neighbors). The pattern of results from baseline measurements (T0) confirmed our predictions, where increased activation in bilateral inferior frontal gyrus (IFG), left insula and anterior cingulate cortex (ACC), left middle frontal gyrus, and the anterior portion of the right superior frontal gyrus was observed. We will speculate on the role of each activated region in turn.

The left anterior insula has been implicated in language processing (Ardila, Bernal, & Rosselli, 2014). More generally, however, activation of the left anterior insula may be indicative of increased difficulty in cross-modal multisensory integration (Allen et al., 2008; Kurth et al., 2010). Allen and colleagues argue that the insula's connectivity to sensory cortices lends itself to multisensory integration, especially for sign language, which requires visual-tactile-vestibular integration. Similarly, co-activation of the frontoinsular cortex and the ACC points to increased salience processing (Seeley et al., 2007; Uddin, 2015), where the insula is thought to be important for interoceptive and visceromotor body processing. The neural signal from frontoinsular cortex flows to the central executive network, including the ACC, which initiates decision-making (Uddin, 2015). High levels of activation in the anterior cingulate and other prefrontal regions suggest that these learners also required greater effort in decision making and control (Allman et al., 2001; Cohen et al., 2000; Sohn et al., 2007), and perhaps needed to cope with greater task demands (Burgess et al., 2007). As such, it can be hypothesized that learners with smaller vocabularies require more neural resources to integrate the visual and motoric salience of signs so that their phonological content can be determined; poor vocabulary knowledge is therefore likely linked to less efficient visuomotor phonetic perception, which requires greater neural resources.

Support for poor visuomotor phonetic perception comes from the distributed activation in the frontal lobe. The anterior portion of the superior frontal gyrus has been implicated in pseudosign recognition in hearing nonnative signers (Emmorey, Xu, & Braun, 2011). Additionally, activation in the middle frontal gyrus corresponds to spatial judgements and perspective-taking (Kaiser et al., 2008; Smith et al., 2010), and activation in bilateral inferior frontal gyri is implicated in movement imitation (Corina & Knapp, 2006; Newman-Norlund et al., 2007). In particular, left IFG has also been implicated in phonological processing of the hand in sign language (Corina, 1999).

Together, these findings suggest that sign language learners with poorer vocabulary knowledge, and perhaps less exposure to sign language, show greater activation in regions involved in multimodal integration, salience, visuospatial and motor phonological processing, and decision making. This pattern of activation might be a predictor of later M2L2 deficits in vocabulary acquisition. Its predictive power must be tested in future experiments, but we can evaluate its potential validity by examining activation after one year of exposure.

After two semesters of L2 instruction, or about 10 months, there was a significant increase in the learners' vocabulary scores. The study also aimed to investigate whether there was a change in the neural substrates recruited by those learners with poor vocabulary knowledge. In fact, the overall pattern of activation was similar at T2 insofar as we observed recruitment of prefrontal and frontal cortex. This means that learners with poor vocabulary scores, even after 10 months of exposure, required greater neural resources when viewing ASL signs than those with higher vocabulary scores. In particular, the left IFG was activated. Left IFG has been implicated in the selection of semantic information, where increased activation reflects more effortful processing and increased difficulty in selecting among competing information (Fiez, 1997; Sakai, 2005; Thompson-Schill et al., 1997; Vigneau et al., 2006). Right IFG has also been shown to be activated during word retrieval when more processing is needed, including in bilinguals (Blasi et al., 2002; Marian et al., 2003, 2007). Previous studies of L2 vocabulary acquisition have also shown increased grey matter volume in left IFG for high-proficiency learners, suggesting more automatized lexical processing (Stein et al., 2012), and increased functional activation in IFG for low-proficiency relative to high-proficiency learners (Ishikawa & Wei, 2009). There was also differential activation at T2 in the temporal lobes: we observed additional bilateral temporal lobe activation, and left anterior temporal pole activation has been implicated in the semantic network (Price, 2010). As such, it can be argued that these learners with poor vocabulary knowledge needed greater recruitment for word retrieval. Such word retrieval difficulties may stem from poor phonological processing more generally.

There is a consistent relationship between L2 phonological processing skills and vocabulary acquisition in spoken languages (Bundgaard-Nielsen, Best, Kroos & Tyler, 2012; Bundgaard-Nielsen, Best & Tyler, 2011a, b; Darcy, Park & Yang, 2015). In other words, previous studies have shown that L2 learners' vocabulary size expands as a function of their ability to make phonological contrasts in the L2. Given this relationship, it can be hypothesized that the aforementioned deficits in the M2L2 population are tied to poor phonological processing, especially given the activation in left IFG. Studies of both spoken and signed language have shown that left IFG is related to phonological processing (Corina, 1999; Corina & Knapp, 2006; Emmorey, 2015; Heim et al., 2009; Vigneau et al., 2006; Williams et al., 2015). For example, Corina (1999) found that left inferior frontal areas are important for recognition of brachiomanual articulation of signs. Moreover, Emmorey and colleagues (2015) similarly found that left IFG is important for complex manual movements. Activation in the supplementary motor area suggests potential motor simulation for bimanual coordination and movement sequence timing (Serrien et al., 2002; Shima & Tanji, 1998). Therefore, after more experience with signs, those learners with poor vocabulary knowledge also required greater phonological activation when viewing ASL signs. This may result from poor modality-specific (i.e., hand and motion) processing that subserves sign language phonological contrasts.

Right hemispheric recruitment of the cortex surrounding the temporoparietal junction has been implicated in hand processing and biological motion (see Puce & Perrett, 2003 for a review). Such an activation pattern may indicate difficulty in acquiring a visual phonology. Difficulty acquiring visual phonology has been reported several times in L2 learners of sign language (Bochner et al., 2011; Grosvald et al., 2012; Morford et al., 2008; Morford & Carlson, 2011; Pichler, 2011; Rosen, 2012; Schlehofer & Tyler, 2016; Williams & Newman, 2015, 2016). Behavioral data have shown that L2 learners of sign language often have more difficulty processing the handshape and movement phonological parameters than other parameters (Bochner et al., 2011; Morford & Carlson, 2011; Williams & Newman, 2015, 2016). Movement is important for sign acquisition because the phonological sequencing of the syllable (i.e., sonority) is directly related to movement, which is the syllable nucleus (e.g., Brentari, 1998). Therefore, an inability to process the visuomotor phonetic properties of signs, particularly movement, hinders the construction of legal syllable structure, potentially preventing consolidation into, or retrieval from, lexical memory. Here, we may have some of the first neural evidence that difficulty with biological motion processing, which underlies the phonetic and phonological foundation of a visual language, contributes to poor sign language vocabulary acquisition, as indicated by increased activation.

Taken together, the results of the present study revealed that second language learners require activation of modality-independent neural substrates for lexico-semantic processing, such as the inferior frontal gyrus and temporal pole. Additionally, the results indicated that L2 learners of sign language require additional processing in areas involved in multimodal integration, salience, biological motion processing, and motor simulation. This is the first longitudinal neuroimaging study to investigate modality-independent and modality-dependent mechanisms for second language acquisition of sign language in relation to a measure of proficiency (see Williams et al., 2016 for measures related to length of exposure). Therefore, this study provides additional neural evidence that L2 sign language proficiency (via lexical knowledge) rests on the ability to acquire and process visual phonology. It should be noted that the present study was conducted on a small sample of 12 learners, although this sample was representative of a larger group of learners. Future studies will need to examine whether this is a reliable effect and whether these profiles are truly diagnostic of future, perhaps long-term, deficits in M2L2 acquisition.

Appendix

Table A.1. Words included in the fMRI task

Footnotes

*Supported by the National Science Foundation (NSF) Graduate Research Fellowship #1342962 (JTW). Funding also provided by the Indiana University Imaging Research Facility Brain Scan Credit Program (JTW, ID & SDN).

1 All 12 learners (male = 5) were right-handed according to the Edinburgh Handedness scale (M = 87.5, SD = 19.1) with a mean age of 20 (SD = 1.7). All participants reported English as their first language. On average they had 1.06 (SD = 1.49), 44.12 (SD = 1.00), and 89.5 (SD = 1.95) hours of instruction at T0, T1, and T2, respectively. These learners self-rated their ASL ability as 1.21 (SD = 0.37), 3.12 (SD = 0.82), and 3.92 (SD = 0.63) at T0, T1, and T2, respectively. The learners also had an average of 90.8% (SD = 4.26) and 91.7% (SD = 4.64) in their ASL course at T1 and T2, respectively. The mean number of vocabulary signs known for this subset of learners was 8.08 (SD = 3.3), 46.06 (SD = 9.5), and 55.8 (SD = 9.2) at T0, T1, and T2, respectively. These values (i.e., age, handedness, hours of exposure, grades, and self-ratings) are consistent with the overall sample descriptive statistics, suggesting that this subset can be treated as a representative sample of the larger group of L2 ASL learners at T0 and T1.

2 These clusters happened to also pass family-wise error (FWE) p<0.05 cluster correction implemented in SPM8.

3 Vocabulary scores were also compared between genders using a Wilcoxon rank-sum test, which corroborated the results from the correlation analysis (T0: Z = −1.8, p = 0.072; T2: Z = −2.2, p = 0.028).

References

Abutalebi, J. (2008). Neural aspects of second language representation and language control. Acta Psychologica, 128 (3), 466–478.
Allen, J. S., Emmorey, K., Bruss, J., & Damasio, H. (2008). Morphology of the insula in relation to hearing status and sign language experience. The Journal of Neuroscience, 28 (46), 11900–11905.
Allman, J. M., Hakeem, A., Erwin, J. M., Nimchinsky, E., & Hof, P. (2001). The anterior cingulate cortex. Annals of the New York Academy of Sciences, 935 (1), 107–117.
Ardila, A., Bernal, B., & Rosselli, M. (2014). Participation of the insula in language revisited: a meta-analytic connectivity study. Journal of Neurolinguistics, 29, 31–41.
Baker, A., van den Bogaerde, B., Pfau, R., & Schermer, T. (Eds.). (2016). The Linguistics of Sign Languages: An introduction. Amsterdam: John Benjamins Publishing Company.
Blasi, V., Young, A. C., Tansy, A. P., Petersen, S. E., Snyder, A. Z., & Corbetta, M. (2002). Word retrieval learning modulates right frontal cortex in patients with left frontal damage. Neuron, 36 (1), 159–170.
Bochner, J. H., Christie, K., Hauser, P. C., & Searls, J. M. (2011). When is a difference really different? Learners' discrimination of linguistic contrasts in American Sign Language. Language Learning, 61 (4), 1302–1327.
Booth, J.R., Wood, L., Lu, D., Houk, J.C., & Bitan, T. (2007). The role of the basal ganglia and the cerebellum in language processing. Brain Research, 1133, 136–144.
Brentari, D. (1998). A prosodic model of sign language phonology. Cambridge: MIT Press.
Brett, M., Anton, J. L., Valabregue, R., & Poline, J. B. (2002). Region of interest analysis using the MarsBar toolbox for SPM 99. Neuroimage, 16 (2), S497.
Bundgaard-Nielsen, R.L., Best, C.T., & Tyler, M.D. (2011a). Vocabulary size matters: The assimilation of second-language Australian English vowels to first-language Japanese vowel categories. Applied Psycholinguistics, 32, 51–67.
Bundgaard-Nielsen, R.L., Best, C.T., & Tyler, M.D. (2011b). Vocabulary size associated with second-language vowel perception performance in adult learners. Studies in Second Language Acquisition, 33 (3), 433–461.
Bundgaard-Nielsen, R.L., Best, C.T., Kroos, C., & Tyler, M.D. (2012). Second language learners' vocabulary expansion is associated with improved second language vowel intelligibility. Applied Psycholinguistics, 33 (3), 643–664.
Burgess, P. W., Gilbert, S. J., & Dumontheil, I. (2007). Function and localization within rostral prefrontal cortex (area 10). Philosophical Transactions of the Royal Society B: Biological Sciences, 362 (1481), 887–899.
Campbell, R., MacSweeney, M., Surguladze, S., Calvert, G.A., McGuire, P.K., Brammer, M.J., David, A.S., & Suckling, J. (2001). Cortical substrates for the perception of face actions: an fMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning). Cognitive Brain Research, 12, 233–243.
Caselli, N., Sehyr Sevcikova, Z., Cohen-Goldberg, A., & Emmorey, K. (2016). ASL-Lex: A Lexical Database for American Sign Language. Behavior Research Methods, 1–18.
Cohen, J. D., Botvinick, M., & Carter, C. S. (2000). Anterior cingulate and prefrontal cortex: who's in control? Nature Neuroscience, 3, 421–423.
Corina, D. P. (1999). Neural disorders of language and movement: Evidence from American Sign Language. Gesture, Speech, and Sign, 27–44.
Corina, D. P., & Knapp, H. P. (2006). Psycholinguistic and neurolinguistic perspectives on sign languages. Handbook of Psycholinguistics, 2, 1001–1024.
Darcy, I., Park, H., & Yang, C. L. (2015). Individual differences in L2 acquisition of English phonology: The relation between cognitive abilities and phonological processing. Learning and Individual Differences, 40, 63–72.
Emmorey, K. (2015). The Neurobiology of Sign Language. In Toga, A. W. (Ed.). Waltham: Academic Press, 475–479.
Emmorey, K., Xu, J., & Braun, A. (2011). Neural responses to meaningless pseudosigns: evidence for sign-based phonetic processing in superior temporal cortex. Brain and Language, 117 (1), 34–38.
Emmorey, K., & McCullough, S. (2009). The bimodal bilingual brain: Effects of sign language experience. Brain and Language, 109 (2), 124–132.
Emmorey, K., Giezen, M. R., & Gollan, T. H. (2015). Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism: Language and Cognition, 1–20.
Emmorey, K., Grabowski, T., McCullough, S., Ponto, L. L., Hichwa, R. D., & Damasio, H. (2005). The neural correlates of spatial language in English and American Sign Language: a PET study with hearing bilinguals. Neuroimage, 24 (3), 832–840.
Emmorey, K., McCullough, S., Mehta, S., & Grabowski, T. J. (2014). How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language. Frontiers in Psychology, 5.
Fiez, J. A. (1997). Phonology, semantics, and the role of the left inferior prefrontal cortex. Human Brain Mapping, 5 (2), 79–83.
Friston, K. J., Holmes, A. P., Poline, J. B., Grasby, P. J., Williams, S. C. R., Frackowiak, R. S., & Turner, R. (1995). Analysis of fMRI time-series revisited. Neuroimage, 2 (1), 45–53.
Grant, A. M., Fang, S. Y., & Li, P. (2015). Second language lexical development and cognitive control: A longitudinal fMRI study. Brain and Language, 144, 35–47.
Grosvald, M., Lachaud, C., & Corina, D. (2012). Handshape monitoring: Evaluation of linguistic and perceptual factors in the processing of American Sign Language. Language and Cognitive Processes, 27 (1), 117–141.
Heim, S., Eickhoff, S. B., Ischebeck, A. K., Friederici, A. D., Stephan, K. E., & Amunts, K. (2009). Effective connectivity of the left BA 44, BA 45, and inferior temporal gyrus during lexical and phonological decisions identified with DCM. Human Brain Mapping, 30 (2), 392–402.
Ishikawa, S. I., & Wei, Q. (2009). Brain imaging for SLA research: An fMRI study of L2 learners' different levels of word semantic processing. Brain Topography and Multimodal Imaging, 41–44.
Kaiser, S., Walther, S., Nennig, E., Kronmüller, K., Mundt, C., Weisbrod, M., Stippich, C., & Vogeley, K. (2008). Gender-specific strategy use and neural correlates in a spatial perspective taking task. Neuropsychologia, 46 (10), 2524–2531.
Kroll, J. F., Dussias, P. E., Bice, K., & Perrotti, L. (2015). Bilingualism, Mind, and Brain. Annual Review of Linguistics, 1 (1), 377–394.
Kurth, F., Zilles, K., Fox, P. T., Laird, A. R., & Eickhoff, S. B. (2010). A link between the systems: functional differentiation and integration within the human insula revealed by meta-analysis. Brain Structure and Function, 214 (5-6), 519–534.
Landerman, L. R., Land, K. C., & Pieper, C. F. (1997). An empirical evaluation of the predictive mean matching method for imputing missing values. Sociological Methods & Research, 26 (1), 3–33.
Li, L., Abutalebi, J., Zou, L., Yan, X., Liu, L., Feng, X., Guo, T., & Ding, G. (2015). Bilingualism alters brain functional connectivity between "control" regions and "language" regions: Evidence from bimodal bilinguals. Neuropsychologia, 71, 236–247.
Marian, V., Shildkrot, Y., Blumenfeld, H. K., Kaushanskaya, M., Faroqi-Shah, Y., & Hirsch, J. (2007). Cortical activation during word processing in late bilinguals: similarities and differences as revealed by functional magnetic resonance imaging. Journal of Clinical and Experimental Neuropsychology, 29 (3), 247–265.
Marian, V., Spivey, M., & Hirsch, J. (2003). Shared and separate systems in bilingual language processing: Converging evidence from eyetracking and brain imaging. Brain and Language, 86 (1), 70–82.
Morford, J. P., & Carlson, M. L. (2011). Sign perception and recognition in non-native signers of ASL. Language Learning and Development, 7 (2), 149–168.
Morford, J. P., Grieve-Smith, A. B., MacFarlane, J., Staley, J., & Waters, G. (2008). Effects of language experience on the perception of American Sign Language. Cognition, 109 (1), 41–53.
Newman-Norlund, R. D., van Schie, H. T., van Zuijlen, A. M., & Bekkering, H. (2007). The mirror neuron system is more active during complementary compared with imitative action. Nature Neuroscience, 10 (7).
Perani, D., & Abutalebi, J. (2005). The neural basis of first and second language processing. Current Opinion in Neurobiology, 15 (2), 202–206.
Pichler, C. D. (2011). Sources of handshape error in first-time signers of ASL. In Mathur, G. & Napoli, D. J. (Eds.), Deaf around the world: The impact of language (pp. 96–121). Oxford: Oxford University Press.
Price, C. J. (2010). The anatomy of language: a review of 100 fMRI studies published in 2009. Annals of the New York Academy of Sciences, 1191 (1), 62–88.
Puce, A., & Perrett, D. (2003). Electrophysiology and brain imaging of biological motion. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 358 (1431), 435–445.
Rosen, R. (2012). Beginning L2 Production Errors in ASL Lexical Phonology. Sign Language and Linguistics, 7, 31–61.
Saidi, L. G., Perlbarg, V., Marrelec, G., Pélégrini-Issac, M., Benali, H., & Ansaldo, A. I. (2013). Functional connectivity changes in second language vocabulary learning. Brain and Language, 124 (1), 56–65.
Sakai, K. L. (2005). Language acquisition and brain development. Science, 310 (5749), 815–819.
Sandler, W., & Lillo-Martin, D. (2006). Sign language and linguistic universals. Cambridge: Cambridge University Press.
Schlehofer, D., & Tyler, I.J. (2016). Errors in second language learners' production of phonological contrasts in American Sign Language. International Journal of Language and Linguistics, 3 (2), 30–38.
Seeley, W. W., Menon, V., Schatzberg, A. F., Keller, J., Glover, G. H., Kenna, H., Reiss, A.L., & Greicius, M. D. (2007). Dissociable intrinsic connectivity networks for salience processing and executive control. The Journal of Neuroscience, 27 (9), 2349–2356.
Serrien, D. J., Strens, L. H., Oliviero, A., & Brown, P. (2002). Repetitive transcranial magnetic stimulation of the supplementary motor area (SMA) degrades bimanual movement control in humans. Neuroscience Letters, 328 (2), 89–92.
Shima, K., & Tanji, J. (1998). Role for cingulate motor area cells in voluntary movement selection based on reward. Science, 282 (5392), 1335–1338.
Smith, D. V., Davis, B., Niu, K., Healy, E. W., Bonilha, L., Fridriksson, J., & Rorden, C. (2010). Spatial attention evokes similar activation patterns for visual and auditory stimuli. Journal of Cognitive Neuroscience, 22 (2), 347–361.
Smith, C., Lentz, E., & Mikos, K.P. (1988a). Signing Naturally Level 1. San Diego, California: DawnSignPress.
Smith, C., Lentz, E., & Mikos, K.P. (1988b). Signing Naturally Level 2. San Diego, California: DawnSignPress.
Smith, C., Lentz, E., & Mikos, K.P. (2008). Signing Naturally. San Diego, California: DawnSignPress.
Söderfeldt, B., Ingvar, M., Rönnberg, J., Eriksson, L., Serrander, M., & Stone-Elander, S. (1997). Signed and spoken language perception studied by positron emission tomography. Neurology, 49 (1), 82–87.
Sohn, M. H., Albert, M. V., Jung, K., Carter, C. S., & Anderson, J. R. (2007). Anticipation of conflict monitoring in the anterior cingulate cortex and the prefrontal cortex. Proceedings of the National Academy of Sciences, 104 (25), 10330–10334.
Stein, M., Dierks, T., Brandeis, D., Wirth, M., Strik, W., & Koenig, T. (2006). Plasticity in the adult language system: A longitudinal electrophysiological study on second language learning. Neuroimage, 33 (2), 774–783.
Stein, M., Federspiel, A., Koenig, T., Wirth, M., Strik, W., Wiest, R., & Dierks, T. (2012). Structural plasticity in the language system related to increased second language proficiency. Cortex, 48 (4), 458–465.
Thompson-Schill, S. L., D'Esposito, M., Aguirre, G. K., & Farah, M. J. (1997). Role of left inferior prefrontal cortex in retrieval of semantic knowledge: A reevaluation. Proceedings of the National Academy of Sciences, 94 (26), 14792–14797.
Uddin, L. Q. (2015). Salience processing and insular cortical function and dysfunction. Nature Reviews Neuroscience, 16 (1), 55–61.
van der Slik, F. W., van Hout, R. W., & Schepens, J. J. (2015). The gender gap in second language acquisition: Gender differences in the acquisition of Dutch among immigrants from 88 countries with 49 mother tongues. PloS One, 10 (11), e0142056.
van Hell, J. G., & Tanner, D. (2012). Second Language Proficiency and Cross-Language Lexical Activation. Language Learning, 62 (2), 148–171.
Verbeke, G., & Molenberghs, G. (2009). Linear mixed models for longitudinal data. Springer Science & Business Media.
Vigneau, M., Beaucousin, V., Herve, P. Y., Duffau, H., Crivello, F., Houde, O., Mazoyer, N., & Tzourio-Mazoyer, N. (2006). Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing. Neuroimage, 30 (4), 1414–1432.
Williams, J.T., & Newman, S.D. (2015). Interlanguage dynamics and lexical networks in nonnative L2 signers of ASL: Cross-modal rhyme priming. Bilingualism: Language and Cognition, 1–18. doi: 10.1017/S136672891500019X
Williams, J.T., & Newman, S.D. (2016). Phonological substitution errors in L1 ASL sentence processing by hearing M2L2 learners. Second Language Research, 32 (3), 347–366. doi: 10.1177/0267658315626211
Williams, J.T., Darcy, I., & Newman, S.D. (2015). Modality-independent neural mechanisms for novel phonetic processing. Brain Research, 1620, 107–115. doi: 10.1016/j.brainres.2015.05.014
Williams, J.T., Darcy, I., & Newman, S.D. (2016). Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: a longitudinal study. Cortex, 75, 56–67. doi: 10.1016/j.cortex.2015.11.015
Zou, L., Ding, G., Abutalebi, J., Shu, H., & Peng, D. (2012). Structural plasticity of the left caudate in bimodal bilinguals. Cortex, 48 (9), 1197–1206.
Figure 1. Vocabulary task design.

Figure 2. Mean vocabulary scores at each time point averaged across all subjects.