
Anticipation processes in L2 speech comprehension: Evidence from ERPs and lexical recognition task*

Published online by Cambridge University Press:  29 July 2015

ALICE FOUCART*
Affiliation:
Center for Brain and Cognition, University Pompeu Fabra, carrer Roc Boronat, 138, 08018 Barcelona, Spain
ELISA RUIZ-TADA
Affiliation:
Center for Brain and Cognition, University Pompeu Fabra, carrer Roc Boronat, 138, 08018 Barcelona, Spain
ALBERT COSTA
Affiliation:
Center for Brain and Cognition, University Pompeu Fabra, carrer Roc Boronat, 138, 08018 Barcelona, Spain ICREA, Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
*
Address for correspondence: Dr. Alice Foucart, Universitat Pompeu Fabra, Department of Technology (room 55116), Roc Boronat, 138, 08018 Barcelona, Spain. alfoucart@gmail.com

Abstract

The present study investigated anticipation processes in L2 speech comprehension. French–Spanish late bilinguals were presented with highly constrained Spanish sentences. ERPs were time-locked to the article preceding the critical noun, which was muted to avoid overlapping effects. Articles that mismatched the gender of the expected noun triggered a negativity. A subsequent lexical recognition task revealed that words expected from the context were (falsely) recognised significantly more often than unexpected words, even though all were muted. Overall, the results suggest that anticipation processes are at play during L2 speech processing and allow a memory trace of a word to be created prior to its presentation.

Type
Research Notes
Copyright
Copyright © Cambridge University Press 2015 

Speech processing appears to be easy and automatic in a first language (L1), but it becomes more challenging in a second language (L2). Many studies have shown that (late) bilinguals encounter difficulties at several levels when processing L2 speech. For example, lexical access can be altered by phoneme perception (Pallier, Colomé & Sebastián-Gallés, 2001), speech segmentation is affected by L1 routines (Cutler, Mehler, Norris & Segui, 2002) and speech-in-noise comprehension is reduced in L2 compared to L1 (Hervais-Adelman, Pefkou & Golestani, 2014). These phonological and acoustic difficulties, in addition to the speed at which speech unfolds, render listening comprehension less fluent and more effortful in L2. To compensate, late bilinguals seem to develop strategies such as relying more on visual cues than native speakers do (Navarra & Soto-Faraco, 2007; Wang, Xiang, Vannest, Holroyd, Narmoneva, Horn, Liu, Rose, deGrauw & Holland, 2011). Similarly, given that disfluent comprehension can affect anticipation processes (Corley, MacGregor & Donaldson, 2007; MacGregor, Corley & Donaldson, 2010), the effort of comprehending L2 speech, which makes L2 comprehension less fluent than L1 comprehension, may reduce (or even suppress) processes that would normally take place during L1 sentence listening, such as anticipation. In the present paper we investigate whether late bilinguals can make online use of the sentence context to anticipate words during sentence listening.

Previous studies have revealed sentence context effects in late bilinguals, reflected by facilitated word recognition (e.g., Chambers & Cooke, 2009; Fitzpatrick & Indefrey, 2009; Lagrou, Hartsuiker & Duyck, 2012). For example, Chambers and Cooke (2009) presented English–French bilinguals with a display containing inter-lingual competitors (e.g., ‘chicken’ [‘poule’ /pul/ in French] and ‘pool’ /pul/) and unrelated distracters. Eye-movement recordings revealed that bilinguals fixated the inter-lingual competitor more when it was a possible continuation of the preceding sentence context (e.g., ‘Marie va décrire la poule’, Marie will describe the chicken) than when it was not (e.g., ‘Marie va nourrir la poule’, Marie will feed the chicken). Similarly, Fitzpatrick and Indefrey (2010) concluded from an ERP study that cross-lingual effects are reduced in semantically constraining contexts. Their results showed a similar N400 component (an ERP component reflecting semantic integration difficulty) for semantically incongruent critical words, independently of whether the L1 translation of the L2 word had initial phonological overlap or not (e.g., ‘My Christmas present came in a bright-orange doughnut’; initial overlap with ‘doos’, meaning ‘box’ in Dutch). These conclusions are consistent with more recent findings from Lagrou et al. (2012) showing that inter-lingual homophone effects are reduced in semantically constrained sentences.

These studies converge in showing that word recognition is facilitated by the use of sentence context in L2 sentence listening. However, they do not allow us to determine whether sentence context facilitates word integration or word anticipation. As revealed by the ERP literature in L1, words that are expected from the sentence context elicit a reduced N400 effect (Kutas & Hillyard, 1980). This reduction can reflect facilitated integration of a word upon its presentation because it matches the semantic network activated by the sentence context (Kuperberg, Paczynski & Ditman, 2011; Myers & O’Brien, 1998; Paczynski & Kuperberg, 2012); alternatively, it can reflect active processes that allow the anticipation of upcoming words prior to their presentation (DeLong, Urbach & Kutas, 2005; Neely, 1977; see Lau, Holcomb & Kuperberg, 2013, for an extensive discussion of the debate). Importantly, integration and anticipation processes are not incompatible, and most probably both take place during sentence comprehension. Nevertheless, comprehenders may rely more on one or the other depending on the complexity of the comprehension situation. One might argue that the simple fact of processing an L2 already puts late bilinguals in a complex linguistic situation and may already affect anticipation processes. This assumption was ruled out by a recent visual-modality study showing that late bilinguals are able to anticipate upcoming words, at least in sentence reading (Foucart, Martin, Moreno & Costa, 2014; but see Martin, Thierry, Kuipers, Boutonnet, Foucart & Costa, 2013).
As mentioned earlier, however, listening to sentences in L2 does create a complex linguistic situation that affects comprehension fluency. Hence, the use of anticipation processes might be reduced (or absent) during speech processing. This does not mean that anticipation mechanisms are not available in L2, but rather that late bilinguals might not be able to apply them in complex situations. To test this hypothesis we conducted the same ERP experiment used by Foucart, Ruiz-Tada and Costa (2015).

In their study, Foucart et al. investigated word anticipation in L1 speech comprehension. They presented highly constrained spoken Spanish sentences to native speakers and time-locked the ERPs to the article preceding the critical noun. Importantly, the critical noun was muted to avoid overlapping effects. The results revealed an early (200–280 ms) and a late negativity (450–900 ms) for articles that mismatched the gender of the expected noun. The authors took the modulation of the magnitude of the N400 as evidence that anticipation processes are at play during L1 speech processing. In addition, a lexical recognition task conducted after the listening task revealed that, although both ‘expected’ and ‘unexpected’ words were muted during the listening phase, ‘expected’ words were (falsely) recognised significantly more often than ‘unexpected’ words, and as often as ‘old’ words that were actually presented. The authors suggested that anticipation processes allow a memory trace of a word to be created prior to its presentation.

The present study

In the present study we conducted the same listening and lexical recognition tasks as in Foucart et al. (2015) to obtain two indices of anticipation processes in L2. The main index of interest is the presence of anticipation processes during online speech processing, examined using ERPs (the listening task). The secondary index is the consequence of anticipation processes for lexical recognition. We hypothesised that, if late bilinguals anticipate words during listening comprehension, a larger N400-like component should be observed for articles whose gender mismatches that of the expected noun. On the other hand, if anticipation processes are reduced (or absent) during speech processing, no differences should be observed. On a secondary level, the results of the lexical recognition task should indicate whether anticipation processes in L2 allow the creation of a memory trace of the (muted) expected word, as in L1. If this is indeed the case, late bilinguals should (falsely) assess expected words as previously heard more often than unexpected words. On the other hand, if they rely more on integration processes during listening comprehension, expected words should be treated as ‘new’ words.

Method

Participants

Twenty-two French–Spanish late bilinguals (16 females) took part in the experiment (participants’ details are reported in Table 1). They had learned Spanish at school and lived in Spain. To participate, they were required to pass a language test (B2 level of the Common European Framework). Participants were also asked to self-assess their proficiency in Spanish for written/oral comprehension and production. They received oral and written information about the procedure, and written consent was obtained from each participant.

Table 1. Participants’ details (N = 22)

The essential information regarding the method is reported here; full details are available in the Supplementary Material.

Materials and design

Fifty-two highly constrained sentences were designed in Spanish; each had two possible noun outcomes: an expected noun and an unexpected noun (a total of 104 experimental sentences). They were designed so that the expected and unexpected nouns differed in gender (e.g., el tesoro [masculine] (the treasure) vs. la gruta [feminine] (the cave); see Table 2 for example sentences). Another 52 sentences were added as fillers; they contained nouns matched in frequency and length to the experimental nouns, which were presented as ‘old’ words in the recognition task. In each experimental sentence, the noun was completely muted for 500 ms after article offset, after which the sentence resumed normally until the end.

Table 2. Example of sentences

Listening phase

Procedure

Participants’ brain activity was recorded while they listened to the sentences and answered yes–no comprehension questions (after a third of the sentences) designed to ensure full attention (94% correct answers; the answers were not further analysed). They were told they would be asked questions about the sentences after the listening phase (it was not specified that this would be a lexical recognition task).

EEG recording and data analysis

Visual inspection of the grand mean (Figure 1) revealed a long-lasting effect starting around 280 ms and lasting until 680 ms. This negativity was confirmed by 2 ms-by-2 ms paired t-tests (run from 0 to 1000 ms) on the difference between the Expected and Unexpected conditions. Unstable differences (remaining below p = .05 for less than 30 ms) were discarded (Rugg, Doyle & Wells, 1995). Analyses of variance (ANOVAs) were therefore conducted on the 280–680 ms time-window with the factors Expectation (expected vs. unexpected critical article) and Region, defined as Frontal (F7, F3, FC5, FC1, Fz, F8, F4, FC6, FC2), Central (T3, C3, CP5, CP1, Cz, T4, C4, CP6, CP2) and Parietal (T5, P3, P1, O1, Pz, O2, P2, T6, P4)¹. The Greenhouse-Geisser correction (Greenhouse & Geisser, 1959) was applied to all repeated measures with more than one degree of freedom; the corrected p-value is reported.
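The running t-test procedure described above (pointwise paired t-tests every 2 ms, with significant runs shorter than 30 ms discarded) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, array layout and two-sided test are our own assumptions:

```python
import numpy as np
from scipy.stats import ttest_rel

def stable_difference_windows(expected, unexpected, step_ms=2, min_len_ms=30, alpha=0.05):
    """Pointwise paired t-tests between two conditions sampled every `step_ms`.

    expected, unexpected: arrays of shape (n_subjects, n_timepoints),
    e.g. per-subject ERP amplitudes from 0 to 1000 ms in 2 ms steps.
    Returns (start_ms, end_ms) windows where p < alpha continuously for at
    least `min_len_ms`; shorter runs are discarded, following the stability
    criterion of Rugg, Doyle & Wells (1995).
    """
    _, p = ttest_rel(expected, unexpected, axis=0)   # one p-value per timepoint
    sig = p < alpha
    windows, start = [], None
    for i, s in enumerate(np.append(sig, False)):    # trailing False closes a final run
        if s and start is None:
            start = i                                # a significant run begins
        elif not s and start is not None:
            if (i - start) * step_ms >= min_len_ms:  # keep only stable runs
                windows.append((start * step_ms, i * step_ms))
            start = None
    return windows
```

On data with a strong, sustained condition difference between 280 and 680 ms, the function returns a single window covering roughly that range, mirroring the time-window the authors then submitted to the ANOVA.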

Figure 1. ERP grand average. Left panel: Event-related potential results for the critical article of the sentence. Time zero indicates the presentation of the article. Black lines depict ERPs measured for expected articles; dotted lines depict ERPs measured for unexpected articles. ERPs measured over single channels at midline sites (Fz, Cz, Pz) and averaged channels at Frontal left (F7, F3, FC5, FC1), Frontal right (F8, F4, FC6, FC2), Central left (T3, C3, CP5, CP1), Central right (T4, C4, CP6, CP2), Parietal left (T5, P3, P1, O1) and Parietal right (O2, P2, T6, P4) regions. The grey rectangle represents the time-window analysed. Negativity is plotted up. Right panel: Topographic distribution of the difference between the expected and unexpected conditions across the 280–680 ms time-window.

Lexical recognition task

Following the listening phase, participants were presented with the 52 ‘expected’ words, 52 ‘unexpected’ words, 52 ‘old’ words (from the filler sentences) and 52 ‘new’ words (matched in length and frequency with the words from the other conditions). Participants were presented with all the words, independently of the list they received in the listening phase (see Table 3). This was done to examine whether the presentation of an unexpected article would reduce the memory trace of a word expected from the context; for example, looking at Table 3, whether the word “tesoro” would be expected to the same extent from sentence Context 1 (expected article) as from Context 2 (unexpected article). Participants were instructed to indicate whether they had heard the word during the listening phase using the yes–no keys. To balance the number of yes and no answers, an extra 52 words were selected from the sentences heard in the listening phase (not analysed).

Table 3. Design of the lexical recognition task in relation to the sentences heard in the listening phase.

*The pirate had the secret map, but he never found the treasure/the cave he was looking for.

*I need to buy my ticket to go to London.

Results

EEG

The ANOVA revealed significant main effects of Expectation (F(1, 21) = 6.66, p = .02) and Region (F(2, 42) = 25.73, p < .001). These two factors did not interact with each other (F(2, 42) = 0.63, p = .53). These results indicate that unexpected articles generated a significantly more negative deflection than expected articles.

Lexical recognition task

Table 4 reports the results in terms of hits and false alarms for each condition (only 26 of the 52 stimuli in the ‘Old’ and ‘New’ conditions were randomly selected for the analyses, to balance the number of items across conditions). Analyses (t-tests) were conducted on the percentage of words assessed as not heard during the listening phase (i.e., ‘misses’ for the ‘old’ condition and ‘correct rejections’ for the other conditions). Given that we had clear predictions, the analyses (reported in Table 5) were conducted only on the conditions of interest. We first took the ‘old’ vs. ‘new’ comparison as the baseline (since these were the only conditions contrasting words actually heard during the listening phase against new words). This comparison confirmed that participants were not performing the task at chance (d′ = 1.01): they correctly assessed ‘new’ words as not heard more often than ‘old’ words. Further analyses confirmed our predictions: participants falsely assessed (muted) ‘Expected’ words as heard significantly more often than ‘Unexpected’ words. Furthermore, ‘Expected’ words were falsely considered ‘old’ words more often than ‘Unexpected’ words. Finally, ‘Expected’ words preceded by sentence Context 2 (unexpected article) were assessed as not heard significantly more often than when preceded by Context 1 (expected article), suggesting that the presence of an unexpected article (Context 2) may reduce the context effects on upcoming words. More concretely, the expectation of the word ‘tesoro’ (treasure) is reduced when the sentence context contains the article ‘la’ (instead of ‘el’), since in this case ‘tesoro’ becomes an impossible continuation of the sentence.
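The paper reports d′ = 1.01 for the ‘old’ vs. ‘new’ baseline but does not spell out the computation. Conventionally, d′ = z(hit rate) − z(false-alarm rate); the sketch below is a hedged illustration of that standard formula (the function name and the 1/(2n) correction for extreme rates are our own assumptions, not taken from the paper):

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, n=26):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Rates of exactly 0 or 1 are pulled in by a 1/(2n) correction so
    that the inverse-normal transform stays finite (n = items per
    condition; 26 'old'/'new' items were analysed per condition here).
    """
    def clamp(rate):
        return min(max(rate, 1 / (2 * n)), 1 - 1 / (2 * n))
    # norm.ppf is the inverse of the standard normal CDF (the z-transform)
    return norm.ppf(clamp(hit_rate)) - norm.ppf(clamp(fa_rate))
```

For example, equal hit and false-alarm rates yield d′ = 0 (chance performance), while a higher hit rate than false-alarm rate yields a positive d′.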

Table 4. Results of the lexical recognition task presented in terms of percentages of hits and false alarms for each condition. Standard deviations are reported in parentheses.

Table 5. Results of the comparisons of the relevant conditions of the lexical recognition task; t- and p-values are reported (df = 21).

Discussion

We report an ERP study examining whether anticipation processes take place during L2 speech comprehension. We hypothesised that late bilinguals may not be able to anticipate upcoming words when listening to sentences, due to the difficulty of processing speech in L2. We presented French–Spanish late bilinguals with highly constrained spoken sentences in Spanish, each triggering an expected noun. We time-locked the ERPs to the article preceding the critical noun, which was muted to avoid overlapping effects. Results revealed a long-lasting negativity (280–680 ms), larger for unexpected than for expected articles. A subsequent lexical recognition task examined whether anticipation processes allow a memory trace to be created for a word that is never actually heard. The results indicated that, although both ‘Expected’ and ‘Unexpected’ nouns were muted during the listening phase, ‘Expected’ nouns were (falsely) recognised as having been heard significantly more often than ‘Unexpected’ nouns. The implications of these results are discussed below.

The main contribution of the present study is the ERP data associated with expected and unexpected articles. This observation is in line with the L1 literature in the visual modality (DeLong et al., 2005; Foucart et al., 2014; Martin et al., 2013; Otten, Nieuwland & Van Berkum, 2007; Van Berkum, Brown, Zwitserlood, Kooijman & Hagoort, 2005; Wicha, Moreno & Kutas, 2003, 2004) and the auditory modality (Otten et al., 2007; Van Berkum et al., 2005; Wicha, Bates, Moreno & Kutas, 2003), as these studies similarly reveal a modulation of the ERP component in response to unexpected items². Importantly, the present N400-like effect converges with the effect observed in L2 sentence reading (Foucart et al., 2014; but see Martin et al., 2013), with the only difference that the present effect had a slightly earlier onset (as usually found in speech comprehension; Van Petten, Coulson, Rubin, Plante & Parks, 1999) and lasted longer (up to 680 ms instead of 500 ms). A long-lasting negativity was also reported in native speakers with the same material and procedure (Foucart et al., 2015), so it could be attributed to the experimental design.
Note that the long-lasting negativity observed here could also be the combination of two components (an early and a later negativity), with the first negativity reflecting a conflict at the phonological level related to the expected word, followed by a delayed N400-like effect (see Foucart et al., 2015, for a similar proposal). The study does not allow us to tease apart these two alternatives. Importantly, the ERP results suggest that, although speech processing is more difficult than written processing, late bilinguals are able to use the sentence context incrementally to predict upcoming words and their features. Consequently, this implies that similar processes take place during L1 and L2 online speech comprehension.

The other contribution of the study is the effect of anticipation processes on lexical recognition. As mentioned above, the lexical recognition task following the main experiment was designed to explore the recollection of words that were predicted but never heard. The results show that when a word was expected from the context it was (falsely) recognised significantly more often than an unexpected word, even though both were muted. Moreover, ‘Expected’ words were (again falsely) considered ‘old’ words (that were actually heard) more often than ‘Unexpected’ words. Finally, the results also suggest that the presence of an unexpected article in the sentence context reduces context effects on upcoming words (‘Expected’ words were not recognised to the same extent depending on whether they were preceded by Context 1 or Context 2). These results are very similar to those observed in native speakers (Foucart et al., 2015) and imply that, in L2 speech comprehension as in L1, anticipation processes allow a memory trace of a word to be created prior to its presentation. In other words, upcoming words are somehow ‘hallucinated’, and the L2 word recognition system benefits from lexical pre-activation. These results are also evidence that, although speech comprehension is effortful, late bilinguals do not rely only on integration processes during speech comprehension.

This proposal is in line with theories of perception and sensory predictability which claim that top-down information is used to generate a prediction and bottom-up information is used to detect whether the prediction is correct (Arnal & Giraud, 2012; Friston, 2005; Wacongne, Changeux & Dehaene, 2012). Applying this reasoning to speech perception, we can assume that late bilinguals, like native speakers, use anticipation processes to create a template of an upcoming word (top-down). If the actual input (bottom-up) is not presented, as was the case here since the word was muted, only the template remains, creating a ‘hallucination’ of the word (see, for example, SanMiguel, Widmann, Bendixen, Trujillo-Barreto & Schröger, 2013).

Overall, our findings complement those previously reporting context effects in L2 (Chambers & Cooke, 2009; Fitzpatrick & Indefrey, 2009; Lagrou et al., 2012), as they show that anticipation processes are at play during L2 speech comprehension. This suggests that the reported facilitation of word recognition is due not only to integration but also to anticipation. Furthermore, our results also indicate that anticipation processes allow a word to be ‘hallucinated’, which further supports the claim that late bilinguals do not rely only on integration processes in complex linguistic situations (here, listening). The results support a top-down view of L2 speech processing.

Supplementary Material

For supplementary material accompanying this paper, visit http://dx.doi.org/10.1017/S1366728915000486

Footnotes

*

This work was supported by grants from the Spanish Government (PSI2011-23033, CONSOLIDER-INGENIO2010 CSD2007-00048, ECO2011-25295, and ECO2010-09555-E), from the Catalan Government (SGR 2009-1521), from the 7th Framework Programme (AThEME 613465) and from the Grup de Recerca en Neurociència Cognitiva (GRNC) -2014SGR1210.

1 An analysis including the factors Expectation, Region and Hemisphere (Frontal left: F7, F3, FC5, FC1; Frontal right: F8, F4, FC6, FC2; Central left: T3, C3, CP5, CP1; Central right: T4, C4, CP6, CP2; Parietal left: T5, P3, P1, O1; Parietal right: O2, P2, T6, P4) revealed that the factor Expectation did not interact with Hemisphere (Expectation × Hemisphere: F(1, 21) = 0.20, p = .65; Expectation × Region × Hemisphere: F(2, 42) = 1.16, p = .32).

2 The ERP modulation has mainly been reported as a negativity, but a positivity has also been observed. The origin of this difference in polarity is still unclear, but the important point is that all studies report an ERP modulation for words inconsistent with the prediction.

References

Arnal, L. H., & Giraud, A.-L. (2012). Cortical oscillations and sensory predictions. Trends in Cognitive Sciences, 16(7), 390–398. doi:10.1016/j.tics.2012.05.003
Chambers, C. G., & Cooke, H. (2009). Lexical competition during second-language listening: Sentence context, but not proficiency, constrains interference from the native lexicon. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(4), 1029–1040. doi:10.1037/a0015901
Corley, M., MacGregor, L. J., & Donaldson, D. I. (2007). It's the way that you, er, say it: Hesitations in speech affect language comprehension. Cognition, 105(3), 658–668. doi:10.1016/j.cognition.2006.10.010
Cutler, A., Mehler, J., Norris, D., & Segui, J. (2002). The syllable's differing role in the segmentation of French and English. In G. Altmann (Ed.), Psycholinguistics: Critical concepts in psychology (pp. 115–135). London: Routledge.
DeLong, K. A., Urbach, T. P., & Kutas, M. (2005). Probabilistic word pre-activation during language comprehension inferred from electrical brain activity. Nature Neuroscience, 8(8), 1117–1121. doi:10.1038/nn1504
Fitzpatrick, I., & Indefrey, P. (2009). Lexical competition in nonnative speech comprehension. Journal of Cognitive Neuroscience, 22, 1165–1178.
Foucart, A., Martin, C. D., Moreno, E. M., & Costa, A. (2014). Can bilinguals see it coming? Word anticipation in L2 sentence reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, 1461–1469.
Foucart, A., Ruiz-Tada, E., & Costa, A. (2015). How do you know I was about to say “book”? Anticipation processes affect speech processing and lexical recognition. Language, Cognition and Neuroscience, 30(6), 768–780. doi:10.1080/23273798.2015.1016047
Friston, K. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 360(1456), 815–836. doi:10.1098/rstb.2005.1622
Greenhouse, S. W., & Geisser, S. (1959). On methods in the analysis of profile data. Psychometrika, 24, 95–112.
Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1–6. doi:10.1016/j.bandl.2014.01.009
Kuperberg, G. R., Paczynski, M., & Ditman, T. (2011). Establishing causal coherence across sentences: An ERP study. Journal of Cognitive Neuroscience, 23(5), 1230–1246. doi:10.1162/jocn.2010.21452
Kutas, M., & Hillyard, S. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207, 203–205.
Lagrou, E., Hartsuiker, R. J., & Duyck, W. (2012). The influence of sentence context and accented speech on lexical access in second-language auditory word recognition. Bilingualism: Language and Cognition, 16(3), 508–517. doi:10.1017/S1366728912000508
Lau, E. F., Holcomb, P., & Kuperberg, G. R. (2013). Dissociating N400 effects of prediction from association in single-word contexts. Journal of Cognitive Neuroscience, 25(3), 484–502. doi:10.1162/jocn_a_00328
MacGregor, L. J., Corley, M., & Donaldson, D. I. (2010). Listening to the sound of silence: Disfluent silent pauses in speech have consequences for listeners. Neuropsychologia, 48(14), 3982–3992. doi:10.1016/j.neuropsychologia.2010.09.024
Martin, C. D., Thierry, G., Kuipers, J.-R., Boutonnet, B., Foucart, A., & Costa, A. (2013). Bilinguals reading in their second language do not predict upcoming words as native readers do. Journal of Memory and Language, 69(4), 574–588. doi:10.1016/j.jml.2013.08.001
Myers, J. L., & O’Brien, E. J. (1998). Accessing the discourse representation during reading. Discourse Processes, 26, 131–157.
Navarra, J., & Soto-Faraco, S. (2007). Hearing lips in a second language: Visual articulatory information enables the perception of second language sounds. Psychological Research, 71(1), 4–12. doi:10.1007/s00426-005-0031-5
Neely, J. H. (1977). Semantic priming and retrieval from lexical memory: Roles of inhibitionless spreading activation and limited-capacity attention. Journal of Experimental Psychology: General, 106, 226–254.
Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8, 89. doi:10.1186/1471-2202-8-89
Paczynski, M., & Kuperberg, G. R. (2012). Multiple influences of semantic memory on sentence processing: Distinct effects of semantic relatedness on violations of real-world event/state knowledge and animacy selection restrictions. Journal of Memory and Language, 67(4), 426–448. doi:10.1016/j.jml.2012.07.003
Pallier, C., Colomé, A., & Sebastián-Gallés, N. (2001). The influence of native-language phonology on lexical access: Exemplar-based versus abstract lexical entries. Psychological Science, 12(6), 445–449. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11760129
Rugg, M. D., Doyle, M. C., & Wells, T. (1995). Word and nonword repetition within- and across-modality: An event-related potential study. Journal of Cognitive Neuroscience, 7(2), 209–227. doi:10.1162/jocn.1995.7.2.209
SanMiguel, I., Widmann, A., Bendixen, A., Trujillo-Barreto, N., & Schröger, E. (2013). Hearing silences: Human auditory processing relies on preactivation of sound-specific brain activity patterns. The Journal of Neuroscience, 33(20), 8633–8639. doi:10.1523/JNEUROSCI.5821-12.2013
Van Berkum, J. J. A., Brown, C. M., Zwitserlood, P., Kooijman, V., & Hagoort, P. (2005). Anticipating upcoming words in discourse: Evidence from ERPs and reading times. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(3), 443–467. doi:10.1037/0278-7393.31.3.443
Van Petten, C., Coulson, S., Rubin, S., Plante, E., & Parks, M. (1999). Time course of word identification and semantic integration in spoken language. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25(2), 394–417. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10093207
Wacongne, C., Changeux, J.-P., & Dehaene, S. (2012). A neuronal model of predictive coding accounting for the mismatch negativity. The Journal of Neuroscience, 32(11), 3665–3678. doi:10.1523/JNEUROSCI.5003-11.2012
Wang, Y., Xiang, J., Vannest, J., Holroyd, T., Narmoneva, D., Horn, P., Liu, Y., Rose, D., deGrauw, T., & Holland, S. (2011). Neuromagnetic measures of word processing in bilinguals and monolinguals. Clinical Neurophysiology, 122(9), 1706–1717. doi:10.1016/j.clinph.2011.02.008
Wicha, N. Y. Y., Bates, E. A., Moreno, E. M., & Kutas, M. (2003). Potato not Pope: Human brain potentials to gender expectation and agreement in Spanish spoken sentences. Neuroscience Letters, 346(3), 165–168. doi:10.1016/S0304-3940(03)00599-8
Wicha, N. Y. Y., Moreno, E. M., & Kutas, M. (2003). Expecting gender: An event related brain potential study on the role of grammatical gender in comprehending a line drawing within a written sentence in Spanish. Cortex, 39(3), 483–508. Retrieved from http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3392191&tool=pmcentrez&rendertype=abstract
Wicha, N. Y. Y., Moreno, E. M., & Kutas, M. (2004). Anticipating words and their gender: An event-related brain potential study of semantic integration, gender expectancy, and gender agreement in Spanish sentence reading. Journal of Cognitive Neuroscience, 16(7), 1272–1288. doi:10.1162/0898929041920487