Introduction
Language differentiation of bilingual children has long been a focus of bilingualism research. Two opposing views have been presented: the unitary language system hypothesis of Volterra and Taeschner (1978), which argues that children first have a unitary vocabulary and grammar and that the linguistic systems are differentiated only around the age of three years, and the opposing dual language system hypothesis, proposed by Genesee (1989), positing that children acquiring two languages from birth establish two separate linguistic systems from the onset of acquisition. Since then, an increasing volume of evidence has supported the dual language hypothesis by showing bilingual children's language differentiation in phonology (Deuchar & Quay, 2000; Paradis, 2001), lexicon (Pearson, Fernández & Oller, 1995), and syntax (Meisel, 2001; Paradis & Genesee, 1996). Although the topic is less studied, some research on pragmatic language differentiation has indicated that bilingual children can use their languages differentially and appropriately according to the language of their interlocutor by the age of two years (Comeau, Genesee & Lapaquette, 2003; Genesee, Boivin & Nicoladis, 1996; Genesee, Nicoladis & Paradis, 1995; Montanari, 2009; Nicoladis & Genesee, 1996). An interesting and important question, however, is whether children can already show the ability to differentiate their languages pragmatically during the early phases of language development, when they are moving from the prelinguistic stage toward production of their first words and signs.
In this study we addressed the question of early pragmatic differentiation by examining the unique population of hearing children of Deaf parents (KODA; Kids of Deaf Adults). KODA children provide an interesting setting for studying language use in different contexts, since their bilingual language acquisition is bimodal. They simultaneously acquire a sign language in the visual–gestural modality and a spoken language in the auditory–vocal modality; hence, compared with bilingual children acquiring two spoken languages, it is easier to identify the communication mode(s) (manual, mixed or vocal) a KODA child is using and the language(s) he/she is producing in each utterance. In our study we especially targeted the use of gestures when studying the early pragmatic differentiation of KODA children, since gestures are intertwined with the structures of sign language but generally used only to support the content of spoken language (Goodwyn & Acredolo, 1998; Liddell, 2003). We were interested in whether early pragmatic differentiation in KODA children can be detected not only in the number of signs and words produced but also in the way these children use gestures with different interlocutors. The research literature provides some indications that KODA children may use gestures differently and in more diverse ways than children who acquire spoken languages (Capirci, Iverson, Montanari & Volterra, 2002; Morgenstern, Caët & Collombel-Leroy, 2010), but prior publications have been case studies that have not targeted this issue from the perspective of pragmatic differentiation. More research on this topic is therefore needed.
As bimodal bilingualism enables one to study pragmatic differentiation in the very early phases of language development, the present study can provide important information on whether children under the age of two can already show abilities to use items from their two languages differently, depending on the context. We were also interested in the sophistication of these skills.
Bilingual children's sensitivity to the language of their interlocutor
As mentioned above, previous investigators have suggested that children growing up bilingual show pragmatic language differentiation before their second birthday. That is, at that age they are generally able to use their two languages differentially and appropriately according to the language of their conversational partner (Comeau et al., 2003; Deuchar & Quay, 2000; Genesee et al., 1995, 1996; Nicoladis & Genesee, 1996). To our knowledge, only Maneva and Genesee (2002) and Poulin-Dubois and Goodz (2001) have focused on infants growing up bilingual when exploring their sensitivity to the language use of their interlocutors in the pre-lexical stage. Maneva and Genesee (2002) reported on the development of a 10-to-14-month-old infant who produced different kinds of phonological patterns when babbling to his French-speaking father than when babbling to his English-speaking mother. In contrast, in a study of 13 bilingual infants (mean age 12 months), Poulin-Dubois and Goodz (2001) found that the infants most often babbled in their dominant language, and no differences were detected in their babbling with French-speaking as opposed to English-speaking interlocutors. This inconsistency in the evidence on pragmatic language differentiation is probably only apparent, as the two studies focused on different aspects of babbling. Nonetheless, the observation by Maneva and Genesee (2002) of pragmatic differentiation in a bilingual infant at the babbling stage deserves further investigation.
Although children may well show pragmatic language differentiation, it may at times be difficult to detect which language they are using, because the vocal expressions of children under two years of age are often restricted and unintelligible. Children may produce lexical items that cannot be judged as belonging to either of the languages being acquired (referred to as neutrals, e.g., by Petitto, Katerelos, Levy, Gauna, Tétreault & Ferraro, 2001), making it hard to detect which language they are using at each moment (Deuchar & Quay, 2000; Montanari, 2009; Petitto et al., 2001). Moreover, Montanari (2009) has emphasized that when a researcher aims to study the language choice of bilingual children to address the issue of pragmatic differentiation, it is necessary to examine whether the children studied have enough lexical resources to genuinely demonstrate appropriate language choice (see, e.g., Nicoladis & Secco, 2000, for an analysis of lexical and pragmatic differentiation). Lexical gaps in one of the languages being acquired are often found to be the reason for bilingual children's inappropriate language choice (Deuchar & Quay, 2000; Montanari, 2009; Nicoladis & Genesee, 1996). Deuchar and Quay (2000) showed that only after children have over 100 words in their productive vocabulary do they have enough lexical resources to demonstrate appropriate language choice.
Based on the research literature, it seems clear that it is difficult to study pragmatic differentiation in very young children who are acquiring two spoken languages. However, during the prelinguistic period – from around four to six months onward – typically developing infants already show sensitivity and accurate responsiveness to various affective signals and communication cues from their interactional partners (see Papoušek, 2012). It therefore appears important to describe whether pragmatic differentiation can already be detected during the early phases of bilingual language acquisition. To address this question we need to study languages that can be easily separated from each other. This advantage is obtained by studying KODA children, whose language acquisition is both bimodal and bilingual. Moreover, one is not restricted to the signs of a sign language and the words of a spoken language; it is also possible to study the differentiation and use of communication mode, including gestures and vocalizations, and hence investigate children's pragmatic differentiation during the prelinguistic period.
KODA children form a unique population for research on pragmatic differentiation of languages
Bilingual KODA children acquire linguistic units not only in two physically different modalities, but also in two languages that have many structural differences. In the following we first describe the modality and simultaneous use of two languages by KODA children to shed light on their unique value as subjects when studying early pragmatic differentiation. After that we concentrate on the different functions gestures serve in sign language compared with spoken language and how this knowledge could provide new perspectives in studying pragmatic differentiation of languages in bilingual children.
Clear modality difference between the languages being acquired
Language differentiation in KODA children has previously been studied, for example, by Petitto et al. (2001). In their case study, a KODA child already showed appropriate language use according to his interlocutor at the age of 11 months. He used spoken language in 59% (and signs in 29%) of the utterances he produced when interacting with a French-speaking researcher. When interacting with a signing interlocutor, he increased his use of sign language to 57% of his utterances and decreased his use of spoken language to 34%. The remaining utterances were mixed ones containing both signs and spoken words. The researchers noted that because of the clear modality difference between this child's languages, it was easy to detect the language he was producing in each utterance. Evidently, no neutrals emerged in his productions, which facilitated the analyses (Petitto et al., 2001, p. 471).
Simultaneous use of signs and words
Another unique feature of the language use of KODA children is that they are able to produce both signs and words at the same time. This type of language mixing has also been referred to as code-blending (Emmorey, Borinstein, Thompson & Gollan, 2008). Its frequent use by Deaf parents and KODA children has been reported in many studies (Kanto, Huttunen & Laakso, 2013; Mallory, Zingle & Schein, 1993; Petitto et al., 2001; Preston, 1994, p. 128; Van den Bogaerde & Baker, 2008; Wilhelm, 2008). In some studies, the high rate of language mixing used by Deaf parents or other interlocutors has been reflected in the KODA children, who have also been found to mix sign and speech frequently (Petitto et al., 2001; Van den Bogaerde & Baker, 2008). In addition to being frequent, the language mixing of KODA children has been observed to be highly systematic in nature; that is, it takes place in a consistent manner (Petitto et al., 2001). However, the language choice and mixing typical of Deaf parents and KODA children make it difficult for researchers to detect the children's differentiated language use according to the interlocutor.
Gestures in bimodal communication of KODA children
Gestures as a part of linguistic systems of signed languages
Unlike in spoken languages, gestures are fully integrated into the structure of sign language. It has been proposed that the production of pronouns, agreement verbs, and so-called classifier constructions includes both linguistic and gestural components. Thus, for example, the handshape of a pronoun (e.g., a personal pronoun) or an agreement verb is linguistically specified, but the sign's direction toward persons or objects and its location in signing space are regarded as representational gestural components (Emmorey & Herzig, 2003; Liddell, 2003). Additionally, by producing classifier constructions the signer can iconically describe the motions, locations, shapes and spatial relations of objects.
The developmental continuum from gesture to sign has been recognized both in the evolution of sign languages and in children's early sign language acquisition. In the latter, a developmental path can be traced from gestures to linguistic forms (Cheek, Cormier, Repp & Meier, 2001; Hoiting & Slobin, 2007; Volterra & Iverson, 1995). For children acquiring a spoken language, by contrast, gestures usually convey meaning only in conjunction with, or as substitutes for, spoken words, not as part of linguistic structures. Children acquiring sign language, in particular, have been shown to use gestures linguistically, as they conventionalize gestures and bring the use of signing space under linguistic control (Hoiting & Slobin, 2007; Petitto, 1988). Thus, gestures serve a different function in a conversation held in sign language than they do in spoken language.
Frequent and multifaceted use of gestures by KODA children
Gestures are regarded as the most important indicator of children's early intentional communication, and before the age of 20 months they are even seen to represent a larger proportion of children's communicative acts than words do (Goldin-Meadow, 2003). Thus, before children are able to produce words/signs or multiword/multisign utterances, they can use both the vocal and the manual modality to communicate in a symbolic manner (Capirci et al., 2002; Volterra, Iverson & Castrataro, 2005). Although gestures serve an important role in conveying messages for all children during the pre-linguistic period, their role and significance may be even greater for KODA children. One would expect a KODA child to use gestures more frequently than a child acquiring spoken language, as she/he also receives input in the manual modality more frequently than children of hearing parents do.
There is some research evidence supporting the notion that KODA children use a greater number of gestures than other children. Both Capirci et al. (2002) and Morgenstern et al. (2010) have reported extensive use of gestures and the manual modality by KODA children. For example, Capirci et al. (2002) noticed that, compared with monolingual children (N = 12) acquiring spoken language, the bilingual KODA child they studied between the ages of 11 and 29 months made greater and more varied communicative use of the manual modality. This was seen both in the use of so-called symbolic gestures (e.g., opening and closing the mouth to refer to a fish; extending and retracting the index finger to depict a snail) and in the production of sign, gesture, and word combinations. In line with Capirci et al. (2002), Morgenstern et al. (2010) noticed that the sign language (representing the manual modality) a child was acquiring affected his or her use of gestures from the onset of gesture use. They noted that the KODA child they studied from the age of 10 months to the age of 32 months started to point at an earlier age, and pointed more frequently throughout the data collection period, than the hearing monolingual child who was also followed.
Taken together, as gestures play such an important role in the communication of all young children and, at the same time, have such central significance in sign language and a function so different from that in spoken language, we were interested in investigating whether the pragmatic differentiation of KODA children could be seen not only in the linguistic items they produce but also in the way they use gestures with different interlocutors. We studied the gesture use of KODA children who were simultaneously acquiring Finnish Sign Language (FinSL), which has rich gestural properties, and spoken Finnish, which is generally viewed as being accompanied by infrequent gesture use (see Huttunen & Pine, 2012; Huttunen, Pine, Thurnham & Khan, 2013).
Research questions
The aim of the present study was to explore the pragmatic differentiation of languages in bimodal bilingual KODA children by studying their ability to modify their use of languages and communication modes (manual, mixed, vocal). Of these, we analyzed in more detail the use of gestures according to the interlocutor(s). By including gesture use in the analyses, we were able to take into account the large and significant proportion of pre-linguistic communicative acts in KODA children's productions. However, before addressing the question of pragmatic differentiation, we were interested in exploring KODA children's early lexical resources and investigating how their developing communicative potential in either FinSL or spoken Finnish is reflected in their use of signs or words with different interlocutors. To address these topics, the following research questions were set:
(i) How do one- to two-year-old KODA children's productive vocabularies develop in sign language and spoken language, as measured with the MCDI (MacArthur Communicative Development Inventory), and what is the association between the size of these vocabularies and the language KODA children use with interlocutor(s) who represent different languages?

(ii) How well are these KODA children able to accommodate their expressions to the language used by their interlocutor(s):

(a) in selecting communication mode(s) (manual, mixed, vocal) and, specifically, in their use of the manual communication mode (number of gestures, signs, and gesture–sign combinations)?

(b) in using gestures (number of different gesture types)?

(c) in using gestures jointly with vocal communication and sign language?
Method
Participants
Eight KODA children, described in Table 1, participated in this study. Families in which one or both parents were deaf were recruited by informing the Finnish Deaf community, and families interested in the research project contacted the first author. In five families – those of Heidi, Miina, Miisa, Ari, and Riina – one parent was hearing and the other deaf; in three families – those of Lauri, Paula, and Onni – both parents were deaf (their names are boldfaced from here on). All the parents had completed secondary education (either upper secondary school or vocational school). The Deaf parents were either native users of Finnish Sign Language or had started to use FinSL as a child, and they currently used it when communicating with other people. Only one of the five hearing parents was a fluent user of FinSL. All the Deaf parents except Miina's mother reported using exclusively sign language when communicating with their hearing child; however, during the video-recorded play sessions the Deaf parents of Ari and Onni mainly produced signs and spoken words simultaneously. All the children had regular and consistent exposure to both spoken Finnish and FinSL and were acquiring them simultaneously, either at home (five children had one hearing parent), in a day-care center (attended for six to eight hours per day, three to five days per week), or through regular meetings with hearing and deaf close relatives. During the data collection period, four of the children started full-time day care: Lauri at the age of 12 months, Paula and Heidi after they had turned 18 months, and Onni at the age of 24 months. Three children – Lauri, Paula, and Onni – had weekly contact with other Deaf people; Heidi, Miina, Miisa, Ari, and Riina met other Deaf people once every two weeks or less often.
Information about the children's language contacts was obtained through parental interviews conducted at the beginning of the data collection (see Kanto et al., 2013, for more detailed information about the linguistic environments of the children studied).
Table 1. Hearing status of the parents and language exposure of the KODA children studied.

a The child did not attend day care outside the home during the follow-up period.
b In the parental interview the parents reported using sign language when communicating with their hearing child. However, during the video-recorded play sessions the parents of these two children were observed to use simultaneous production of signs and spoken words.
All the children were observed as part of an ongoing longitudinal study – FinCODA – in which the same researcher (the first author) visited each family biannually from the time the hearing KODA child was 12 months old until the age of 36 months.
Procedure
The data were collected from parental questionnaires and three different sets of video-recorded play sessions at each longitudinal data collection point (when the child was aged 12, 18, and 24 months).
Parental questionnaires
Information about the children's productive vocabulary – separately for signs and for spoken words – was collected using the Finnish adaptation (Lyytinen, 1999) of the MacArthur Communicative Development Inventory (MCDI; Fenson, Dale, Reznick, Thal, Bates, Hartung, Pethick & Reilly, 1991). The Infant version of the MCDI (designed for ages 8 to 16 months) was used at the age of 12 months, and the Toddler version (designed for ages 16 to 30 months) at the ages of 18 and 24 months. The MCDI forms were sent to the parents two weeks before the researcher's visit, and the parents returned the completed forms during the home visit.
If anything was unclear about filling in the MCDI forms, the parents could discuss the form with the researcher and have it clarified. The MCDI has not yet been adapted for FinSL, so the same form was used to collect data on both Finnish and FinSL. However, a few changes and additions were made to the form (e.g., some signs related to Deaf culture were added). These modifications were partly based on the changes that Anderson and Reilly (2002) made when adapting the English version of the MCDI for American Sign Language (see also Woolfe, Herman, Roy & Woll, 2010). Making the noun–verb distinction, however, was a challenging task. Some FinSL signs resemble, in certain cases, two items included in the MCDI, and the structure of these morphological forms in FinSL is not known in detail. The child's production of these signs was therefore discussed with the parents. These signs were usually counted as two occurrences, but when the parent reported that the child used the sign for only one of the two possible grammatical functions (e.g., “write” or “pen”), the sign was counted only once. Additionally, a supplementary part was added to the Toddler version to collect data on the grammar of FinSL. If both parents of a child were deaf, the child's closest hearing adult (e.g., a grandparent or a person at the child's day-care center) filled in the part of the MCDI form concerning spoken Finnish.
Video-recorded play sessions
The eight children studied were video-recorded at home at the ages of 12, 18, and 24 months by a hearing researcher (the first author) who is a native user of both spoken Finnish and FinSL. The children had never met the researcher before the first home visit, nor did they have any contact with her outside the data collection sessions. The children saw the researcher using FinSL with their parents, so they were fully aware that she was bilingual in Finnish and FinSL. During the play sessions, the parents were asked to play with their child as they normally would. A standard set of books and toys suitable for the child's age was provided, but the adults and children were not restricted to playing with these.
The children's ability to differentiate their languages pragmatically was examined in three different situations at each data collection point; each home visit thus comprised three sets of video-recorded play sessions. During the first play session, the KODA child communicated with his or her deaf parent, and the child's language and gesture use was observed. The hearing researcher then joined the child and the deaf parent in play. The aim of this second condition was to observe the child's language use and communication in a situation where two adults (one deaf and one hearing) were simultaneously present; we focused especially on the child's ability to differentiate spoken Finnish and FinSL from each other and to switch between these languages on the fly. Finally, the child interacted alone with a hearing adult, most often the hearing researcher, who communicated with the child using solely spoken Finnish (naturally accompanied by some gestures). The goal of this third condition was to capture the child's patterns of language use with a hearing interlocutor.
Altogether 65 play sessions were video-recorded: 23 with the deaf parent, 19 with both the deaf parent and the hearing adult, and 23 with the hearing adult alone. Of the 42 play sessions in which the deaf parent was present, a deaf mother accompanied the child in 35 sessions and a deaf father in seven. Of the 42 sessions in which a hearing person was present, the hearing researcher was replaced by the child's hearing parent on five occasions (two with a hearing father and three with a hearing mother) because the child did not cooperate with the researcher. The most active five minutes with each interlocutor – the period during which the child produced the highest number of expressions and the longest utterances – were selected for analysis. This meant that altogether 15 minutes from each child at each data point were analyzed.
Coding
Each video-recorded session was transcribed (speech orthographically and signed communication using glosses, i.e., sign–word correspondences written in capital letters), coded, and analyzed by the same hearing researcher, who usually participated in two of the three play sessions at each data collection point. ELAN software (see, e.g., Lausberg & Sloetjes, 2009) was used in the analysis phase.
Coding of language
All communicative (intentional) vocalizations, speech, gestures and signs produced by the children, whether spontaneous or imitated, were analyzed. In intentional communication, the child directs his/her manual and/or vocal act toward the interlocutor(s) by using eye gaze, body orientation or physical contact and awaits a response from the adult, as evidenced by looking at the adult hesitantly or persisting in the communicative act (Sarimski, 2002, p. 489). A child's production was coded as containing a sign or a word if it was similar in form to the adult language and was used in an appropriate context, given the meaning of the adult sign or word (see Lyytinen, 1999; Vihman & McCune, 1994).
Some additional coding criteria were established for signs, because gestures and signs are produced in the same modality and children's early signs are known to be hard to separate from gestures (Bonvillian, Orlansky & Folven, 1990). In the present study, a communicative manual act was defined as a sign when it had at least one phonetic unit (place, orientation and/or handshape) that resembled the adult form of that sign in FinSL. If the nature of the motor act was not clear, it was defined as a sign if it occurred in the parents' sign language motherese. In motherese, Deaf parents simplify, for example, the phonological structure of a sign so that it is easier for a child to understand and produce. A manual act that resembled a sign in all its linguistic features (e.g., handshape, location, orientation and movement of the hand) but was not understood by the parents or the hearing researcher was defined as manual babbling (see, e.g., Petitto & Marentette, 1991). Communicative speech was coded as vocalization if it did not contain any intelligible words but was nonetheless produced for the purpose of intentional communication as described above.
Coding of the communication mode
All the children's output (gestures, signs, and speech) was divided into utterances. Following Iverson, Capirci, Longobardi and Caselli (1999), Petitto et al. (2001), and Van den Bogaerde and Baker (2008), by an utterance we mean any sequence of speech, signs, and/or gestures that is preceded and followed by silence, a change in intonation pattern, or a conversational turn. In the case of gestures and FinSL signs, silence refers to a pause in communicative motor actions both before and after the sequence defined as an utterance. The sequence also had to be intentionally communicative. All utterances were further grouped into three categories of communication mode: an utterance consisting of only gestures or signs was regarded as a manual utterance; an utterance containing only vocalization and/or spoken words was counted as a vocal utterance; and when gestures/signs and vocalization/words occurred either simultaneously or sequentially within the same utterance, the utterance was classified as mixed. The pre-linguistic and lexical constituents of the communication modes are presented in Figure 1.

Figure 1. Pre-linguistic and lexical constituents of different communication modes used in the analyses.
Two recordings from the Deaf parent–child play sessions, one at the age of 12 months and one at the age of 18 months, were randomly selected for reliability analysis of the segmentation of utterances and the classification of the communication mode(s) used. Inter-rater agreement on segmentation between the two coders (the first and third authors), working independently of each other, was 87%. The communication mode analyses were in almost perfect agreement: the inter-rater agreement for the child's productions was 94% at 12 months and 96% at 18 months. Inter-observer agreement exceeding 80% is generally considered sufficient.
Coding of gestures and different expression-type combinations
Gestures can be produced not only with the hands and arms but also with the head (like nodding), the legs (like kicking), or the whole body (change of posture, like moving away). Two criteria were used to ensure that a gesture was functioning as a communicative symbol (see Butcher, Mylander & Goldin-Meadow, Reference Butcher, Mylander and Goldin-Meadow1991; Goldin-Meadow & Mylander, Reference Goldin-Meadow and Mylander1984). First, when a hand movement was produced, it could not be a direct manipulation of some relevant person or object (i.e. it had to be empty-handed; Petitto, Reference Petitto and Kessel1988). No acts performed on objects were included, except when a child was showing (holding up an object to bring it to another person's attention) or giving objects, because these two acts serve the same communicative function as pointing. Secondly, a gesture was not marked if it was a ritual act (e.g. a game or blowing a kiss to someone) (Iverson & Goldin-Meadow, Reference Iverson and Goldin-Meadow2005).
All manual gestures produced by the children were counted. When a child extended a finger, multiple fingers, or a palm toward a referent, the act was considered as pointing. Because pointing gestures are fully integrated into the linguistic system of sign language, categorizing pointing into a non-linguistic or linguistic item is challenging (see Hoiting & Slobin, Reference Hoiting, Slobin, Duncan, Cassell and Levy2007). In the present study, our aim was mainly to observe the role of pointing in the interpersonal communication of bimodal bilingual children, so there was no need to classify pointing as being either linguistic or gestural. To be able to compare the children's gesture use with different interlocutors, we classified pointing as a gesture, although pointing produced by 24-month-old children may already function as a pronominal pointing sign (see e.g. Hatzopoulou, Reference Hatzopoulou2008; Hoiting & Slobin, Reference Hoiting, Slobin, Duncan, Cassell and Levy2007; Petitto, Reference Petitto1994).
After pointing gestures had been separated into a category of their own, the remaining gestures were classified into three categories (see Capirci, Contaldo, Caselli & Volterra, Reference Capirci, Contaldo, Caselli and Volterra2005; Iverson, Capirci, Volterra & Goldin-Meadow, Reference Iverson, Capirci, Volterra and Goldin-Meadow2008). Other deictic gestures refer to movements that indicate referents in the immediate environment. This category included showing, giving, and requesting. A gesture was regarded as showing when a child held up an object in an adult's line of sight, as giving when a child gave an object to an adult, and as requesting when a child extended his or her arm toward an object and expressed with an eye-gaze, body movement, repeated palm opening and closing, or vocalization that he or she wished an adult to give the object.
The second category of gestures – iconic gestures – included all gestures that conveyed symbolic meanings by directly referring to their semantic contents in terms of the gesture's handshape, type of movement, and place of articulation (e.g. flapping arms to represent a bird flying) (Capirci et al., Reference Capirci, Contaldo, Caselli and Volterra2005; Iverson et al., Reference Iverson, Capirci, Volterra and Goldin-Meadow2008). The third category included conventional gestures, the meanings of which are often culture-specific (like nodding or lifting a thumb).
We also analyzed how the children combined gestures, vocal communication, and signs within the same utterance. We categorized all utterances where the children combined (i) two gestures, (ii) a gesture or gestures with words/vocalization and/or signs, and (iii) signs and words/vocalization. This classification is presented and exemplified in Table 2. Altogether, seven different categories were found. A sequence was marked as a gesture+gesture combination if a child produced two different gesture types, e.g. pointing and an iconic gesture, within the same utterance, or two instances of the same gesture type, e.g. two pointing gestures, with two different referents.
Table 2. Classification of gesture, word, and sign combinations.
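Although Table 2 itself is not reproduced here, combination labels of the kind used in the text (e.g. gesture+gesture, gesture+sign, sign+word/vocalization) can be derived mechanically from an utterance's elements. A hypothetical sketch, under our own assumptions (not the paper's coding manual): words and vocalizations collapse into a single "word/vocalization" component, while a repeated gesture with a different referent is kept.

```python
# Hypothetical sketch of deriving a combination label from the ordered elements
# of one utterance. Assumptions of ours, not the paper's coding manual: words
# and vocalizations collapse into one 'word/vocalization' component, while a
# repeated gesture (e.g. two pointings with different referents) is kept.

def combination_label(elements):
    components = []
    for element in elements:
        comp = "word/vocalization" if element in ("word", "vocalization") else element
        # keep duplicate gestures, but merge repeated vocal material
        if comp != "word/vocalization" or comp not in components:
            components.append(comp)
    return "+".join(components)

print(combination_label(["gesture", "gesture"]))          # gesture+gesture
print(combination_label(["gesture", "word"]))             # gesture+word/vocalization
print(combination_label(["sign", "vocalization"]))        # sign+word/vocalization
print(combination_label(["gesture", "gesture", "sign"]))  # gesture+gesture+sign
```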

Statistical analyses
Spearman's rank correlation coefficients were calculated between the MCDI results and the children's production of signs and words during the video-recorded play sessions.
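As an illustration of this correlation analysis, a sketch using SciPy's `spearmanr` (a stand-in for the SPSS procedure used in the study; the vocabulary and session counts below are invented, not the study's data):

```python
# Illustrative sketch of the Spearman correlation analysis with SciPy (a
# stand-in for the SPSS procedure used in the study); all counts are invented.
from scipy.stats import spearmanr

mcdi_signs = [12, 45, 8, 60, 25, 33, 5, 50]   # hypothetical MCDI sign vocabularies
session_signs = [3, 20, 2, 28, 15, 9, 1, 22]  # hypothetical signs produced in play

rho, p = spearmanr(mcdi_signs, session_signs)
print(f"rho = {rho:.3f}, p = {p:.4f}")
```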
Differences by interlocutor in the children's use of spoken and sign language, communication mode(s) (manual, mixed, or vocal), different forms of the manual modality, and gesture use, together with gesture–sign–word/vocalization combinations, were tested with the Related-Samples Friedman's Two-Way Analysis of Variance by Ranks. The standard alpha level (p < .05) was adopted. Multiple comparisons were done with the Related-Samples Wilcoxon Signed Rank Test as a post hoc test, and the familywise type I error rate was controlled with Bonferroni correction (resulting in a significance level of p < .017 with three comparisons). All data collection points (at the children's ages of 12, 18, and 24 months) were usually pooled together. Statistical testing was done with SPSS Statistics for Windows software, version 21.0.
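The same testing pipeline (an omnibus Friedman test over the three interlocutor conditions, followed by pairwise Wilcoxon signed-rank tests at the Bonferroni-corrected level .05/3 ≈ .017) can be sketched with SciPy as a stand-in for SPSS; the sign counts below are invented for illustration:

```python
# Illustrative sketch of the testing pipeline with SciPy (a stand-in for SPSS):
# an omnibus Friedman test over the three interlocutor conditions, then pairwise
# Wilcoxon signed-rank tests at the Bonferroni-corrected level. Counts invented.
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

# hypothetical numbers of signs per child in the three play sessions
deaf_parent   = [18, 25, 10, 30, 12, 20, 8, 17]
both_adults   = [ 6, 10,  3, 12,  4,  8, 2,  5]
hearing_adult = [ 1,  2,  0,  1,  0,  1, 0,  2]

stat, p = friedmanchisquare(deaf_parent, both_adults, hearing_adult)
print(f"Friedman chi2 = {stat:.3f}, p = {p:.4f}")

alpha = 0.05 / 3  # Bonferroni correction for three pairwise comparisons (~.017)
conditions = {"Deaf parent": deaf_parent, "both adults": both_adults,
              "hearing adult": hearing_adult}
for (name_a, a), (name_b, b) in combinations(conditions.items(), 2):
    w, p_pair = wilcoxon(a, b)
    print(f"{name_a} vs {name_b}: p = {p_pair:.4f}, significant: {p_pair < alpha}")
```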
Results
Productive vocabularies and language use according to the interlocutor
In our first main research question we were interested in vocabulary development and the association between the KODA children's spoken and sign language vocabularies and these children's communicative behavior with adults in the play sessions. The sizes of the productive vocabularies of each child, measured with the MCDI, are presented in Table 3.
Table 3. KODA children's productive vocabularies in Finnish Sign Language and spoken Finnish, measured with the MCDI. Total vocabulary includes translation equivalents.

*Data were not obtained from Onni at this data point.
At the last data point (at the age of 24 months) six children (Lauri, Onni, Paula, Miisa, Heidi, and Ari) were reported by their parents to produce more signs than words, and two children (Miina and Riina) produced more spoken words than signs.
We then looked at the children's word and sign production in the play sessions by interlocutor type (Figure 2). Statistically significant differences were found in the children's use of signs depending on the language used by the interlocutor (F = 25.794, df = 2, p < .001). When all the ages (12, 18, and 24 months) were pooled together, the KODA children used signs significantly more when communicating with their Deaf parent (mean 17.6 signs) than when communicating with the Deaf parent and the hearing adult together (mean 6.2 signs; Z = −3.043, p = .002), or with the hearing adult alone (mean 0.7 signs; Z = −3.922, p < .001). Use of spoken language words followed the opposite trend; the KODA children produced a mean of 8.6 words when interacting with their Deaf parent, a mean of 30.7 words when communicating with the Deaf and the hearing adult, and a mean of 29.7 words when only the hearing adult was present. The number of words the 12- to 24-month-old KODA children produced for the hearing adult was significantly higher than the number of words produced for the Deaf parent (Z = 2.573, p = .010).

Figure 2. Mean number of spoken language words and FinSL signs the KODA children produced when playing with different interlocutors at the ages of 12, 18, and 24 months.
Spearman's rank correlation coefficients were calculated to determine the associations between the MCDI productive vocabulary scores – separately in spoken and sign language (equivalents included) – and the number of words and signs (not including imitated words and signs) the children produced during the two video-recorded play sessions with the Deaf parent or the hearing adult. The play session where both the Deaf parent and the hearing person were present was left out of these analyses, because in 15% of the expressions it was not possible to confirm to which adult the child directed his or her conversational turn. Additionally, to alleviate situation-specific factors (e.g. possible dominance of one adult over the other), we calculated the correlations using only play sessions with one adult at a time. When interpreting the correlations presented it should be kept in mind, as Table 1 shows, that the Deaf parent of three children (Onni, Miina, and Ari) used mainly simultaneous production of signs and spoken words during the video-recorded play sessions. First we analyzed the correlations by pooling the data together from all the time points – 12, 18, and 24 months.
Despite the varying competence in language production among the KODA children studied, fairly strong correlations existed between their productive vocabulary in FinSL or spoken Finnish and the language use of the children in the different play sessions. The size of their productive vocabulary in FinSL, measured with the MCDI, was found to correlate positively with the number of signs produced during the play session with the Deaf parent (r = .569, p = .005). The fact that the sign index gained during short video-recorded play sessions correlated significantly with the sign index gained from adapted MCDI forms suggests that both these measures were able to capture the true variance in children's early productive sign language vocabulary. This result also showed that the children were able to use their communicative potential in sign language with their Deaf parent. Additionally, the parent's language mixing was found to affect the results. When the data of the three children (Onni, Miina, and Ari) whose mothers simultaneously used signs and speech (language mixing) during the play sessions were left out, the correlation between the sign language vocabulary and the number of signs the remaining five children produced was even higher (r = .786, p = .001). However, it was interesting that when the interlocutor was a hearing person, no significant correlation was found between the productive sign language vocabulary and the number of signs produced during the play session. Thus, despite their competence in sign language, the children used only a few signs when interacting with the hearing person.
We also found that the size of the productive vocabulary in spoken language, measured with the MCDI, correlated strongly with the number of words produced during the play session with the hearing adult (r = .929, p < .001). The children studied were therefore shown to be able to fully exploit their productive spoken language vocabulary with the hearing interlocutor. Again, this result validated the use of relatively short play sessions in the data collection. Further, when a child was interacting with his or her Deaf parent, the productive vocabulary in spoken Finnish did not correlate significantly with the use of spoken language. Thus, in spite of their competence in spoken language, the children refrained from speaking when interacting with their Deaf parent.
We also wanted to study separately, at the different time points, the correlations between the sizes of the children's vocabularies and the number of signs and words they produced during the different play sessions. Our results revealed that when the interlocutor was hearing, the size of the children's spoken language vocabulary already correlated strongly with the number of words produced during the play sessions at the ages of 12 months (r = .905, p = .005) and 18 months (r = .970, p < .001). Spoken language was thus in effective use very early.
Taken together, these results showed that the short play sessions managed to capture the children's ability in both sign language and spoken language, and their use of their language skills appeared to be dependent on the language of their interlocutor. This observation gave an indication on the children's early ability to show pragmatic differentiation of the languages they are acquiring. Next we continued to investigate in a more detailed manner how these children accommodated their expressions, including pre-lexical elements (gestures and vocalizations), as a function of the interlocutor type.
Ability to accommodate expressions according to the interlocutor(s)
In our second research question we wanted to explore KODA children's ability to accommodate their expressions from the viewpoint of communication mode(s) (including a more refined analysis of the use of gestures) as a function of the language used by their interlocutor(s) at 12, 18, and 24 months of age.
Selection of communication mode(s)
We first analyzed how the children used the manual, mixed, and vocal modes of communication with different interlocutor types (see Figure 3). The pattern of using different communication modes with each interlocutor type was seen at all the video-recorded time points, but the preference for the manual modality with the Deaf parent and the vocal modality with the hearing adult was most distinct at the age of 24 months. Use of the mixed modality was quite constant during the three play sessions at all the data collection points; no statistically significant differences were found as a function of interlocutor type.

Figure 3. Use of different communication modes by KODA children during three different play sessions at the ages of 12, 18, and 24 months. The lines depict the mean share of the children's utterances (in percentage) in each communication mode(s). The mean number of utterances, covering all communication modes, produced by each child during each play session is shown in parentheses.
We found that young children were already sensitive to the language used by their interlocutor(s). They were able to accommodate their use of communication mode at the very early age of 12 months. As Figure 3 shows, at all ages the children used the manual modality (signs and gestures) strikingly more when they communicated with their Deaf parent alone than during the other play sessions (F = 21.895, df = 2, p < .001).
After the hearing adult joined the play session, the children increased their use of the vocal modality (spoken words and vocalization; Z = 2.839, p = .005) and decreased their use of the manual modality (Z = −3.463, p < .001). To show how the children were able to accommodate their communication mode on the fly when both the Deaf parent and the hearing adult were present, we analyzed the communication mode of each utterance the child directed separately to the Deaf parent and to the hearing adult. It should be noted that in 15% of the children's utterances we were not able to confirm to whom they were directed. All three data points were pooled together. The analysis revealed that the children favored the vocal modality when addressing their utterances to the hearing adult (a mean of 66% of those utterances were vocal) and the manual modality when addressing their utterances to their Deaf parent (a mean of 65% of those utterances were manual). We then examined whether any changes took place over time. The children's use of the vocal modality during this second play session was significantly greater at the age of 24 months than during the two earlier data collection points (Z = 2.023, p = .043 in both cases). This increase in the use of the vocal modality was based on an increase in the number of utterances the children directed to the hearing adult. That is to say, at the age of 12 months the children directed a mean of 28% of their utterances to the Deaf adult and a mean of 58% to the hearing adult. At the age of 24 months the children directed only a mean of 15% of all utterances to the Deaf parent and a mean of 72% toward the hearing interlocutor.
During the last play session, where only the child and the hearing adult were present, the children further increased their use of the vocal modality; the difference in its use compared with the session with the Deaf parent was statistically significant (Z = 3.564, p < .001).
These results clearly show that KODA children are able to accommodate their use of languages according to the language of their interlocutor(s) and already show pragmatic differentiation at an early age. Despite a clear preference for using a communication mode according to the language of the interlocutor, the children used both vocal and manual modalities of communication during each play session. It is well established that all children – despite the language they are acquiring – use a manual modality in their communication (for a review, see Volterra, Iverson & Castrataro, Reference Volterra, Iverson, Castrataro, Schick, Marschark and Spencer2005). However, some studies suggest that KODA children may use a manual modality differently from children who are acquiring a spoken language (Capirci et al., Reference Capirci, Iverson, Montanari and Volterra2002; Morgenstern et al., Reference Morgenstern, Caët and Collombel-Leroy2010). We therefore studied in a more detailed manner how the KODA children we followed up used a manual communication mode with different interlocutors.
Use of a manual communication mode
In analyzing whether the KODA children studied used a manual communication mode differently by interlocutor type, we looked at whether there was any difference in how the children used gestures, signs and gesture–sign combinations within the manual modality. We could thereby follow how their sign language developed over time and how they used prelinguistic gestures and signs of FinSL differently according to their interlocutors. For that purpose we classified the children's expressions from the viewpoint of use of gestures, gesture–sign combinations, and signs of FinSL.
When the children were playing with their Deaf parent, their manual utterances mostly consisted of gestures or signs, with the number of signs, in particular, increasing with age (Figure 4). When the children shared the play session with both their Deaf parent and the hearing interlocutor, they still produced mostly either gestures or signs in their manual utterances, but the number of both gestures (Z = −3.130, p = .002) and signs (Z = −2.387, p = .017) was significantly lower than when only the Deaf parent was present. In the play sessions where they were accompanied by only the hearing adult, the children almost exclusively used gestures; signs or gesture–sign combinations were hardly used at all.

Figure 4. Mean number of utterances containing different components of manual communication that the KODA children produced during the three play sessions at the ages of 12, 18, and 24 months.
These results show that the KODA children were sensitive to the language of their interlocutors even within the manual modality. Depending on the interlocutor, the children used the manual modality differently, not only in quantity, but also qualitatively. Despite all the communicative capacity they had within the manual modality, they avoided using signs, in particular, when communicating with the hearing adult. Because the children's use of the manual modality was so differentiated, especially in the way they used gestures, it was necessary to analyze the children's use of gestures more closely.
Number and types of gestures
Our next task was to analyze further how the children used gestures as a function of the language used by their interlocutor(s). As Figure 5 shows, the number of gestures the children produced differed by interlocutor type (F = 7.190, df = 2, p = .027). The KODA children produced, on average, 4.2 gestures per minute when playing with their Deaf parent; of these, pointing gestures occurred, on average, 2.8 times per minute. The children produced clearly fewer pointing gestures when communicating with the hearing adult, on average 1.4 times per minute, and this difference in pointing rate was significant (Z = 2.400, p = .016).

Figure 5. Mean number of different gesture types produced by the KODA children during the three play sessions at the ages of 12, 18, and 24 months.
Additionally, when the children were 12 months old, other deictic gestures (showing, giving, and requesting) were the most frequent gesture type during the play sessions shared with the Deaf parent and the hearing adult together, and also with the hearing adult alone. At the age of 18 months the number of iconic gestures the children produced for their Deaf parent was at its highest compared with any other data point or play session. Some of these iconic gestures might well have been so-called proto-signs, which do not yet meet the definition of actual sign language signs. The numbers of conventional (F = 9.294, df = 2, p = .010), iconic (F = 6.200, df = 2, p = .045), and pointing gestures (F = 7.763, df = 2, p = .021) differed according to interlocutor type when all the data collection points (that is, the children's ages) were pooled together. The KODA children produced these gestures more when communicating with the Deaf parent than during the other play sessions. All in all, these results show how the KODA children used the manual modality for representational purposes more often, and in a slightly more diverse way, when communicating with their Deaf parent than with the hearing interlocutor. It is also noteworthy that the children increased their use of pointing gestures with age, regardless of the person they were communicating with.
Combinations of gestures, signs and words/vocalization
Our last research question about the KODA children's ability to accommodate their expressions according to the interlocutor(s) concerns the combining of gestures with signs and words/vocalization. As mentioned, the KODA children we studied used the manual mode of communication more with their Deaf parent than with the Deaf parent and the hearing researcher together, or with the hearing researcher alone. In addition, our results revealed that these children were able to use the manual modality effectively by combining gestures, signs, and words in a multifaceted way. The KODA children we studied produced altogether seven different expression combinations containing one or more manual components. Observations associated with these combinations, averaged over the three play sessions and the three time points, can be seen in Figure 6. As Figure 6 shows, the most prevalent combination the children used in all three play sessions was gesture+word/vocalization, and they also used it in significantly different quantities with different interlocutors (F = 6.636, df = 2, p = .036). Sign+word/vocalization (F = 7.659, df = 2, p = .022), gesture+sign (F = 19.955, df = 2, p < .001), and gesture+gesture expressions (F = 12.061, df = 2, p = .002) were also combinations that were used differently as a function of the interlocutor(s) – and mostly at a higher rate when the children were communicating with the Deaf parent alone. To sum up, compared with the other play sessions, the KODA children used a richer spectrum of combinations of gestures, signs, and word(s)/vocalization when interacting with their Deaf parent.

Figure 6. Different combinations of gestures, signs, and words/vocalization produced by the KODA children during the three video-recorded play sessions. The figures represent mean numbers during each 5-minute session averaged over the video-recordings made at 12, 18, and 24 months of age.
Having reviewed the overall results illustrated in Figure 6, we looked more closely at each data collection point separately, that is, as a function of time. As Figure 7 shows, when the children communicated with their Deaf parent and the hearing adult together, the number of combination types increased slightly toward the end of the follow-up period reported here. However, the children never used purely manual combinations in their utterances when interacting with the hearing adult alone, whereas their use of gesture–sign–word/vocalization combinations increased with age. As an overall trend, the KODA children used a wider spectrum of combinations of different expression types when communicating with their Deaf parent; the number of combination categories used with the Deaf parent did not change during the data collection period, as the children already used these combinations effectively at the age of 12 months.

Figure 7. Mean number of different combinations of gestures, signs, and words/vocalization produced by the KODA children during the three video-recorded play sessions at 12, 18, and 24 months of age. The combination gesture+gesture+sign is not included in the figure because the mean number of its occurrences was so low at each data collection point that it was rounded to zero.
Discussion
We examined KODA children's early language differentiation by studying their productive vocabularies with the MCDI and their ability to accommodate expressions during play sessions as a function of the language used by their interlocutor(s). We then calculated correlations between their productive sign language and spoken language vocabularies, measured with the MCDI, and the number of spoken words and signs the children produced when communicating with their Deaf parent or the hearing adult. To obtain a better understanding of the gradual process of language differentiation at an early age, we additionally analyzed at three different ages the children's selection of communication mode(s) and, more closely, the use of gestures during three different play sessions; first with their Deaf parent, then with the Deaf parent and a hearing adult together, and finally, with the hearing adult alone.
Evidence of early language differentiation
We first discovered that the languages the children were learning were not always mastered at an equal level. This was seen in their vocabulary results measured with the MCDI. According to our results most of the children had a larger productive vocabulary in sign language than in spoken language at the ages of 12, 18 and 24 months.
When we pooled the data recorded at the ages of 12, 18, and 24 months together, we noticed that although the KODA children had competence in both of the languages they were acquiring – as shown by their productive vocabularies measured with the MCDI – they already preferred to use the language of their interlocutor during this early age period. This finding is important in light of the dual language hypothesis proposed by Genesee (Reference Genesee1989); our statistically significant results implied that the bilingual KODA children we studied were acquiring their languages as separate systems from early on. Namely, our results suggest that as soon as children have some language competence, they can use their languages in a differentiated way.
We also found that the video-recorded play sessions reflected the children's language production skills realistically. This was seen in the strong correlations between the parent-reported vocabularies and the use of signs and words during the play sessions. Still, a somewhat stronger correlation existed between vocabulary size in the majority language – that is, spoken Finnish – and the use of speech with the hearing interlocutor than between FinSL vocabulary size and the use of sign language with the Deaf parent. Although the number of children we studied was small, this tendency may be an indication of social and environmental influences, like the stronger status of the majority language and language choice of the parents, which may affect the children's language use in different contexts.
As mentioned, not only language status in the community but also the amount of exposure to languages affects bilingual language development. In our study, only three children (Lauri, Onni, and Paula) out of eight had weekly contacts with other Deaf people. Their productive vocabulary in sign language, measured with the MCDI, was larger than in spoken language. The others received majority language input both at home (as five children had one hearing parent) and outside their home (in day care and via contacts with hearing relatives and other hearing persons). This may have decreased their sign language use.
Previous studies have suggested that there is wide diversity in language use in Deaf-parented families. For example, language mixing by Deaf parents is a widely reported phenomenon (Kanto et al., Reference Kanto, Huttunen and Laakso2013; Mallory et al., Reference Mallory, Zingle and Schein1993; Petitto et al., Reference Petitto, Katerelos, Levy, Gauna, Tétreault and Ferraro2001; Preston, Reference Preston1994, p. 128; Van den Bogaerde & Baker, Reference Van den Bogaerde and Baker2008; Wilhelm, Reference Wilhelm2008). Deaf persons may also use speech reading to some extent. Due to these factors, KODA children are aware of the possibility of spoken language use by signing persons. Some KODA children receive input and modeling in language mixing from Deaf parents who simultaneously produce signs and speech, as did the parents of Onni, Miina, and Ari in our study. However, although Deaf persons may exploit spoken language, hearing persons only very seldom master sign language. This asymmetry may at least partly explain our finding that the correlation between vocabulary size and language use was slightly stronger in spoken language than in FinSL. Additionally, the correlation between sign language vocabulary and the number of signs produced during the play session was stronger when the three children whose parents used language mixing were dropped from the analysis. The same notion of strong majority language input was also reported by Paradis and Nicoladis (Reference Paradis and Nicoladis2007), who studied children acquiring English and French. They concluded that because of bilingual parents and, in their study setting, the surrounding community, the English-dominant children seemed to understand that they could also use English if needed in a French context, because almost everyone in the French-speaking population also spoke English.
Conversely, the French-dominant children did not use English very much in French contexts because it was not necessary for them. As was seen, there are many factors that make children choose one language over the other, which complicates bilingual children's appropriate use of languages according to the interlocutor.
Because these KODA children's language acquisition is bimodal and bilingual, we could identify the communication mode(s) and the language(s) a child was producing in each utterance, and capture the children's early ability to accommodate expressions to the language of their interlocutor(s). This special group of children also allowed us to study language differentiation in a phase where children are not usually considered to have enough lexical resources to show differentiated use of lexical expressions. Namely, although Deuchar and Quay (Reference Deuchar and Quay2000) suggested that children growing up bilingual need over 100 words in their productive vocabulary before they can show appropriately differentiated language use, the eight KODA children we followed up already showed a clear sensitivity to the language of their interlocutor(s) at the age of 12 months, when they had only a modest productive vocabulary in both of their languages. This sensitivity was reflected in the way the children accommodated their use of different communication modes, and the differentiated use of communication modes according to the interlocutor(s) increased as the children grew older.
As mentioned earlier, our results support the dual language system hypothesis proposed by Genesee (Reference Genesee1989). KODA children growing up bilingual can separate their languages already in the early phases of language development by coordinating their language use not only when interacting alone with a Deaf or hearing person but even when both interlocutors are present. This was seen in the children's ability to select the appropriate modality for their utterances according to the language of their interlocutors. Findings on bilingual children's ability to accommodate their languages on the fly have also been presented by Genesee et al. (Reference Genesee, Nicoladis and Paradis1995) for children acquiring two spoken languages. Studying the language differentiation of young children acquiring two spoken languages has been challenging, because the spoken language utterances of children under two years of age are often too unintelligible for researchers to identify the language the children are using at each moment (Deuchar & Quay, Reference Deuchar and Quay2000; Nicoladis & Genesee, Reference Nicoladis and Genesee1996).
Bilingual children's language use with their parents has been argued to be a matter of habitually using the language learned as part of daily communication (e.g. Genesee et al., Reference Genesee, Boivin and Nicoladis1996). However, Genesee et al. (Reference Genesee, Boivin and Nicoladis1996) observed that four bilingual children who were, on average, 26 months old could also select the appropriate language (one of their native languages) when communicating with strangers. Our study confirmed this finding by showing that bimodal and bilingual KODA children were already able to accommodate their language use to the language of their interlocutor(s) as early as 12 months of age. This was the case even though the hearing researcher who interacted with the children in 37 of the 42 video-recorded sessions was a stranger to them and the children were aware of her competence in sign language. In fact, the children were more consistent in using solely spoken language with the hearing adult than sign language with the Deaf parent. As language mixing by parents has been reported to affect language mixing by bilingual children (Petitto et al., Reference Petitto, Katerelos, Levy, Gauna, Tétreault and Ferraro2001; Van den Bogaerde & Baker, Reference Van den Bogaerde and Baker2008), it is possible that the language mixing habits of three of the parents explain why the KODA children in this study used a mean of 8.6 spoken words when communicating with the Deaf parent but only 0.7 signs when interacting with the hearing adult.
Differentiated gesture use
One-year-old children are generally able to use both the vocal and the manual modality to communicate in a symbolic manner. Given the different roles of the manual modality and gestures in conversations conducted in sign language and in spoken language, we were interested in analyzing whether the language differentiation of KODA children might also be seen in their use of gestures.
Our second research question concerned the KODA children's expressions not only at the lexical level but also at the pre-lexical level. The KODA children we studied showed different profiles of manual modality use across the three play sessions. When communicating with their Deaf parent, the children used the manual modality more often (as indicated by the number of gestures and signs produced) and in more diverse ways (as indicated by the number of different types of gestures and of gesture, sign and word combinations) than when they were interacting with the hearing adult. This result was expected: KODA children with signing parents learn linguistic units in the manual modality, and this modality becomes increasingly important over the course of sign language development, whereas in spoken language communication the manual modality mainly serves to support spoken expressions. Accordingly, along with spoken language development, the children's use of the manual modality decreased slightly over time and they began to use the vocal modality more when communicating with the hearing person. Additionally, the number of gestures – especially pointing and iconic gestures and gesture–sign–word/vocalization combinations – was clearly higher when the children were interacting with the Deaf parent than in the sessions with the hearing adult. These results show that the language differentiation of KODA children can already be seen in the way they use gestures, which may indicate that the children are already in the process of conventionalizing gestures toward the linguistic structures of a sign language (see also Hoiting & Slobin, Reference Hoiting, Slobin, Duncan, Cassell and Levy2007; Petitto, Reference Petitto and Kessel1988).
Our results are consistent with those of Capirci et al. (Reference Capirci, Iverson, Montanari and Volterra2002) and Morgenstern et al. (Reference Morgenstern, Caët and Collombel-Leroy2010), who observed that by the age of one to two years, exposure to and acquisition of sign language already affect children's use of gestures and of the manual modality more generally. Because they received input from sign language, which has rich gestural properties, the KODA children we followed used the manual modality and gestures more often, and in more diverse ways, than children generally do. Although the children used clearly fewer gestures with the hearing adult (a mean of 2.2 per minute) than with their Deaf parent (a mean of 4.2 per minute), they still used distinctly more gestures than monolingual Finnish children are reported to produce in the same age period. For instance, pointing occurred, on average, 1.4 times per minute when the KODA children were communicating with the hearing adult, whereas the six children Jakkula (Reference Jakkula2002) video-recorded once a month produced one pointing gesture every two minutes (i.e. 0.5 pointing gestures per minute) when playing with their mothers at the ages of 12 and 24 months. Laakso, Helasvuo and Savinainen-Makkonen (Reference Laakso, Helasvuo and Savinainen-Makkonen2010) followed four Finnish children who pointed, on average, only 0.1 times per minute when communicating with their mothers from the age of nine months up to the age of 14 months. This pointing frequency was also clearly lower than that of the American and Italian children (0.83 and 0.57 pointing gestures per minute, respectively) studied by Iverson et al. (Reference Iverson, Capirci, Volterra and Goldin-Meadow2008).
Gesture use has also been studied in adults using sign language. Casey and Emmorey (Reference Casey and Emmorey2009) found that when communicating with non-signing persons, bimodal and bilingual adults (n = 13) did not use a greater number of gestures than non-signing English speakers (n = 12), but their gestures included a greater variety of handshapes. They proposed that this finding reflects cross-linguistic transfer, with American Sign Language (ASL) activated to some extent when these bimodal and bilingual adults spoke English. This interesting finding relates to the view that both languages of a bilingual are active at all times and that bilinguals borrow constructions (phonological, morphological, syntactic or even gestural) from one language to the other (Nicoladis & Genesee, Reference Nicoladis and Genesee1996). In line with this, Casey and Emmorey (Reference Casey and Emmorey2009) predicted that ASL–English bilingual children may use gestures at a rate equal to that of native English speakers. However, our findings showed that the KODA children we studied used a higher number of pointing gestures than would have been expected on the basis of previous studies of non-signing monolingual children acquiring Finnish. Further studies are needed to find out how strongly the immediate language learning environment affects KODA children's gesture use even when they otherwise live in a low-gesture society (see e.g. Pika, Nicoladis & Marentette, Reference Pika, Nicoladis and Marentette2006). Additionally, the children we studied were in an early phase of language development, and young children may also use gestures partly to aid their spoken language expressions, as has been noted in many studies of gesture use among monolingual and bilingual children (see e.g. Nicoladis, Mayberry & Genesee, Reference Nicoladis, Mayberry and Genesee1999; Sherman & Nicoladis, Reference Sherman and Nicoladis2004; Nicoladis, Pika & Marentette, Reference Nicoladis, Pika and Marentette2009).
Further studies are needed
Exploring the ability to accommodate language use to the language of the interlocutor(s) provides knowledge not only about the linguistic competence of bilingual children but also about their general cognitive skills and their pragmatic differentiation of languages. In investigating this ability, one needs to identify when and how bilingual children accommodate their behavior to the specific demands of bilingual communication. Although our results were convincing, studies with larger numbers of participants are needed to establish whether children can already show pragmatic differentiation in the pre-lexical period, before they produce their first words. This question has both theoretical and practical importance, not only for understanding the mechanisms of early bilingualism but also for the debate among language acquisition theories on the acquisition of pragmatic skills and communicative competence (Comeau et al., Reference Comeau, Genesee and Lapaquette2003). KODA children's development at even earlier phases than 12 months of age is a valuable source of information for studying early language differentiation among bilingual children.
More in-depth analyses are also needed of how the language(s) of one or more interlocutors affect bilingual children's language use, as research on this topic is still limited. We noticed that language mixing by the Deaf parent affected the hearing child's use of signs and words, but more comprehensive studies with larger numbers of participants are needed to analyze the language mixing patterns of KODA children and their interlocutors. Additionally, we found that the KODA children used gestures differently with different interlocutors and that the number of gestures they produced was higher than the figures published for monolingual Finnish children (Jakkula, Reference Jakkula2002; Laakso et al., Reference Laakso, Helasvuo and Savinainen-Makkonen2010). These findings lend some support to the view that, in this specific group of children, gesture use is shaped by language input, linguistic factors and the communication culture of the immediate environment. However, more research is clearly warranted to gain a fuller understanding of the relationships between language, culture and gestures, and of how bilingualism may affect the role of gestures during bilingual language acquisition.