Individuals with autism spectrum disorder (ASD) have key deficits in social skills and related communication and cognition, which may result in difficulties with conversational skills (American Psychiatric Association, 2013). The need to attend to auditory and physical cues, together with the lack of predefined rules, makes conversation a difficult task to master. It is possible to teach skills such as requesting or commenting (Banda, Copple, Koul, Sancibrian, & Bogschutz, 2010; Charlop, Dennis, Carpenter, & Greenberg, 2010; Shillingsburg, Valentino, Bowen, Bradley, & Zavatkay, 2011), but the ‘to-and-fro’ nature of conversation involves more sophisticated skills: the participant is required to pay attention to the topic at hand, initiate a desired topic, maintain the topic, and shift from a topic, all while remaining engaged with one or more partners (Carpenter & Tomasello, 2000; Klinger & Williams, 2009; Twachtman-Cullen, 2000).
Typically developing individuals acquire conversational skills without being explicitly taught; everyday social interactions are sufficient to develop these social skills (Carpenter & Tomasello, 2000; Klinger & Williams, 2009). It is, however, unlikely that individuals with ASD would develop these abilities without instruction in each composite skill. Furthermore, individuals with autism often fail to respond to the interactive emotive cues presented by others, either facially or through speech intonation (Church, Alisanski, & Amanullah, 2000; Hobson, 1986). Although children with autism are able to read emotions such as happiness, they are less able to read more nuanced forms of emotion or communicative intent, such as doubt, agitation, fright, and irony, as these require a more holistic interpretation of the cues provided (Ricks & Wing, 1975; Rump, Giovannelli, Minshew, & Strauss, 2009; Wang, Lee, Sigman, & Dapretto, 2007). A lack of social cognition or perspective taking in individuals with ASD may also hinder their ability to hold a conversation (Dixon, Tarbox, & Najdowski, 2009). As interpersonal interactions hinge on the communication partner's state of mind, interactions may be impaired if a conversation partner is unable to predict the interest of the other person (Baron-Cohen, 1995; Klinger & Williams, 2009). Any of these impairments may interfere with effective social interaction; in particular, the need for prompts and the production of bizarre or off-topic comments affect the fluidity and reciprocity of conversations with individuals with ASD (Losh & Capps, 2003). As social communication, cognition, and interaction are areas of deficit for individuals with ASD, it follows that they would also find the exchange of social information difficult.
The ability to converse is an important skill for individuals with ASD to master, as this is a common way to exchange ideas and develop social connections (Loveland & Tunali-Kotoski, 2005). A weakness in conversing fluently, or a reticence to engage in verbal exchanges, could lead to social isolation and withdrawal (Shattuck, Orsmond, Wagner, & Cooper, 2011). Furthermore, Bauminger, Shulman, and Agam (2003) posit that although individuals with higher-functioning autism may demonstrate a greater number of social interactions, the quality of the exchange may be weakened by the peculiarity of the interaction, leading to a greater feeling of loneliness. A lack of social aptitude, contributed to by conversational impairment, may affect future success (Gerhardt & Holmes, 2005); thus, it is important to teach conversation skills within therapeutic interventions.
Although a number of interventions (e.g., video modelling, role-playing, Social Stories) have been used to teach components of conversation skills, one intervention with some reported success in teaching conversational skills to individuals with ASD is script training (Brown, Krantz, McClannahan, & Poulson, 2008; Charlop-Christy & Kelso, 2003; Kyparissos, 1996; Tomaino, 2011; Wichnick, Vener, Pyrtek, & Poulson, 2010). Scripts can be either auditory or visual and have been used successfully to teach skills such as initiations and responses. Scripts are also often used as a prompting strategy in social skills interventions (Sng, Carter, & Stephenson, 2014). Prompting has been used effectively to teach discrete communication skills to individuals with autism (Goldstein, 2002), and more recently, handheld tablet devices have been used to help individuals with developmental disabilities gain independence through self-prompting (Stephenson & Limbrick, 2015). Thousands of apps have been designed for educational use, and their developers purport that these apps can facilitate the acquisition of skills in individuals with disabilities. Possible reasons for the interest in handheld touch-screen devices like the iPad are their portability, the visual delivery of content, and the potential to customise apps for the user (Blood, Johnson, Ridenour, Simmons, & Crouch, 2011; Van Laarhoven, Johnson, Van Laarhoven-Myers, Grider, & Grider, 2009). In addition, tablet devices can provide both visual and auditory cues as part of a teaching application. Apps that are appropriately designed may also give students additional practice in the skills being taught without the need for specialist teacher supervision, thereby maximising instructional efficiency.
Commercially available portable touch-screen tablet devices are multifunctional and relatively cost effective, unlike their predecessors (Douglas, Wojcik, & Thompson, 2012; Fernández-López, Rodríguez-Fórtiz, Rodríguez-Almendros, & Martínez-Segura, 2013). Further, many commercial devices adhere to the principles of universal design, making them accessible to most people without customisation (Cihak, Kessler, & Alberto, 2007). In addition, the proliferation of these devices among the general population makes it less conspicuous when a person with a disability relies on one as an instructional tool or for communicative purposes (Cihak et al., 2007; Gentry, Wallace, Kvarfordt, & Lynch, 2010; Lorah et al., 2013).
Reviews have evaluated studies that used handheld touch-screen devices to deliver instructional programs to individuals with a developmental disability (Kagohara et al., 2013; Stephenson & Limbrick, 2015), and although results are positive, the number of studies remains limited. In addition, Stephenson and Limbrick (2015) noted a possible bias in the results, as a number of the available studies were conducted by an associated group of researchers. More notably, there is a paucity of research providing empirical evidence on the efficacy of specific apps, and very few studies have been conducted with individuals below 12 years of age. None of the studies reviewed focused on conversation or social verbal skills.
Although some apps have the advantage of presenting multiple cue modalities (e.g., text and auditory cues simultaneously), few studies have examined their efficacy. A large number of apps are available commercially, and developers often claim that their use can assist in the acquisition of skills. For these reasons, the primary purpose of this pilot study was to investigate whether an intervention using the Conversation Coach iPad app would improve an individual's ability to maintain a conversation by offering on-topic responses to questions or requests for information. Second, if the intervention was successful, it was of interest to determine whether the learned skills would generalise to paraphrased scripts presented on the iPad and to natural conversations with different conversation partners.
Method
The research was approved by the ethics committee of Macquarie University (approval number 5201300450) as part of the ongoing educational program of students at the school where the research was carried out. Written informed consent was obtained from the participant's parents prior to the commencement of the study. Information was provided to the participant's parents about the aims and purpose of the study, and assurances were given that the participant's personal data would be kept anonymous.
Research Design
A multiple baseline design with probes across conversation scripts was employed. Generalisation probes were planned across varied conversational wording on the iPad and conversational partners in a natural context without the presence of the iPad. At the conclusion of the intervention, additional generalisation probes to a natural conversation were conducted with the following conversation partners: (a) the student's classroom teacher on untaught conversation topics, (b) a different known teacher on taught topics and one untaught topic, (c) an unknown teacher (casual teacher) on one taught and one untaught topic, and (d) a similar-aged peer on a taught topic.
Setting
The research was conducted in demonstration classes in a special school for children with disabilities at a university research centre. Teaching sessions on the iPad were conducted in a small classroom or a visitor observation room. Both rooms were sparsely furnished with a table and two chairs. Natural conversations to measure generalisation with adult partners were conducted in the playground, and the conversation with a peer was conducted in a large classroom.
Inclusion Criteria
The inclusion criteria for the pilot study were (a) the student could read the scripts presented on the iPad, and (b) the student did not stay on the initiated topic in a conversation or offer relevant responses to questions. Classroom teachers and the school speech pathologist were consulted about student suitability, and several possible participants were suggested. The researcher initially screened three of these students for suitability by initiating conversations with them in the playground or in class. These interactions were recorded on a handheld audio recording device by the researcher and then transcribed. The student who stayed on-topic the least and provided the fewest relevant answers to social questions during the initial screening was selected.
Participant
The participant, Kenny (a pseudonym), was a male aged 7 years 11 months who had been diagnosed with autistic disorder, according to DSM-IV (American Psychiatric Association, 2000) criteria, at a multidisciplinary diagnostic clinic. Kenny was assessed on the Stanford-Binet Intelligence Scales – Fifth Edition (Roid, 2003) with a nonverbal IQ of 57, a verbal IQ of 59, and a full scale IQ of 56. The Childhood Autism Rating Scale, Second Edition (CARS-2; Schopler, Van Bourgondien, Wellman, & Love, 2010) and the Vineland Adaptive Behavior Scales, Second Edition (VABS-II; Sparrow, Cicchetti, & Balla, 2005) were completed by his classroom teacher. Kenny scored 34.5 on the CARS-2, indicating a mild to moderate degree of autism, and returned an Adaptive Behavior Composite standard score of 68 on the VABS-II, indicating a mild deficit. A full summary of the VABS-II results is presented in Table 1. According to his teacher, Kenny had a good vocabulary, typically spoke in four- to eight-word sentences, and could greet, comment, and participate in short exchanges, but found it difficult to stay on-topic and offer relevant answers to social questions. In particular, Kenny frequently answered questions inappropriately with references to the animated television series The Simpsons, in which he had a special interest. During the screening process, Kenny was provided with 15 opportunities to have a conversational turn and gave a coherent and on-topic response on two occasions: one was a response to a greeting after a verbal prompt, and the other was a response to a closed question. He talked about The Simpsons (off-topic) for six turns, and the remaining seven turns were also not on-topic.
TABLE 1 Summary of Vineland-II Scores
Materials
Information about the app
Several conversation apps available for the iPad were examined. The Conversation Coach by Silver Lining Multimedia was selected because of its flexibility in the development of scripts. The app was installed on an iPad with Retina display. The app allowed the instructor to load a series of scripts onto the iPad and present the content with a range of options. For the purposes of this study, only the ‘practice’ mode was used, in which the iPad takes on the role of a conversation partner. On activation, a ‘player’ and a script were selected from a predetermined list. When a script was selected, the initiation screen appeared and the audio recording of the scripted initiation played automatically. Subsequent screens were presented automatically once each audio file (initiation, selected student response, or partner response) had played in full. A screenshot of an initiation on the iPad and the corresponding student response options are presented in Figure 1. Each text script was loaded into the app and paired with audio recordings. The initiation and partner responses were recorded onto the iPad by the researcher, and the response options for the participant were recorded by a same-gender, similar-aged peer. Response options were presented as words on the screen, requiring the participant to read the options before making a selection. Each response option appeared as text with an icon approximately 3 cm² in size positioned to the left of the text. The participant made a selection by touching the sentence or the icon, and the corresponding recording played. Once the audio of the selected student response finished playing, the iPad automatically showed the conversation partner's next line in the script as text and played the audio. After the conversation partner's line had played in full, the next response screen appeared and the student made a selection from the options displayed. The screen with options remained on display until the student chose a response. This process continued until the end of the predetermined script.
FIGURE 1 Initiation and Corresponding Student Response Options as Presented on the iPad.
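To make the flow just described concrete, the following is a minimal sketch (in Python) of the automated turn taking in practice mode as we understand it. This is not the app's actual code; the function and variable names are illustrative assumptions, and ‘playing’ an audio recording is simulated with print().

```python
# Illustrative model of the practice-mode flow described above; an assumption
# about the logic, not the Conversation Coach implementation.

def play_audio(line: str) -> None:
    """Stand-in for the app playing a prerecorded audio file in full."""
    print(f"(audio) {line}")

def run_practice_script(initiation: str, turns) -> list:
    """Present a preloaded script: the initiation plays automatically, each
    student turn waits for a touch selection, and the partner's next scripted
    line then plays before the next response screen appears."""
    selections = []
    play_audio(initiation)                          # initiation screen and audio
    for options, partner_reply in turns:            # turns: (response options, partner's next line)
        for i, option in enumerate(options, start=1):
            print(f"  [{i}] {option}")              # options remain on screen until a choice is made
        choice = options[int(input("Touch a response: ")) - 1]
        selections.append(choice)
        play_audio(choice)                          # selected student response is spoken
        play_audio(partner_reply)                   # partner's scripted reply plays automatically
    return selections
```

In this sketch, the final partner reply in `turns` would correspond to the scripted termination of the exchange.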
Test scripts
These scripts were used during baseline and prior to each teaching session during intervention to test for intervention effect. Test scripts on three different topics — (a) going to the beach, (b) a unit of work on mini beasts, and (c) information about Kenny's family — were developed for the participant. Each script began with an initiation by the adult conversation partner, followed by four conversational turns for the student, and ended with a termination by the adult partner. Test scripts were loaded onto the iPad in text and audio format. Probes during the intervention phase were conducted daily prior to a teaching session. A sample script is presented in Table 2. For each student conversational turn, the iPad presented the participant with three response options. One response was on-topic and appropriate. Another response option was on-topic but was an inappropriate or incomplete response, and the remaining option was obviously incorrect (i.e., not on-topic). For example, the on-topic and appropriate response to ‘Do you like going to the beach?’ was ‘Yes, I go to the beach with Mummy’, the inappropriate or incomplete response was ‘Manly is near the beach’, and an off-topic response would be ‘I like Homer, Marge, and Bart Simpson’.
TABLE 2 Sample Script With Prompts
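As an illustration only, one conversational turn from the beach test script above might be represented as follows; this is a sketch of the structure described in this section, and the class names are ours, not the app's.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ResponseOption:
    text: str          # wording displayed on screen, paired with an icon and an audio recording
    on_topic: bool
    appropriate: bool  # True only for the on-topic and appropriate option

@dataclass
class ScriptTurn:
    partner_line: str              # partner's question or request for information
    options: List[ResponseOption]  # three options per turn in the test scripts

beach_turn = ScriptTurn(
    partner_line="Do you like going to the beach?",
    options=[
        ResponseOption("Yes, I go to the beach with Mummy", on_topic=True, appropriate=True),
        ResponseOption("Manly is near the beach", on_topic=True, appropriate=False),
        ResponseOption("I like Homer, Marge, and Bart Simpson", on_topic=False, appropriate=False),
    ],
)
```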
Teaching scripts
The scripts used for teaching sessions were identical to the test scripts described in the previous section and used during baseline, with one important exception: embedded prompts were included in the teaching scripts to allow for error correction. If the participant selected an incorrect response, the iPad prompted the participant to ‘listen to the question’ or ‘try again’ and repeated the question or the request for information. The participant was then given an opportunity to make another selection. The previously selected incorrect response was eliminated from the choices available until the correct response was made. This meant that the student could make at most two consecutive errors on a particular conversational turn before only the correct option remained available for selection. Note that the teacher was present during these teaching sessions but remained silent and did not intervene at any time after turning the app on. The error correction procedure was included to ensure that the participant eventually chose the correct response.
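A minimal sketch of this embedded error-correction logic follows; it is our reconstruction from the description above, not the app's code, and input/printing stand in for touch selection and audio playback.

```python
def teach_turn(question: str, options: list, correct: str) -> int:
    """Run one conversational turn with embedded error correction; returns the
    number of errors made before the correct response was selected."""
    remaining = list(options)
    errors = 0
    while True:
        print(f"(audio) {question}")                         # question or request for information
        for i, option in enumerate(remaining, start=1):
            print(f"  [{i}] {option}")
        choice = remaining[int(input("Touch a response: ")) - 1]
        if choice == correct:
            print(f"(audio) {choice}")                       # correct selection is spoken; script moves on
            return errors
        errors += 1
        print("(audio) Listen to the question. Try again.")  # embedded prompt before the repeat
        remaining.remove(choice)                             # wrong option eliminated until correct is chosen
```

With three options, a student can therefore make at most two errors on a turn before only the correct option remains, matching the procedure described above.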
Generalisation scripts on the iPad
Three paraphrased wording generalisation scripts to be used on the iPad were also developed. These scripts were on the same topic as the teaching script but with paraphrased wording for the teacher or student responses. Corrections and feedback were not included in the generalisation probes conducted on the iPad. For example, the teaching script response ‘Yes, I go to the beach with Mummy’ was paraphrased to ‘I like the beach. Mummy takes me’, and the partner response ‘I like mini beasts. What is your favourite mini beast?’ was paraphrased to ‘I like mini beasts. Tell me your favourite one’.
Generalisation scripts without the iPad
Natural conversation generalisation scripts were identical to the probe scripts used on the iPad for all three topics (going to the beach, mini beasts, and student's family). The difference was that they were delivered verbally by a partner, rather than on the iPad, and required a verbal response from the student. As these probes were conducted in the playground without the iPad, no response options were presented to the student. The student could either verbally produce the learnt response from memory or offer an unscripted response of his own.
Additional generalisation scripts
Towards the end of the intervention five additional scripts were developed to test for generalisation to untaught topics under natural conversation conditions with a real conversation partner. The additional topics for generalisation were (a) what the student had for dinner, (b) Australian animals, (c) the book week parade, (d) going to the snow, and (e) playing on the computer. These topics were not explicitly taught during the intervention but related to the student's interests and experience. Response options were not provided to the student, so all responses were unscripted and generated by the student. Probes were carried out with a range of conversation partners in the playground or classroom setting.
Dependent Variables
Correct responses during intervention probes and during generalisation probes to paraphrased scripts on the iPad were defined as the participant independently selecting, and thereby activating, the preprogrammed speech for the appropriate response. If the student selected one response followed shortly by a different one, the first response was scored and the second disregarded. In generalisation probes involving natural conversations with a person, responses were scored as correct if the verbal response was judged to be on-topic.
The verbal exchanges for generalisation to natural conversations with a person (i.e., without iPad) were scored for scripted and unscripted verbal responses. Responses were considered scripted if the response had been taught in any part of the whole conversation and if it did not provide additional or different information, even if wording was altered or information omitted. For example, if the taught response was ‘I like swimming at the beach with Mummy’, the student response ‘There are sandcastles with Jimmy and Billy’ was scored as scripted because one of the other taught responses within that conversation was ‘I build sandcastles with Jimmy and Billy’. Unscripted responses were verbalisations that were on-topic and provided additional or different information. For example, in response to the question ‘Do you like going to the beach?’, the taught response was ‘Yes, I go to the beach with Mummy’ and Kenny said, ‘Yes! I love going to the beach with Mummy!’
Procedures
Sessions were run daily, beginning with an intervention probe followed by a teaching session. Each session was audio recorded on a handheld recording device for later scoring and for interrater reliability. The experimenter and the student sat on opposite sides of a table with the iPad between them for all baseline, teaching, and generalisation probes on the iPad. Because the app did not automatically present the responses in random order, the order of presentation of the student responses during intervention probes on the iPad was manually changed eight times for topic A, three times for topic B, and twice for topic C. This was done to prevent the student from choosing responses on the basis of presentation order. Individual procedures for each experimental phase are described below. See Table 3 for a summary of the topics and the probes conducted for each phase of the study.
TABLE 3 Summary of Intervention Phases and Topics
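As noted in the procedure above, the app did not randomise the position of the response options, so their order was rearranged by hand between probes. The effect is equivalent to the following illustrative shuffle (a sketch only, not a feature of the app):

```python
import random

def reorder_options(options: list, seed=None) -> list:
    """Return the response options in a new order so the correct answer
    cannot be identified by screen position alone."""
    shuffled = list(options)
    random.Random(seed).shuffle(shuffled)
    return shuffled

# Example with the three options from the beach turn (order varies run to run):
print(reorder_options([
    "Yes, I go to the beach with Mummy",
    "Manly is near the beach",
    "I like Homer, Marge, and Bart Simpson",
]))
```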
Baseline
Kenny was seated across the table from the researcher. During baseline probes, the teaching script was used but without embedded error correction. The conversation script continued as if a correct response had been selected, even when an incorrect response was chosen. No additional instruction was given.
Intervention
Prior to each teaching session, a probe was conducted to check for intervention effect. The script was administered without error correction, identical to baseline conditions. If the student selected an inappropriate response, the script would continue as if the correct response was selected. A teaching session was conducted after each probe. The researcher activated the iPad to initiate the scripted exchange with error correction enabled. The script was played once and the teacher did not make additional comments or provide additional unscripted prompts during the teaching session. Verbal praise was provided to the student for effort and perseverance at the end of the teaching session. Daily teaching sessions continued until a clear intervention effect was evident.
Generalisation to paraphrased wording on the iPad
Probes for generalisation to paraphrased wording scripts were conducted at the end of the intervention phase. These probes were conducted in the same manner as baseline probes on the iPad.
Generalisation to natural conversation on taught topics
Planned natural conversation generalisation probes on each of the taught topics were initiated in the playground during scheduled breaks by the student's classroom teacher (who was not involved in the teaching phase) in both baseline and intervention phases. The teacher used the same wording as the relevant teaching script but without the presence of the iPad, thus the student was required to respond verbally. These sessions were audio recorded and transcribed.
Additional generalisation to other people on taught topics
At the commencement of the true baseline phase for the third topic, it was clear that experimental control had been compromised as Kenny scored 100% correct before the intervention phase began for that topic. As a result, additional generalisation probes were planned and conducted. Generalisation probes to natural conversation on taught topics were conducted in the same manner with a different known teacher (not classroom teacher) on all three taught topics. Further, an unknown adult (casual teacher in the playground) and a peer (in a large classroom not used for intervention) each served as a conversation partner in a natural conversation on a taught topic.
Additional generalisation to natural conversation on novel topics
Scripts were also developed on five new and untaught topics. These probes were conducted after the conclusion of the last intervention phase. As these probes were unplanned at the beginning of the intervention, no data could be collected during baseline. The student's classroom teacher initiated a conversation with the student on each of the five untaught topics. A known teacher and an unknown teacher (a casual teacher who had not had previous contact with the student) separately initiated a conversation on one untaught topic. All natural conversations with untaught topics were conducted in the playground.
Reliability
Interobserver agreement was calculated for 33% of baseline and intervention probes and 31% of generalisation probes in natural conversations. A second rater listened to randomly selected audio recordings, and an agreement was scored when the first author and the second rater scored a response identically. Interobserver agreement was calculated by dividing the number of agreements by the number of agreements plus disagreements and multiplying by 100. Interobserver agreement was 100% for baseline and intervention probes and 100% for generalisation probes in natural conversations. The intervention was completely automated on the iPad, so procedural reliability data were not collected.
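The calculation described above is standard point-by-point percentage agreement; a trivial sketch with illustrative numbers (not the study's actual counts) is given below.

```python
def interobserver_agreement(agreements: int, disagreements: int) -> float:
    """Agreements divided by (agreements + disagreements), multiplied by 100."""
    return agreements / (agreements + disagreements) * 100

print(interobserver_agreement(20, 0))   # 100.0 (hypothetical counts)
print(interobserver_agreement(18, 2))   # 90.0  (hypothetical counts)
```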
Results
Baseline and Intervention Probes
During baseline for the first topic, the student did not make any correct responses. Once the intervention phase commenced, an immediate intervention effect was seen. Baseline for topic B was more variable, but there was reasonably clear evidence of an intervention effect once intervention began on this topic. Initial probes during the baseline phase for topic C indicated that experimental control could not be demonstrated, as Kenny scored 100% on three of four probes. Data for all baseline and subsequent intervention phases are presented in Figure 2.
FIGURE 2 Data for Intervention and Planned Generalisation Probes.
Generalisation Probes
Probes with paraphrased scripts on the iPad
Kenny did not respond appropriately to the paraphrased scripts on the iPad during the baseline phase for topic A, but there was some evidence of generalisation during the intervention phase. Anomalously, in the single generalisation probe for topic B, Kenny scored better on the paraphrased script than on the teaching script. He continued performing at the same level (75%) during the intervention phase. Kenny also scored 75% in the equivalent generalisation probe on topic C, and a subsequent probe at the conclusion of the project showed a score of 100%.
Generalisation probes to a natural conversation
Kenny did not sustain a conversation with his classroom teacher prior to the commencement of intervention with the first topic. He scored zero in a natural conversation with his classroom teacher using the teaching script for topic A at the beginning of the intervention. At the end of the first intervention phase, he stayed on-topic and produced a verbatim response in a natural conversation with his classroom teacher. Kenny scored 50% in the baseline probe for generalisation to a natural conversation with his teacher on topic B. At the conclusion of the intervention phase for topic B, his score during generalisation probes to a natural conversation had reached 100%; once again, he was providing the same responses as the script taught on the iPad. During probes for the third topic, Kenny scored 100% both in the initial true baseline phase for topic C and at the conclusion of the research.
Additional Generalisation Probes
Taught topics
As the experimental design was compromised by the loss of control during the true baseline for script C, additional generalisation probes to a natural conversation were conducted on topics that had been taught during the intervention phase with (a) a known teacher, (b) an unknown teacher, and (c) a peer (see Table 4). Kenny stayed on-topic 100% of the time on each of the three taught topics with the known teacher, as well as with an unknown teacher and a peer. The responses Kenny gave in natural conversation on taught topics were transcribed and assessed for novelty. The majority of the responses were scripted; only 13% were considered unscripted. Although he did not provide many unscripted responses (i.e., with the addition of new information), he did not provide verbatim responses each time. The majority of responses were slight structural variations on the taught script; for example, he would say, ‘My family was the Johnson family’ when the taught response was ‘Mummy, Daddy, Bess, Kenny, and Sean are the Johnson family’. An analysis of the scripted responses showed that only 34% were verbatim reproductions of the taught script. He provided additional information on three occasions, and on another occasion he produced a scripted response from one topic and used it in a different conversation: Kenny replied ‘Families are cool!’ when the conversation was about his family (topic C), and one of the taught responses for topic B was ‘Stick insects are cool!’
TABLE 4 Percentage of Correct or On-Topic Responses in Generalisation Probes to Natural Conversations
Untaught topics
The results of natural conversation generalisation probes to untaught topics were surprising. Kenny was tested on his ability to stay on the topic during natural conversations involving five untaught topics with his classroom teacher, one untaught topic with a known teacher, and one untaught topic with an unknown adult (a casual teacher). He was on-topic 100% of the time for three conversations, 75% of the time for another three conversations, and 50% for one. As only the partner part of these conversations was scripted and no teaching was conducted, all responses by Kenny were considered unscripted. None of the responses made during natural conversations to untaught topics resembled responses to taught scripts.
Discussion
In this study, we sought to investigate whether the scripts taught using the Conversation Coach app were effective in teaching a student with ASD to provide obligatory responses and stay on-topic during conversations. The results of this pilot study add to the emerging research on the use of handheld devices to teach social skills to individuals with ASD. To date, no studies have specifically targeted a commercial app to teach conversational skills; the focus of previous research in this area has been on the ability to provide responses to social questions (Lee, 2006; Sansosti & Powell-Smith, 2008; Sherer et al., 2001). The results show that there was a clear intervention effect for topics A and B, indicated by an increase in correct responses during the daily intervention probes conducted prior to teaching sessions, but that experimental control was compromised in the final baseline.
The original research design was based on the assumption that generalisation would be limited across scripts. Overall, there was a surprising degree of generalisation by Kenny, and it is possible that this unexpected level of generalisation (including generalisation across untaught scripts) contributed to the loss of control. Although some researchers implementing similar designs have interpreted similar results as generalisation to untaught conversations (Charlop, Gilmore, & Chang, 2008; Charlop & Milstein, 1989; Charlop-Christy & Kelso, 2003), it is probably more appropriately viewed as a loss of experimental control. In hindsight, a multiple baseline across participants design may have been a more suitable experimental design for the present study. It is also possible that other factors contributed to the loss of control in the third baseline; for example, a range of social and communication skills are addressed in the regular classroom program, and these may have had some impact.
Additional generalisation probes were designed and conducted to assess the level of generalisation after experimental control had been lost. The degree of generalisation to untaught topics was surprising given that the student had stayed on-topic for only two out of 15 turns in the assessment prior to baseline. In particular, the additional probes conducted on untaught topics in natural conversation with known and unknown partners showed consistently moderate to high levels of on-topic responses. Further, there was no difference between the level of natural conversation generalisation to known and unknown adults. This may suggest that the combination of text and auditory scripts assisted in generalising the skills.
In a systematic review, Sng, Carter, and Stephenson (2014) suggested that visual scripts could be effective in improving initiations and responses during conversations, but none of the studies reviewed used a combination of auditory and visual scripts or used an iPad for teaching. A comparison of the results of the present study with studies that used auditory scripts (Stevenson, Krantz, & McClannahan, 2000; Wichnick et al., 2010) indicated similar increases in the number of scripted responses. High levels of unscripted or novel responses or interactions have been reported in previous studies, whereas the number of unscripted interactions for taught topics in this study was low. A possible reason for this may be differences in the variables measured and in the intervention itself. The studies by Stevenson et al. (2000) and Wichnick et al. (2010) assessed treatment effect in natural conversations, thus increasing the probability of unscripted responses, whereas treatment effect in this study was assessed via the iPad, making it impossible to provide unscripted responses in intervention probes. Further, only taught scripts (not paraphrased scripts) were used for generalisation probes in the current study, which may also have contributed to the number of scripted responses. In retrospect, paraphrased scripts could have been used to assess generalisation to taught topics in natural conversations. In addition, for a response to be considered unscripted in the current study, new or different information needed to be provided; changes of wording or omissions were not considered unscripted responses. It should be noted, however, that there was a large number of appropriate responses to untaught topics in this study, and these unscripted responses indicate that Kenny had learnt to provide novel responses towards the end of the study.
There are advantages to touch-screen devices like iPads and iPods in the delivery of teaching programs in classroom settings. They are more affordable than traditional types of assistive technology and have many built-in features that may facilitate learning (Kagohara et al., 2013). There is also evidence that individuals with developmental disabilities do not find it difficult to operate touch-screen devices and may prefer them to more traditional options. Although there are advantages to the use of these devices, there remains a need for more research to determine whether it is the device or the app that facilitates the acquisition of skills (Stephenson & Limbrick, 2015). The intervention implemented in this study did not require close teacher supervision, which is certainly an advantage of this particular app: once the scripts and prompts were programmed, the student was able to activate the app and work through the training process independently. However, it should be noted that not all apps provide corrective feedback or prompts; some rely on monitoring by the teacher.
Limitations
There are a number of limitations to this pilot study, which may inform future research. It is not possible to attribute the effects of this intervention solely to the iPad, and the relative efficacy of delivery of the script by a teacher is unknown. The intervention in this study primarily taught on-topic responses to questions or requests for information; other elements of conversation, such as initiating, repairing, asking questions, and topic shifts, were not addressed. There is evidence that an obligatory turn is more likely to be fulfilled than a non-obligatory one (Davis, Reichle, Southard, & Johnston, 1998; Edmister & Wegner, 2015; Santos & Kraft, 1997); therefore, the focus of this pilot study was on obligatory responses to questions, as these are likely to be easier to teach.
Further Research
In this study, we used scripts presented solely on an iPad. It would be worthwhile to conduct further research comparing script training delivered on an iPad with scripts taught by a partner. Based on the promising results of this study, there is scope to expand the research into the role of iPads and other tablet devices in the training of more complex conversation skills, such as non-obligatory interactions. Although the script was automated during the intervention sessions and the researcher did not provide any personal interaction during the intervention phase, the effect of the presence of another person cannot be discounted; thus, it would be appropriate to examine the use of the app without the presence of a supervisor. Finally, alternative experimental designs should be considered in studies of this nature, given the issues with experimental control experienced in this study.
Conclusion
This appears to be the first study to have used the iPad to teach conversational skills. Although the results demonstrate the potential of the iPad and the Conversation Coach app in a classroom setting, further research is needed to determine whether individuals with a different profile from Kenny's show similar gains in on-topic responses. As with any emergent technology where research is limited, teachers should carefully monitor progress in classroom-based applications.
Acknowledgements
Copies of the Conversation Coach software were provided free of charge to the researchers by the developer, Silver Lining Multimedia. There were no other conflicts of interest.