
Team-based clinical simulation in radiation medicine: value to attitudes and perceptions of interprofessional collaboration

Published online by Cambridge University Press:  10 March 2015

Caitlin Gillan*
Affiliation:
Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Canada Department of Radiation Oncology, University of Toronto, Toronto, Canada
Meredith Giuliani
Affiliation:
Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Canada Department of Radiation Oncology, University of Toronto, Toronto, Canada
Olive Wong
Affiliation:
Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Canada
Nicole Harnett
Affiliation:
Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Canada Department of Radiation Oncology, University of Toronto, Toronto, Canada
Emily Milne
Affiliation:
Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Canada
Douglas Moseley
Affiliation:
Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Canada Department of Radiation Oncology, University of Toronto, Toronto, Canada Bahen Chant Radiation Treatment Centre, Stronach Regional Cancer Centre, Newmarket, Canada
Robert Thompson
Affiliation:
Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Canada Department of Radiation Oncology, University of Toronto, Toronto, Canada
Pamela Catton
Affiliation:
Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Canada Department of Radiation Oncology, University of Toronto, Toronto, Canada
Jean-Pierre Bissonnette
Affiliation:
Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Canada Department of Radiation Oncology, University of Toronto, Toronto, Canada
*
Correspondence to: Caitlin Gillan, 2B Radiation Therapy, Princess Margaret Cancer Centre, 610 University Ave, Toronto, Ontario Canada, M5G 2M9. E-mail: Caitlin.gillan@rmp.uhn.on.ca

Abstract

Introduction

Simulation has been effective for changing attitudes towards team-based competencies in many areas, but its role in teaching interprofessional collaboration (IPC) in radiation medicine (RM) is unknown. This study reports on the feasibility and IPC outcomes of a team-based simulation event, the 'Radiation Medicine Simulation in Learning Interprofessional Collaborative Experience' (RM SLICE).

Methods

Radiation therapy (RTT), medical physics (MP) and radiation oncology (RO) trainees in a single academic department were eligible. Scheduled closure of a modern RM clinic allowed rotation of five high-fidelity cases in three 105-minute timeslots. A pre/post-survey design evaluated learner satisfaction and interprofessional perceptions. Scales included the Readiness for Interprofessional Learning Scale (RIPLS), the UWE Entry Level Interprofessional Questionnaire (UWEIQ), the Trainee Test of Team Dynamics (TTTD) and the Collaborative Behaviours Scale (CBS).

Results

Twenty-one trainees participated; six ROs (28·57%), six MPs (28·57%) and nine RTTs (42·86%). All cases were conducted, resolved and debriefed within the allotted time. Twenty-one complete sets (100%) of evaluations were returned. Participants reported limited interaction with other professional groups before RM SLICE. Perceptions of team functioning and value of team interaction in ‘establishing or improving the care plan’ were high for all cases, averaging 8·1/10 and 8·9/10. Average CBS scores were 70·4, 71·9 and 69·5, for the three cases, scores increasing between the first and second case for 13/21 (61·9%) participants. RIPLS and UWEIQ scores reflected positive perceptions both pre- and post-event, averaging 83·5 and 85·2 (RIPLS) and 60·6 and 55·7 (UWEIQ), respectively. For all professions for both scales, the average change in score reflected improved IP perceptions, with agreement between scales for 15/20 (75·0%) participants. Overall, perception of IPC averaged 9·14/10, as did the importance of holding such an event annually.

Conclusions

Team-based simulation is feasible in RM and appears to facilitate interprofessional competency-building in high-acuity clinical situations, reflecting positive perceptions of IPC.

Type
Original Articles
Copyright
© Cambridge University Press 2015 

Introduction

Interprofessional collaboration (IPC) is increasingly being recognized in medical trainee competency profiles, yet it is difficult to teach in the clinical environment.[1, 2] Teaching IPC is particularly challenging in radiation medicine (RM).[3] While RM is inherently interprofessional, IPC in RM is often disparate in time and location, with multiple decisions and handoffs occurring virtually through asynchronous electronic communications and discrete but interdependent tasks. The key professions involved in the delivery of RM care are: radiation oncologists (RO), who determine the need for, prescribe and monitor radiation treatment; radiation therapists (RTT), who plan radiation dose distributions and deliver treatment; and medical physicists (MP), who develop technology and ensure its safe and accurate use.[4] The importance of learning and practising collaboratively has been acknowledged,[4–7] but strategies for fostering IPC in RM are lacking, as are objective measures of the impact on professionals, practice and patients.

Simulation has been effective for changing attitudes and behaviours towards team-based competencies in surgery, obstetrics and emergency medicine.[8–11] While the role of simulation in teaching IPC in RM is largely unknown, its potential to provide learning opportunities, where direct engagement between professionals can be infrequent and ad hoc, is appealing.[3] By focusing on active participation of learners and interprofessional team-building, case-based simulation can harness the common goal of efficient and safe patient-centred care to address interprofessional competencies.[12, 13]

In one of the few published reports on interprofessional education (IPE) in RM, interactive IPE exercises were found to lead to common clinical terminology, better consideration of professional perspectives and knowledge, and enhanced communication.[3] These benefits constitute lower-level outcomes in Barr's model of interprofessional learner outcomes.[3, 14] Higher-order outcomes not yet assessed in RM include behavioural and practice changes and, ultimately, improved patient outcomes. With recent media attention[15, 16] and study of RM practice and medical error,[17, 18] the value of collaboration for safe and quality care of patients cannot be overemphasized. Evaluation must be undertaken using appropriate and comprehensive tools to assess the value to the learner, to practice and ultimately to the patient.[19]

The purpose of this study was to report on learner satisfaction and on changes in interprofessional perceptions using a high-fidelity, interprofessional team-based simulation event, nicknamed ‘Radiation Medicine Simulation in Learning Interprofessional Collaborative Experience’ (RM SLICE).

Methods

Study population and recruitment

Trainees enrolled in RM programmes within a single academic department were eligible to participate. These represented RTT students and MP and RO residents, placed at affiliated clinical sites. All RO and MP trainees were considered for inclusion, regardless of their seniority within their respective programmes. Due to restrictions in their clinical curriculum, only RTTs in their final (third) year were invited to participate.

All eligible trainees (n=67) were approached by email 2 weeks before the RM SLICE event. Those interested in participating were registered. Written consent was sought at the time of the event. Trainees were assigned to one of four interprofessional groups, with attention paid to a balance of profession, clinical site affiliation and year of study. Approval from the local Research Ethics Board was sought before recruitment.

Event planning and delivery

A scheduled clinic closure of a modern RM department provided a high-fidelity environment in which to run simulations. An event schedule was developed to accommodate three 105-minute timeslots, rotating teams through three of five clinical case scenarios. Additional time was allotted for orientation, breaks, evaluation and debriefs.

The interprofessional research team, consisting of RTT, RO and MP educators, developed case scenarios that reflected unique situations that would ideally require engagement of all three professions. Two of these incorporated standardized patients, who are healthy persons trained to portray patients' physical and emotional characteristics according to a planned scenario. For the purposes of this event, they were hired from the affiliated university's Standardized Patient Programme.

Each group was assigned a group facilitator, who was either a staff RO or staff RTT from an affiliated clinical site. Simulations were run by the case facilitator who developed the case. At the start of each case, a summary of relevant background information was presented for group discussion, and teams would engage, as necessary, with the case’s patient, technological infrastructure and information. Upon resolution of the case, a debrief would be conducted with the case facilitator, and research evaluations (not performance evaluations) would be completed.

Evaluation

A pre/post-survey design was used for evaluation, with participant pre-event, post-case and post-event surveys. Items included demographic, feasibility and satisfaction domains, and existing validated interprofessional scales. All evaluations were paper-based and anonymous, with a simple coding system used to match submissions.

Four complementary IPE instruments were used for evaluation. These were selected for validity, reliability and the ability to assess the lower levels of Barr/Kirkpatrick's hierarchy of learner outcomes (satisfaction with IPE and change in perception of interprofessionalism):[14, 19]

The Readiness for Interprofessional Learning Scale (RIPLS) and the UWE Entry Level Interprofessional Questionnaire (UWEIQ) focused on the perceived value of IPE and IPC, and were implemented as part of the pre- and post-event surveys. RIPLS is a 19-item, four-subscale instrument assessed on a five-point Likert scale, with a higher score reflecting more positive attitudes and perceptions relating to teamwork, professional identity, and roles and responsibilities (Cronbach's α=0·87[24]).[20] The subscales address teamwork and collaboration, negative professional identity, positive professional identity, and roles and responsibilities. Published literature has shown a score of 71·5 to be average for baccalaureate-level health professional trainees. Following the published discussion of the potential value of reverse scoring two items ('I am not sure what my professional role will be/is' and 'I have to acquire much more knowledge and skill than other trainees in my own department'),[24] this modification was made to RIPLS scoring in this investigation. UWEIQ is a 27-item instrument with three subscales assessed on a four- or five-point Likert scale (Cronbach's α=0·71).[21] Its Likert scale runs in the opposite direction to that of RIPLS, with lower scores reflecting better communication and teamwork, IP learning and IP interactions, as per the three subscales. Scores of 9–20, 9–22 and 9–22 on the respective subscales indicate positive attitudes.
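
For readers implementing similar scoring, the following minimal Python sketch illustrates how a RIPLS-style total with two reverse-scored items might be computed. The item positions, function name and example values are illustrative assumptions for this sketch, not details taken from the published instrument.

```python
# Illustrative RIPLS-style scoring sketch (item indices are hypothetical,
# not taken from the published instrument).
REVERSE_SCORED = {17, 18}  # assumed positions of the two reverse-scored items
LIKERT_MAX = 5             # five-point Likert scale: 1 = strongly disagree ... 5 = strongly agree

def ripls_total(responses: list[int]) -> int:
    """Sum 19 Likert responses, reversing the two reverse-scored items."""
    if len(responses) != 19:
        raise ValueError("RIPLS has 19 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Reverse scoring maps 1<->5, 2<->4, etc. for the flagged items.
        total += (LIKERT_MAX + 1 - r) if i in REVERSE_SCORED else r
    return total  # range 19-95; higher = more positive interprofessional perceptions

# Example: a respondent answering 4 to every item contributes 4 on 17 items
# and 2 on the two reverse-scored items, giving 17*4 + 2*2 = 72.
print(ripls_total([4] * 19))  # 72
```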

The Trainee Test of Team Dynamics (TTTD) and the Collaborative Behaviours Scale (CBS) measured perceptions of team functioning and impact on case outcome, based on a given situation. These were incorporated into the post-case surveys. The TTTD combines qualitative and quantitative measures, assessed against gold-standard responses.[22] For the purposes of this investigation, only the quantitative items, assessed on a ten-point Likert scale, were employed. As a whole, the instrument is designed to assess trainees' ability to recognize and respond to team behaviours and to meet a patient's needs. The CBS is a 20-item, four-point Likert scale instrument (Cronbach's α=0·76–0·97[25, 26]), used to assess collaborative behaviours with respect to an identified situation. A higher total score reflects a more collaborative relationship, with a cut-off score of 50 below which a lack of collaboration is considered to exist.[23, 27]
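
A comparable sketch of the CBS total and cut-off classification is shown below, assuming 20 items coded 1–4 and the cut-off of 50 described above; the function names are illustrative only.

```python
# Illustrative CBS-style scoring sketch (function names are hypothetical).
CBS_CUTOFF = 50  # scores below this are taken to indicate a lack of collaboration

def cbs_total(responses: list[int]) -> int:
    """Sum 20 items scored on a four-point Likert scale (total range 20-80)."""
    if len(responses) != 20 or not all(1 <= r <= 4 for r in responses):
        raise ValueError("CBS has 20 items scored 1-4")
    return sum(responses)

def is_collaborative(responses: list[int]) -> bool:
    """A total at or above the cut-off is classified as collaborative."""
    return cbs_total(responses) >= CBS_CUTOFF

# The average post-case scores reported in this study (69.5-71.9) sit well
# above the cut-off, consistent with collaborative behaviour in all cases.
```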

Wording of some items within the scales was modified to suit the RM environment and context of the event.

Analysis

Data were reported using descriptive statistics. Feasibility outcome measures were defined a priori. These were that the event be run with 16–28 trainees (four to seven per group), that all scenarios be completed and resolved within the 105-minute allotted time, that all evaluations be completed, and that participants be satisfied with the event. Satisfaction was defined as an average score of >7/10 on evaluation items relating to the perceived value of the event.
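
As a minimal sketch of this a priori check, the thresholds below are taken from the criteria just listed; the function signature and example inputs are illustrative assumptions.

```python
# Illustrative feasibility check using the a priori criteria described above.
def event_feasible(n_trainees: int, case_minutes: list[float],
                   evaluations_returned: int, evaluations_expected: int,
                   value_ratings: list[float]) -> bool:
    enough_trainees = 16 <= n_trainees <= 28                    # four to seven per group
    on_time = all(t <= 105 for t in case_minutes)               # each case resolved within its slot
    complete = evaluations_returned == evaluations_expected      # all evaluations returned
    satisfied = sum(value_ratings) / len(value_ratings) > 7     # average perceived value >7/10
    return enough_trainees and on_time and complete and satisfied

# Example using figures of the kind reported here: 21 trainees, all cases
# within 105 minutes (durations hypothetical), 21/21 evaluations, ratings ~9/10.
print(event_feasible(21, [105, 100, 95], 21, 21, [9.14, 8.86, 9.0]))  # True
```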

Results

Participation and feasibility

Twenty-one trainees participated, with six ROs (28·57%), six MPs (28·57%) and nine RTTs (42·86%), representing five clinical sites. All RTTs were in the final year of their undergraduate programme, and four MPs (66·7%) were in their final year of residency. Of the ROs, three (50%) were in their third year of residency, with the others being more junior. Fifteen trainees were male (71·4%) and six were female (28·6%), a slightly higher proportion of male trainees than in the broader trainee population, primarily in the RTT group.

All cases were conducted within the 105-minute allotted time. In each instance, a decision or resolution was achieved by the group within that time, and a debrief was held with the facilitators. In the cases where a standardized patient was involved, the patient participated in the debrief. Twenty-one pre-event and post-event evaluations were returned (100%), as well as 21 post-case evaluations (100%) for each case. Insight regarding the development of the event, including resource requirements and programmatic outputs, is reported elsewhere.[28]

Pre-event perceptions

Participants perceived themselves as having limited interaction with trainees and staff of the other professional groups before RM SLICE. Half of the ROs (3/6) and MPs (3/6) believed they interacted frequently with RTT staff, and there was some interaction with RO staff and trainees reported by MPs and RTTs (see Table 1). Reported instances of frequent interaction were lowest with RTT trainees and with MPs (both trainees and staff).

Table 1 Pre-event participant perception of level of interaction with staff and trainees of other disciplines (%(n) of those perceiving they have ‘frequent’ interactions (≥4/5) with that group)

Abbreviations: RO, radiation oncologist; RTT, radiation therapist; MP, medical physicists.

In ranking the factors that contributed to the desire to participate in the event, the most important factor was the ‘opportunity to engage with other radiation medicine professions’, followed by ‘not wanting to miss an available learning opportunity’, and ‘opportunity to be exposed to novel clinical scenarios’. The other six options in this survey item related to value to one’s curriculum vitae, avoidance of regular clinical responsibilities, curiosity, seeking exposure to specific scenarios, and opportunity to demonstrate competence to those in a position of authority.

Case perceptions and evaluation

Tables 2 and 3 summarize elements from the TTTD. The perception of team functioning averaged 8·1/10 across all participants for all cases (Table 2), and was lowest for Case D (7·7/10, n=15), which represented the most emergent situation, involving a radioactive source stuck in the cervix of a woman with endometrial cancer (Table 2a). When sorted by group over the course of the day (Table 2b), it is noted that for all four groups, the highest average team functioning score was achieved in the second case of the day, independent of the simulated scenario for each group.

Table 2 Average TTTD perceived team functioning by case and by group over the day, as a rating on a scale of 1 (highly ineffective) to 10 (highly effective)

Abbreviations: TTTD, Trainee Test of Team Dynamics; 4DCT, four-dimensional computed tomography; CBCT, cone-beam computed tomography; CMU, clinical mark-up.

Table 3 Average TTTD perceived value of team interaction by case and by group over the day, as a ranking on a scale of 1 (not at all valuable) to 10 (highly valuable)

Abbreviations: TTTD, Trainee Test of Team Dynamics; 4DCT, four-dimensional computed tomography; CBCT, cone-beam computed tomography; CMU, clinical mark-up.

The value of the team interaction in ‘establishing or improving the care plan’ was high across all participants for all cases, averaging 8·9/10 (Table 3). The perception was highest in Cases B and C (Table 3a), which represented the clinical delineation and calculation of a treatment plan for an infant patient and for a scalp treatment, respectively. There was no distinct trend observed between groups or times of day (Table 3b).

For the CBS scale, administered after each case (Table 4), the average scores were 70·4, 71·9 and 69·5, for the first, second and third cases of the day, respectively. Scores increased between the first and second case of the day for 13/21 (61·9%) of participants. As observed in the perceptions of team functioning from the TTTD, the highest scores tended to be observed in the second case of the day.

Table 4 Average CBS scores by case and by group over the day

Abbreviations: CBS, Collaborative Behaviours Scale; 4DCT, four-dimensional computed tomography; CBCT, cone-beam computed tomography; CMU, clinical mark-up.

Post-event perceptions

Satisfaction

Twenty-one post-event surveys were collected from participants (100%). When asked about the perceived value of the event to a number of areas, Interprofessional Communication scored highest for all professional groups (average 9·14/10). All other areas also scored highly, ranging from 8·86/10 to 9·14/10 for Clinical Knowledge, Clinical Decision-Making, Clinical Skills, and Exposure to other trainees within my program. Individual feedback related to the value of 'mak[ing] clinical decisions with different professions' (RTT), 'problem-solving… [and] finding limitations in my professional role' (RTT), and being immersed in the simulated environment, which provided unique opportunity for 'emergency situations' (RTT, RO) that reflected 'real life' (MP, RO) and were of 'high fidelity' (MP, RO).

The importance of holding such an event on an annual basis was rated as 9·14/10.

Change in perceptions

Average scores for RIPLS and UWEIQ scales are reported in Table 5. RIPLS scores were high both pre- and post-event, averaging 83·5 and 85·2, respectively. Similarly, scores were low for UWEIQ, averaging 60·6 and 55·7, respectively. For all professional groups for both scales, the average change in score reflected improved IP perceptions.

Table 5 Average pre- and post-event scores on initial RIPLS and UWE scales, and changes in score for subsequent administrations

Notes:

a Subscales: RIPLS – teamwork and collaboration, negative professional identity, positive professional identity, roles and responsibilities; UWEIQ – communication and teamwork, interprofessional learning, interprofessional interactions.

b n=5 (missing data omitted).

c n=20 (missing data omitted).

Abbreviations: RIPLS, Readiness for Interprofessional Learning Scale; UWEIQ, UWE Entry Level Interprofessional Questionnaire; RTT, radiation therapist; RO, radiation oncologist; MP, medical physicists.

The change in RIPLS scores between the start and end of RM SLICE was positive (reflecting improved IP perceptions) for 14/21 (66·7%) participants, while the change in UWEIQ score was negative (also reflecting improved IP perceptions) for 13/20 (65·0%) participants. There was agreement between scales for 15/20 (75·0%) participants, with 12 of the 20 reflecting improved perceptions (an increase in RIPLS score and a decrease in UWEIQ score). For the remaining participants, scores either reflected disagreement between scales (3/20, 15·0%) or no change in RIPLS (2/20, 10·0%).
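
A minimal sketch of this per-participant agreement check is given below; it assumes only that improvement is an increase on RIPLS but a decrease on UWEIQ, as the two scales run in opposite directions. The function and variable names are illustrative.

```python
# Illustrative classification of per-participant agreement between RIPLS and
# UWEIQ change scores (names and example values are hypothetical).
def classify_change(ripls_pre: float, ripls_post: float,
                    uweiq_pre: float, uweiq_post: float) -> str:
    if ripls_post == ripls_pre:
        return "no change on RIPLS"
    ripls_improved = ripls_post > ripls_pre   # higher RIPLS = improved perceptions
    uweiq_improved = uweiq_post < uweiq_pre   # lower UWEIQ = improved perceptions
    return "agreement" if ripls_improved == uweiq_improved else "disagreement"

# Example: RIPLS 83 -> 86 and UWEIQ 60 -> 55 both reflect improved
# interprofessional perceptions, so the scales agree.
print(classify_change(83, 86, 60, 55))  # agreement
```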

Discussion

RM SLICE was the first demonstration of a feasible experience that allowed learners to navigate the interprofessional interactions, tasks, and decisions necessary to deliver quality RM care, through exploration of a series of high-fidelity simulations. This type of team simulation event has never been reported with RM trainees, and satisfaction and perception data suggest that participants found it valuable. Evaluation of learner outcomes can contribute to the future evolution of this simulation initiative, thus informing the way collaboration and teamwork are taught in RM.

It was noted in this investigation that instances of pre-certification IPE and IPC are lacking in the clinical environment, especially among the trainees themselves. The opportunity for trainees to engage with one another was therefore seen as important, and RM SLICE was recognized as offering it. The high-fidelity scenarios developed for the event were valued for their conduciveness to collaborative learning and for exposure to situations not often experienced in the course of training. Consistently high scores across professional groups for all elements of RM SLICE suggest that learners were satisfied with the experience, and thus that Level I of Barr/Kirkpatrick's hierarchy of learner outcomes was achieved.[14]

The high pre-event scores observed on validated IPE scales suggest that, despite infrequent IP interactions, participants held positive attitudes and perceptions relating to IPC. While not reported for residency programmes, previously published average 'research-intensive university' (e.g., medicine, nursing, occupational therapy) student scores for RIPLS of 70·2 (out of a possible 95) are lower than the values reported here (83·5 pre-event and 85·2 post-event). Similarly, scores for most aspects of the UWEIQ in this investigation fell within the range reflecting positive attitudes towards IPC. The exception was the Interprofessional Interactions subscale, where averages for all professional groups fell within the neutral range in both pre- and post-event administrations. A study by Ruebling et al.,[29] evaluating a semester-long introductory IPE course among undergraduate health professions students, reported only on the second UWEIQ subscale (Interprofessional Learning). Those scores of 16·8 (pre-course) and 15·9 (post-course) were slightly less indicative of positive attitudes than the RM SLICE scores of 15·1 and 13·9, and reflected a change of similar magnitude despite RM SLICE being only a single-day intervention.

The agreement observed between RIPLS and UWEIQ was also noted by Ruebling et al., who concluded that the 'UWEIP-Learning Scale had a strong, positive correlation with the RIPLS. This unintended outcome provides evidence for the convergent validity of the two instruments'.[29] Scores here suggest positive attitudes and perceptions regarding IPC between RM professionals in their clinical environment, and the small increases seen after RM SLICE should be investigated further for their value in attaining Barr/Kirkpatrick's Level II learner outcomes. Constructs evaluated in the UWEIQ Interprofessional Interactions subscale, which include professional stereotypes and perceptions of professional hierarchies and which exhibited slightly poorer scores here, should also be explored further within this population.

Case-by-case evaluations in this investigation suggested that the cases were well balanced and that participants recognized both the exhibition and the benefit of IPC behaviours in each. Based on the CBS, the items included from the TTTD, and individual comments, evaluations reflected a pooling of professional resources to determine and execute a course of action in a given simulation scenario. Use of the CBS in the literature among practising nurse practitioners and physician assistants found average scores of 68·3 and 70·7, respectively,[30] similar to the results reported here. The observation of a slight trend towards the second case being the most collaboratively performed of the day warrants further investigation to determine any significance. Anecdotal comments about the length of the event might suggest that participants were fatigued by the third case, which may have affected collaboration and perceptions of teamwork.

Comments noting the novelty of these scenarios further evidence the complementary role of team-based simulation in clinical training programmes, in that it provided access to clinical situations not experienced in the course of training. Exploring collaboration in practice can both demonstrate the interdependency of individual professional knowledge and skills and encourage modeling of effective clinical behaviours. The importance of IPC in RM practice has been touted in the literature,[13, 27, 31] but never explored in formal, dedicated pre-licensure training initiatives. Given the success of RM SLICE in achieving Level I and II learner outcomes, follow-up evaluations can provide subjective evidence of higher-level outcomes such as changes in IP behaviours and practice. While difficult to assess objectively, future research involving RM SLICE will also aim to evaluate the benefits of IPC to the patient undergoing radiation treatment.

One of the major limitations of this study was the research intensity of the event and its reliance on a clinic closure. While feasible in the context of an academic research investigation, an analysis of what might be required to run such an event in a non-research context is discussed by Giuliani et al.[28] Other limitations were the broad range of training years among participants, and thus the variability in their clinical experience. Future efforts will attempt to determine the most appropriate time in training to hold such an event.

Conclusion

Team-based simulation is feasible in RM and appears to facilitate interprofessional competency-building in high-acuity clinical situations, thereby increasing trainee readiness for interprofessional practice. RM trainees exhibit positive attitudes and perceptions of IPC, and value the opportunity to further explore these through collaborative simulation scenarios. RM SLICE achieved Level I and Level II learner outcomes as shown by feasibility assessment and existing IP instruments. Further investigation is warranted to determine any impact on professional behaviours or practice.

Acknowledgements

The authors appreciate the time and expertise of the radiation therapists, medical physicists, radiation oncologists, and information technology personnel who volunteered their time to facilitate this event in June 2012. They would also like to thank the Stronach Regional Cancer Centre at the Southlake Regional Health Centre for offering its radiation therapy department for the day.

References

1. Russell, L, Nyhof-Young, J, Abosh, B, Robinson, S. An exploratory analysis of an interprofessional learning environment in two hospital clinical teaching units. J Interprof Care 2006; 20 (1): 29–39.
2. Oandasan, I, Reeves, S. Key elements of interprofessional education. Part 2: factors, processes and outcomes. J Interprof Care 2005; 19 (suppl 1): 39–48.
3. Gillan, C, Wiljer, D, Harnett, N, Briggs, K, Catton, P. Changing stress while stressing change: the role of interprofessional education in mediating stress in the introduction of a transformative technology. J Interprof Care 2010; 24 (6): 710–721.
4. White, E, Kane, G. Radiation medicine practice in the image-guided radiation therapy era: new roles and new opportunities. Semin Radiat Oncol 2007; 17 (4): 298–305.
5. Keller, H, Jaffray, D A, Rosewall, T, White, E. Efficient on-line setup correction strategies using plan-intent functions. Med Phys 2006; 33 (5): 1388–1397.
6. Bell, L J, Oliver, L, Vial, P et al. Implementation of an image-guided radiation therapy program: lessons learnt and future challenges. J Med Imaging Radiat Oncol 2010; 54 (1): 82–89.
7. Grau, C, Muren, L P, Hoyer, M, Lindegaard, J, Overgaard, J. Image-guided adaptive radiotherapy – integration of biology and technology to improve clinical outcome. Acta Oncol 2008; 47 (7): 1182–1185.
8. Paige, J, Kozmenko, V, Morgan, B et al. From the flight deck to the operating room: an initial pilot study of the feasibility and potential impact of true interdisciplinary team training using high-fidelity simulation. J Surg Educ 2007; 64 (6): 369–377.
9. Paige, J T, Kozmenko, V, Yang, T et al. Attitudinal changes resulting from repetitive training of operating room personnel using of high-fidelity simulation at the point of care. Am Surg 2009; 75 (7): 584–590; discussion 590–591.
10. Paige, J T, Kozmenko, V, Yang, T et al. High-fidelity, simulation-based, interdisciplinary operating room team training at the point of care. Surgery 2009; 145 (2): 138–146.
11. Robertson, B, Schumacher, L, Gosman, G, Kanfer, R, Kelley, M, DeVita, M. Simulation-based crisis team training for multidisciplinary obstetric providers. Simul Healthc 2009; 4 (2): 77–83.
12. Ziv, A, Ben-David, S, Ziv, M. Simulation based medical education: an opportunity to learn from errors. Med Teach 2005; 27 (3): 193–199.
13. Ziv, A, Wolpe, P R, Small, S D, Glick, S. Simulation-based medical education: an ethical imperative. Simul Healthc 2006; 1 (4): 252–256.
14. Barr, H. Evaluation, evidence and effectiveness. J Interprof Care 2005; 19 (6): 535–536.
15. Bogdanich, W. Radiation offers new cures, and ways to do harm. The New York Times, 23rd January 2010. http://www.nytimes.com/2010/01/24/health/24radiation.html?pagewanted=all&_r=0
16. Bogdanich, W, Rebelo, K. A pinpoint beam strays invisibly, harming instead of healing. The New York Times, 29th December 2010. http://www.nytimes.com/2010/12/29/health/29radiation.html?pagewanted=all
17. Pawlicki, T, Dunscombe, P B, Mundt, A J, Scalliet, P (eds). Quality and Safety in Radiotherapy. Boca Raton, FL: Taylor & Francis; 2011. 621 pp.
18. Bissonnette, J P, Medlam, G. Trend analysis of radiation therapy incidents over seven years. Radiother Oncol 2010; 96 (1): 139–144.
19. Gillan, C, Lovrics, E, Halpern, E, Wiljer, D, Harnett, N. The evaluation of learner outcomes in interprofessional continuing education: a literature review and an analysis of survey instruments. Med Teach 2011; 33 (9): e461–e470.
20. Parsell, G, Bligh, J. The development of a questionnaire to assess the readiness of health care students for interprofessional learning (RIPLS). Med Educ 1999; 33 (2): 95–100.
21. Pollard, K C, Miers, M E, Gilchrist, M. Collaborative learning for collaborative working? Initial findings from a longitudinal study of health and social care students. Health Soc Care Community 2004; 12 (4): 346–358.
22. Hyer, K, Skinner, J, Kane, R et al. Using scripted video to assess interdisciplinary team effectiveness training outcomes. Gerontol Geriatr Educ 2003; 24 (2): 75–91.
23. Stichler, J F. Development and Psychometric Testing of a Collaborative Behavior Scale. San Diego, CA: University of San Diego, 1989.
24. King, S, Greidanus, E, Major, R et al. A cross-institutional examination of readiness for interprofessional learning. J Interprof Care 2012; 26 (2): 108–114.
25. King, L, Lee, J L. Perceptions of collaborative practice between Navy nurses and physicians in the ICU setting. Am J Crit Care 1994; 3 (5): 331–336.
26. Baggs, J, Ryan, S. ICU nurse-physician collaboration & nursing satisfaction. Nurs Econ 1990; 8 (6): 386–392.
27. Abdelkader, R, Al-Hussami, M, Al barmawi, M, Saleh, A, Shath, T. Perception of academic nursing staff toward shared governance. J Nurs Educ Pract 2012; 2 (3): 46–53.
28. Giuliani, M, Gillan, C, Wong, O et al. Evaluation of high-fidelity simulation training in radiation oncology using an outcomes logic model. Radiat Oncol 2014; 9: 189–196.
29. Ruebling, I, Pole, D, Breitbach, A P et al. A comparison of student attitudes and perceptions before and after an introductory interprofessional education experience. J Interprof Care 2013; 1: 23–27. [Epub 2013/09/05].
30. Beisel, M. What's happening?: Alaska nurse practitioners' and physician assistants' perceptions of the collaboration process. J Am Acad Nurse Prac 2007; 10 (11): 509–514.
31. Dawson, L A, Jaffray, D A. Advances in image-guided radiation therapy. J Clin Oncol 2007; 25 (8): 938–946.