
The role of radiotherapy patients in provision of student interpersonal skills feedback

Published online by Cambridge University Press:  09 July 2013

P. Bridge*
Affiliation:
School of Clinical Sciences, Queensland University of Technology, Brisbane, QLD, Australia
C. Pirihi
Affiliation:
Premion, John Flynn Private Hospital, Tugun, QLD, Australia
M. Carmichael
Affiliation:
School of Clinical Sciences, Queensland University of Technology, Brisbane, QLD, Australia
*
Correspondence to: Pete Bridge, Radiation Therapy Course Coordinator, Queensland University of Technology, Brisbane, QLD 4001, Australia. Tel: 0061731382273. E-mail: pete.bridge@qut.edu.au

Abstract

Background

At Queensland University of Technology, student radiation therapists receive regular feedback from clinical staff relating to clinical interpersonal skills. Although this is of great value, there is anecdotal evidence that students communicate differently with patients when under observation.

Purpose

The aim of this pilot was to counter this perceived observer effect by allowing patients to provide students with additional feedback.

Materials and methods

Radiotherapy patients from two departments were provided with anonymous feedback forms relating to aspects of student interpersonal skills. Clinical assessors, mentors and students were also provided with feedback forms, including questions about the role of patient feedback. Patient perceptions of student performance were correlated with staff feedback and assessment scores.

Results

Results indicated that the feedback was valued by both students and patients. Students reported that the additional dimension focused them on communication, set goals for development and increased motivation. These changes derived from both feedback and study participation, suggesting that the questionnaires could be a useful teaching tool. Patients scored more generously than mentors, although there was agreement in relative grading.

Conclusions

The anonymous questionnaire is a convenient and valuable method of gathering patient feedback on students. Future iterations will determine the optimum timing for this method of feedback.

Type
Original Articles
Copyright
Copyright © Cambridge University Press 2013 

Introduction

‘If a tree falls in a forest and no one is around to hear it, does it make a sound?’ This old philosophical question applies readily to unobserved student–patient interactions. Anecdotal evidence strongly suggests that students do communicate with patients when unobserved, and often in a more clinically useful manner. Patients can be much friendlier with an apparently uncommunicative student than expected; this is frequently evidenced when patients appear to be familiar with personal aspects of students that can only have emerged in conversation.

Radiotherapy students undertake clinical placements during which they ask and advise patients about side effects and psychosocial issues. Students frequently call patients from the waiting area and are inevitably the first point of contact for many patients. Important information can be gleaned as the patient is escorted into the treatment room, provided students have good interpersonal skills. From a training perspective, some aspects of these interpersonal skills have proved difficult to assess. One particular challenge is the observer effect, with anecdotal evidence from students suggesting that they are often happy to converse with patients until a senior staff member arrives, at which point they stop.

The obvious solution to this perceived gap is to engage the patient in feedback provision. Patients have successfully played a role in clinical education in disciplines including medicine,1–9 nursing,10,11 podiatry,12 mental health13 and inter-professional scenarios.14 A systematic review15 revealed that patients were mainly involved in course design, teaching and assessment of clinical skills. Gathering a consensus on what constitutes ‘patient involvement’ is fraught with controversy and inconsistent terminology; however, patients who contribute to health professional training are invariably trained or ‘expert’, commonly referred to as ‘educationally engaged’9 or ‘standardised’.3,13 A subsequent review of patient involvement16 developed a useful taxonomy of involvement and suggested that greater autonomy of patient input brings increased training requirements. However, untrained patients can contribute via anonymous questionnaires with considerable autonomy. Findings from a large-scale survey of patients after consultations with fifth year medical students17 concluded that unprepared patients could successfully provide feedback on professional skills. Despite this, several authors5,12,18,19 have identified a lack of evidence supporting patient involvement in assessment.
Some of the most promising work in this field used unprepared patients to provide feedback via a Likert-style questionnaire.7,8,20,21 A 2008 paper12 reported using a visual analogue scale to gather formative scores on podiatry students’ communication skills from unprepared patient volunteers. Interestingly, these results indicated that students’ communication skills were rated higher after these clinical placements than at the start. What these studies share is an emphasis on single consultations rather than the ongoing professional relationships that are common in radiotherapy.

There is little evidence of patient feedback provision to radiotherapy students, although there is some evidence that expert patients have been used in formal education.22 A potential advantage for students is that patients are ideally placed to provide feedback on unobserved interpersonal skills. Currently, students receive regular feedback and a grade from mentors on the basis of the mentors’ experience of working with them. Although this is of enormous value to professional development, some of this feedback is inevitably provided by proxy on behalf of the patient. Empowering patients to provide their views directly should yield a valuable source of additional evidence that students can use within their reflective portfolios.

Patient involvement in education is strongly encouraged by numerous sources, including the United Kingdom Postgraduate Medical Education and Training Board,23 and spans all categories identified by Towle et al.16 Clear benefits include the opportunity for multi-source feedback,8 which is necessary for facilitating rich reflection. It has also been suggested5 that this level of contact and input from patients can build stronger patient-centred professional attitudes in the developing student. In addition, evidence strongly indicates that patients relish opportunities to contribute to student training.16,17,24 Indeed, many studies reported patients feeling empowered when their feedback was sought.24–26

Aims

This pilot study aimed to determine the value of anonymous patient feedback for students, gain patient perspectives on the process and appraise the feasibility of data collection. The research questions underpinning the evaluation were as follows.

Primary

  • Can patients provide useful assessment of radiotherapy student interpersonal skills?

Secondary

  • To what extent does patient feedback match that of the clinical staff?

  • Do students find patient feedback to be of value?

  • Do patients wish to provide anonymous feedback on student interpersonal skills?

  • Do patients wish to provide more detailed feedback?

Methods

A short questionnaire was used to collect patient feedback on student interpersonal skills, using five-point Likert scale prompts about aspects of interpersonal skills as well as an overall rating. Two further questions canvassed opinion about the feedback process. Questionnaires were given to patients on day 2 of their treatment along with participant information, and students were discouraged from mentioning the study again. Patients were free to complete questionnaires if they wished, when they chose, and to post them in one of the sealed boxes around the department. On completion of placement, responses were collated, summarised and passed to the individual students for use in their reflective portfolios and action plans. Staff opinion of student skills was assessed with a similar questionnaire for correlation purposes. Student opinions of the initiative were gathered using an anonymous online survey. Although not specifically developed for this study, students also received formal feedback and grades via the Australian Universities Radiation Therapy Student Clinical Assessment Form (AURTSCAF).27

All patients attending for treatment who were over the age of 18 with no reported mental incapacity were eligible for participation; however, members of the treatment team excluded patients who were undergoing short-course palliative treatments, were severely ill or had psychological issues. During the project, seven students attended the pilot sites: three undertaking year 2 placements and four attending their first year 1 placement.

Ethical issues

Ethical approval to conduct this study was obtained from the local clinical research committee and Queensland University of Technology's Human Research Ethics Committee. Clearly, patients needed to be protected, and several measures were in place to ensure that patient anonymity was maintained at all times. All patients were provided with an assessment form and information sheet and advised that there was no obligation to participate. Patients completed the form in a tick-box format and posted it in a sealed box at reception. Thus, it was impossible to determine which patients had provided feedback, and neither handwriting nor specific incidents could be used to identify individuals. Students were protected from potentially upsetting comments: all patient feedback was collated, vetted and summarised by a clinical educator before aggregate findings were released to the student.

Statistics

As a mixed methods approach was taken, a range of analysis methods was applied to the relevant data. The underlying strategy used concurrent triangulation between quantitative findings and qualitative comments. Quantitative data from the various ‘scoring’ tools were first summarised using descriptive statistics. Given the ordinal nature of the data and the expected monotonic relationship, Spearman's rank order correlation coefficients were calculated using SPSS 21.0 (SPSS Inc., Chicago, IL, USA). The qualitative data were insufficient in volume to warrant sophisticated analysis software, so thematic analysis was performed manually, both to triangulate the quantitative findings and to formulate context-specific themes.
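The study performed this calculation in SPSS; for readers wishing to reproduce the approach, the same Spearman computation can be sketched in Python with SciPy. The ratings below are illustrative only, not the study's data:

```python
from scipy.stats import spearmanr

# Hypothetical five-point Likert ratings of five students by
# patients and by staff (1 = lowest, 5 = highest); illustrative
# values only, not taken from the study.
patient_scores = [5, 4, 5, 3, 4]
staff_scores = [4, 3, 5, 2, 4]

# Spearman's rho suits ordinal data with an expected monotonic
# (not necessarily linear) relationship; ties are rank-averaged.
rho, p_value = spearmanr(patient_scores, staff_scores)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")
```

Because rho compares ranks rather than raw scores, it can show agreement in relative grading even when one group scores consistently more generously, which is exactly the pattern reported here.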

Results

Patient response rates were determined by comparing the number of forms handed out with the number of completed responses. Overall, 276 forms were distributed to patients and 183 (66%) were returned. Individual response rates for different students ranged from 58·5% to 80·4%. Students were discouraged from prompting patients to complete questionnaires, and it is therefore not known why there was such a range of response rates. The staff response rate was similar (66·1%), with 39 out of 59 forms being returned. Participants were asked whether they thought this was a useful initiative, and Table 1 shows that over 90% of the 180 patients who responded to this question thought that patients had a role to play in feedback provision. It should be remembered, however, that there was only a 66% response rate, and it could reasonably be assumed that patients who thought they should not help provide feedback simply did not complete a form. Even so, this still equates to at least 60% of all patients, although it is possible that some patients forgot to submit a questionnaire or lost it. Interestingly, the response from clinical radiation therapists was similar, with nearly 63% of respondents seeing a role for patients.

Table 1 Perceptions of feedback

Patients were also asked whether they would have liked to provide more detailed feedback than a tick-box Likert-style questionnaire allows. Fewer patients answered this question, and there was a spread of responses (as seen in Table 1), with only 40% wishing to do so and 5·3% stating a categorical ‘Not at all’. These findings suggest that the anonymous survey style of feedback is appropriate and that more detailed feedback is probably not worth the inherent difficulties.

Patient feedback was gathered in relation to five aspects of interpersonal skills: introductions, friendliness, team working, professionalism and patient well-being assessment. Patients unsurprisingly scored the students higher than the staff did. Over 83% of patients stated that students had ‘Definitely’ met the criteria, whereas only 32% of staff agreed, with almost 50% of staff opting for ‘Mostly’. The implications of and reasons for this are discussed later. Aspects of communication of which patients potentially had more knowledge included students introducing themselves and being friendly to patients. There was a discrepancy here, with 99·5% of patients reporting the students to be friendly to them but only 81·1% of staff agreeing. The overall rating of student interpersonal skills also demonstrated the difference in scoring levels, as seen in Table 2.

Table 2 Overall ratings of student interpersonal skills

Despite the differences in grading, when individual student feedback responses were analysed, there was apparent underlying agreement in comparative ‘scores’. Grades on the AURTSCAF were compared with staff and patient responses to determine any differences in opinion. It should be noted, however, that staff response rates were low for each individual student, making this analysis less reliable. Likert responses were converted into grades by matching the response categories to the AURTSCAF five-point grading system, and these grades were then used to perform the correlation analysis (shown in Table 3).
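The conversion step described above can be illustrated with a minimal sketch. The category labels and their numeric mapping below are hypothetical, as the paper does not list the exact AURTSCAF grade definitions:

```python
# Hypothetical mapping from five-point Likert categories to
# AURTSCAF-style numeric grades (1-5); labels are illustrative,
# not the study's actual wording.
LIKERT_TO_GRADE = {
    "Not at all": 1,
    "Slightly": 2,
    "Neutral": 3,
    "Mostly": 4,
    "Definitely": 5,
}

def to_grades(responses):
    """Convert a list of Likert category labels to numeric grades."""
    return [LIKERT_TO_GRADE[r] for r in responses]

print(to_grades(["Definitely", "Mostly", "Definitely"]))  # [5, 4, 5]
```

Once both raters' responses are on the same five-point numeric scale, a rank correlation can be computed directly between the two grade lists.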

Table 3 Correlation analysis for the three scoring methods

Students were asked their opinions on the value of the patient feedback. There were five responses and four out of five students agreed that patients should help provide feedback. The other response was neutral. Interestingly, four out of five students agreed that participation had made them change how they communicated during the study. This suggests that the tool has a wider role to play as a teaching method, in addition to feedback provision. Most of the students provided expanded qualitative comments relating to a variety of themes including: Staff and patient feedback, Value of feedback and Impact of feedback, which will be discussed later.

Discussion

There are a number of potential sources of bias associated with this study and the results should be interpreted with appropriate caution. Although all patients and staff were supplied with feedback forms, the 66% response rate suggests participation bias. From a patient response perspective, this may manifest itself in a bias towards extreme responses. In addition, not all participants completed the forms in their entirety, skewing some responses.

Another issue evident in this study is the ‘ceiling’ effect, a tendency towards extreme positive responses,28 which is common in other patient feedback studies.29,30 Anonymity measures should ameliorate sources of response bias but cannot eliminate the ‘halo effect’,31 whereby patients desire to reward students with glowing feedback. This would certainly explain why patients’ responses were generally more favourable. Another possibility is that staff members expect higher standards. Some authors28 have suggested that mixed methods feedback and the harvesting of qualitative responses could provide a more balanced picture; however, gathering and analysing the required quantities of anonymous data would be prohibitive.

Finally, one of the most significant findings in relation to bias is the presence of an observer effect, with students making more of an effort with patients once they knew the patients could provide feedback. This was evident from the qualitative comments and triangulated well with the student survey. It could be argued that this stimulus is itself a valuable teaching tool that can improve student training. The finding that study participation enhanced interpersonal skills tallies well with the literature.12

On reflection, the choice of assessment categories could have been clearer, with ‘definitely’ being replaced with ‘consistently’ and the potentially emotive ‘awful’ being replaced with ‘bad’ or ‘very poor’. This may have helped respondents to distinguish between the categories more effectively. Future iterations of the study will incorporate these changes.

There was good correlation between the three scoring systems, although the low staff response rate does cast doubt on the strength of these findings. Patient feedback and AURTSCAF scores correlated reasonably well, which suggests that patient perceptions are valid measures of relative performance. Although the exceptionally high correlation between staff and patient scores is promising, this must be tempered by the low response rates and the limited number of AURTSCAF grades.

Although there is good correlation for individuals, the aggregate data demonstrate that the staff consistently scored the students lower. This could be because of the aforementioned bias or perhaps the difference in roles between staff and patients. Another confounding factor is that the radiation therapists were providing general feedback relating to interactions with all the patients; whereas the patient feedback is individual and only represents 66% of the total numbers. Of course, there is no requirement for staff feedback and patient feedback to be identical, and both perspectives can feed into student self-evaluation and reflection.

Results confirmed that patients could provide a unique perspective on students’ interpersonal skills, with clinical staff sometimes being unaware of student–patient interactions. This was illustrated by 94·4% of patients reporting that the students did show an interest in their well-being; yet only 68·7% of staff agreed. Interestingly, most of the rest (28·3%) of the staff answered this as ‘Neutral’, suggesting that they lacked evidence to support a definitive answer.

Patients in general did not want to provide more in-depth feedback, although some patients had written comments on the sheets despite being asked not to. Although individual feedback would undoubtedly be extremely valuable, this process would be fraught with ethical issues including coercion, loss of anonymity, breakdown of professional relations and potential psychological damage in extreme cases. One of the strongest findings from this study is the relatively high response rate, given the absence of encouragement to submit feedback; in fact, some of the patient comments expressed very strong desires to participate.

Although a narrow range of students participated, there were some consistent emergent themes within their feedback. Quantitative data showed that students were in favour of the initiative, and qualitative comments highlighted how it improved their awareness of the importance of communication and ways in which they could improve in future placements (see Table 4). It was clear that some students had reflected on their feedback to inform goal-setting for their next placement. There were also some comments relating to motivation, with students reporting a boost in self-confidence with interpersonal skills.

Table 4 Student feedback comments

Although patient feedback provision is attractive to patients and students, demonstrating the impact of the feedback is more challenging. In the absence of evidence to demonstrate any clear educational value of this feedback, some authors have recommended caution to avoid ‘tokenism’32 and suggested further research.15 Perhaps the best measure of the effectiveness of this initiative could be drawn from analysis of student goal-setting or students’ reflective writing.

Conclusions

In conclusion, it can be seen that radiotherapy patients are able to provide useful assessment of student interpersonal skills using an anonymous questionnaire. Patients clearly enjoyed the opportunity to provide feedback and felt that the level of engagement was appropriate. Correlation analysis suggested that patients, staff and assessors agree in their relative grading of student interpersonal skills, although patients tend to score students more generously than staff members.

Students used the feedback to help them focus on communication, set clear goals and enhance motivation to communicate. The feedback supports the supposition that staff cannot always witness patient interactions and that patient feedback can help students to evidence the full array of their skills.

Although the method is logistically feasible, inevitably data collection on a large scale would have some impact on the departments and patients; therefore, the tool should be used strategically at key points in the education programme. Given the impact of study participation on student attitudes to communication, the tool has value in teaching and not just feedback. This also suggests that deployment at strategic intervals would be valuable and future iterations will determine the most appropriate points for this.

Acknowledgements

The authors acknowledge the assistance and support from clinical staff at Premion Oncology, Gold Coast.

Financial Support

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Conflicts of Interest

None.

Sources of Support

None.

References

1. Winterbottom A E, Jha V, Melville C et al. A randomised controlled trial of patient led training in medical education: protocol. BMC Med Educ 2010; 10 (1): 90.
2. Sayed-Hassan R M, Bashour H N, Koudsi A Y. Patient attitudes towards medical students at Damascus University teaching hospitals. BMC Med Educ 2012; 12 (1): 13.
3. Park J H, Son J Y, Kim S, May W. Effect of feedback from standardized patients on medical students’ performance and perceptions of the neurological examination. Med Teach 2011; 33 (12): 1005–1010.
4. Myung S J, Kang S H, Kim Y S et al. The use of standardized patients to teach medical students clinical skills in ambulatory care settings. Med Teach 2010; 32 (11): 467–470.
5. Muir D, Laxton J C. Experts by experience; the views of service user educators providing feedback on medical students’ work based assessments. Nurs Educ Today 2012; 32 (2): 146–150.
6. Jha V, Quinton N D, Bekker H L, Roberts T E. What educators and students really think about using patients as teachers in medical education: a qualitative study. Med Educ 2009; 43 (5): 449–456.
7. Fadhilah M, Oda Y, Emura S et al. Patient satisfaction questionnaire for medical students’ performance in a hospital outpatient clinic: a cross-sectional study. Tohoku J Exp Med 2011; 225 (4): 249–254.
8. Braend A M, Gran S F, Frich J C, Lindbaek M. Medical students’ clinical performance in general practice: triangulating assessments from patients, teachers and students. Med Teach 2010; 32 (4): 333–339.
9. Barr J, Ogden K, Rooney K. Viewpoint: let's teach medical students what patient partnership in clinical practice can be, with the involvement of educationally engaged patients. Int J Cons Stud 2010; 34 (5): 610–612.
10. Stockhausen L J. The patient as experience broker in clinical learning. Nurs Educ Pract 2009; 9 (3): 184–189.
11. Schlegel C, Woermann U, Shaha M, Rethans J, van der Vleuten C. Effects of communication training on real practice performance: a role-play module versus a standardized patient module. J Nurs Educ 2012; 51 (1): 16–22.
12. Davies C S, Lunn K. The patient's role in the assessment of students’ communication skills. Nurs Educ Today 2009; 29 (4): 405–412.
13. Shawler C. Standardized patients: a creative teaching strategy for psychiatric-mental health nurse practitioner students. J Nurs Educ 2008; 47 (11): 528–531.
14. Marken P A, Zimmerman C, Kennedy C, Schremmer R, Smith K V. Human simulators and standardized patients to teach difficult conversations to interprofessional health care teams. Am J Pharm Educ 2010; 74 (7): 120.
15. Jha V, Quinton N D, Bekker H L, Roberts T E. Strategies and interventions for the involvement of real patients in medical education: a systematic review. Med Educ 2009; 43 (1): 10–20.
16. Towle A, Bainbridge L, Godolphin W et al. Active patient involvement in the education of health professionals. Med Educ 2010; 44 (1): 64–74.
17. Haffling A C, Håkansson A. Patients consulting with students in general practice: survey of patients’ satisfaction and their role in teaching. Med Teach 2008; 30 (6): 622–629.
18. Stickley T, Stacey G, Pollock K, Smith A, Betinis J, Fairbank S. The practice assessment of student nurses by people who use mental health services. Nurs Educ Today 2010; 30 (1): 20–25.
19. Dearnley C, Coulby C, Rhodes C, Taylor J, Coates C. Service users and carers: preparing to be involved in work-based practice assessment. Innov Educ Teach Int 2011; 48 (2): 213–222.
20. Mercer L M, Tanabe P, Pang P S et al. Patient perspectives on communication with the medical team: pilot study using the communication assessment tool-team (CAT-T). Patient Educ Couns 2008; 73 (2): 220–223.
21. Reinders M E, Blankenstein A H, Knol D L, de Vet H C W, van Marwijk H W J. Validity aspects of the patient feedback questionnaire on consultation skills (PFC), a promising learning instrument in medical education. Patient Educ Couns 2009; 76 (2): 202–206.
22. Williamson K, Pope E L, Mundy L A, White E. Learning from patients - an example of service user involvement in radiotherapy education. Clin Oncol 2009; 21 (3, S1): 253.
23. PMETB. Patients’ Role in Healthcare - The Future Relationship Between Patient and Doctor. London: Postgraduate Medical Education and Training Board, 2008.
24. Howe A, Anderson J. Involving patients in medical education. Brit Med J 2003; 327 (7410): 326–328.
25. Morris P, McGoverin A, Symons J. Preparing for patient-centred practice: developing the patient voice in health professional learning. In: Bradbury H, Kilminster S, Zukas M (eds). Beyond Reflective Practice. Oxford: Routledge, 2010: 104–119.
26. Frisby R. User involvement in mental health branch education: client review presentations. Nurs Educ Today 2001; 21 (8): 663–669.
27. Giles E, Chiswell M, Wright C, Bridge P, Charlton N. A survey to evaluate the implementation of a national clinical assessment form. Radiographer 2012; 59 (3): 77–84.
28. Andrew S, Salamonson Y, Everett B, Halcomb E J, Davidson P M. Beyond the ceiling effect: using a mixed methods approach to measure patient satisfaction. Int J Mult Res App 2011; 5 (1): 52–63.
29. Campbell C, Lockyer J, Laidlaw T, Macleod H. Assessment of a matched-pair instrument to examine doctor-patient communication skills in practising doctors. Med Educ 2007; 41 (2): 123–129.
30. Makoul G, Krupat E, Chang C H. Measuring patient views of physician communication skills: development and testing of the communication assessment tool. Patient Educ Couns 2007; 67 (3): 333–342.
31. Lie D, Encinas J, Stephens F, Prislin M. Do faculty show the ‘halo effect’ in rating students compared with standardized patients during a clinical examination? Internet J Fam Pract 2010; 8 (2).
32. Stacey G, Stickley T, Rush B. Service user involvement in the assessment of student nurses: a note of caution. Nurse Educ Today 2012; 32 (5): 482–484.