Introduction
Out-of-hours first-on-call ENT cover is often provided by junior doctors with little ENT experience. A survey in 2009 reported that 68 per cent of respondents covering ENT had no previous ENT experience, and 74 per cent were covering more than one specialty at night [1]. These doctors are often the first to treat patients with ENT emergencies that may rapidly become life-threatening and require skilled, immediate management. Of particular concern, confidence in airway emergency management is often poor among first-on-call ENT doctors [2] and junior doctors working in the emergency department [3].
The General Medical Council states that new doctors should be offered a relevant, high-quality induction [4]. Demand for ENT departmental induction training is high. The number of junior doctors requiring ENT induction annually will exceed 100 in most UK regional deaneries, although this may represent only two to three doctors in each hospital at any one time [5]. Centralising ENT induction at a single centre for all clinicians within a region may therefore be an attractive option for hospitals.
Simulation-based learning facilitates the transfer of procedural and complex team management skills. Emergency medicine training has paved the way for using mannequin-simulated scenarios to train junior clinical staff, with evidence supporting the advantages of simulation in trauma [6] and paediatrics [7]. We recently published a randomised controlled trial demonstrating that replacing traditional lecture-based training with a mixture of lectures and emergency scenario simulation is more effective at preparing junior doctors for ENT emergencies [8].
Ericsson's theory of expertise indicates that expert performance can be achieved by actively engaging in the deliberate practice of particular tasks with immediate feedback [9]. A recent meta-analysis found that this approach enhanced learners' clinical knowledge and skills more than traditional methods. Using this theoretical principle as a guide to enhancing clinical expertise, some medical education programmes in North America have translated the concept of simulation with deliberate practice into 'boot camp' courses [10].
In the Health Education East of England region, an intensive ENT boot camp was developed to train participants in the assessment and management of patients with life-threatening ENT emergencies via mannequin-simulated scenarios [5]. An initial study with 18 participants demonstrated improved knowledge on multiple-choice question assessment and positive feedback from participants [5]. Based on the results of this work and a participant goal-setting exercise, we further developed the boot camp course, and aimed to validate it using clinically relevant objective measures and long-term participant feedback.
Materials and methods
Trial design
This study was conducted at the Department of Otolaryngology, Head and Neck Surgery, Addenbrooke's Hospital, Cambridge, UK. Independent approval was granted by a research ethics committee and local training programme directors. A prospective, single-blinded design was used.
Participants
Doctors were recruited by national advertisement to attend a simulation-based course for ENT emergencies, with a focus on those providing ENT out-of-hours care. Consent was obtained from candidates to use their responses and measured performance for educational research. Participants were assigned to groups of five, with groups matched for ENT experience and level of medical training. Each group contained two to three foundation trainees and two higher trainees. There were no exclusion criteria.
Curriculum design
A course curriculum was created to cover ENT conditions that require immediate or early intervention. This incorporated the emergency aspects of the requirements for entry into ENT specialist training and the Diploma of Otolaryngology, Head and Neck Surgery [11,12]. The systematic assessment and management principles taught in advanced life support and advanced trauma life support courses were made central to the material taught [13,14].
Upon application to the course, participants were asked to rank their preferred course content and teaching methods from very desirable (rank 1) to not desirable (rank 11) (Table I). Participant preferences for practical skills teaching methods, lecture content and feedback methods were then translated into components of the course, and those that scored poorly were excluded.
Table I footnote: *n = 37. †Participants were asked to rank learning styles, from very desirable (rank 1) to not desirable (rank 11), to help influence course design; median values are presented.
During the 1-day course, participants received focused lectures and small group skills training sessions delivered by an ENT specialist trainee or consultant in the morning session. The afternoon consisted of four simulated emergency scenarios (epistaxis, post-laryngectomy care, post-tonsillectomy haemorrhage and neck trauma).
Lectures covered the systematic approach to critically ill patients, and basic otology, rhinology, airway and paediatric ENT. The corresponding practical skills sessions covered basic examination and equipment-handling skills in otology, epistaxis, flexible endoscopy, and tracheostomy and laryngectomy care.
The simulation sessions were centred around mannequins, with resuscitation and specialist ENT equipment provided. In pairs, and in front of their small teaching group, participants were presented with background information on an emergency case and then worked through the scenario. Participants were encouraged to act on changes in the patient's condition in as realistic a manner as possible. Verbal feedback and summary teaching were provided following completion of each scenario.
Outcome measures
Kirkpatrick's hierarchy of evaluation was used to assess learning outcomes using a four-level model [15]. Level one evaluates how participants respond to their training, level two measures learning (assessed here as the acquisition of confidence), level three evaluates the degree to which participants apply what they learned during training when they return to their professional practice, and level four measures the degree to which targeted outcomes (in this case, performance when managing an emergency) occur as a result of the training.
Participant-evaluated outcomes
Questionnaires were designed to reflect the Kirkpatrick evaluation method (assessing level one, two and three outcomes). Course participants were given the questionnaires upon application to the course, immediately on course completion, and two to four months later. Perceived confidence was assessed across seven key topics (epistaxis, blocked tracheostomy, stridor, airway foreign body, post-operative problems, neck trauma, and neck examination and flexible nasendoscopy). Questions were formatted using a 5-point numerical Likert scale (1 = strongly disagree/not confident, 5 = strongly agree/very confident), giving a maximum confidence score of 65. The application questionnaire contained additional items to evaluate candidates' preferred learning styles, ENT experience and educational requirements. The course completion questionnaire contained extra items to assess participants' enjoyment of the course.
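For illustration only, the sketch below shows how responses on the 5-point Likert scale combine into the composite confidence score. It is not the study's instrument or analysis code: the 13-item count is inferred from the stated maximum of 65, and the item names and responses are hypothetical.

```python
# A minimal, illustrative sketch (not the study's analysis code) of how 5-point
# Likert responses combine into a composite confidence score. The 13-item count
# follows from the stated maximum (13 x 5 = 65); item names and responses are
# hypothetical.
from statistics import median

def composite_confidence(responses: dict[str, int]) -> int:
    """Sum 5-point Likert responses (1 = not confident, 5 = very confident)."""
    if any(not 1 <= score <= 5 for score in responses.values()):
        raise ValueError("Likert responses must lie between 1 and 5")
    return sum(responses.values())

# Hypothetical pre-course responses for a single participant
example = {f"item_{i}": 2 for i in range(1, 14)}  # 13 questionnaire items
example["item_1"] = 1                             # e.g. confidence managing epistaxis

print(composite_confidence(example))              # composite score out of 65
print(median(example.values()))                   # median item score
```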
Blinded assessor-evaluated outcomes
In addition to the training scenarios, two video-recorded simulated scenarios were evaluated at the start and end of the course (assessing level four outcomes). The recordings were reviewed by two consultant-level otolaryngologists who were not present at the course and who were blinded as to whether the scenarios were performed before or after the training. A 15-point, scenario-specific scoring scheme was used, covering four key areas (diagnosis; systematic approach; airway, breathing and circulation; ongoing management) (Appendices 1 and 2). For each of the 15 points, the blinded assessors awarded a score of 0 (not achieved), 1 (achieved with prompting or incomplete) or 2 (achieved), giving a maximum total score of 30.
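The arithmetic of the mark scheme can be illustrated with the following minimal sketch (not the actual assessment instrument). The per-domain item counts are inferred from the maxima reported in the Results (6, 4, 14 and 6, summing to 30), and the example marks are hypothetical.

```python
# Illustrative sketch of the 15-item, 0/1/2 mark scheme (maximum total 30).
# Domain item counts are inferred from the reported domain maxima
# (6 + 4 + 14 + 6 = 30); the example marks below are hypothetical.
DOMAIN_ITEMS = {
    "diagnosis": 3,            # max 6
    "systematic_approach": 2,  # max 4
    "abc_assessment": 7,       # max 14
    "ongoing_management": 3,   # max 6
}

def score_scenario(marks: dict[str, list[int]]) -> dict[str, int]:
    """Return per-domain subtotals and the overall total (items scored 0, 1 or 2)."""
    for domain, items in marks.items():
        assert len(items) == DOMAIN_ITEMS[domain], f"unexpected item count for {domain}"
        assert all(m in (0, 1, 2) for m in items), "each item is scored 0, 1 or 2"
    subtotals = {domain: sum(items) for domain, items in marks.items()}
    subtotals["total"] = sum(subtotals.values())
    return subtotals

example_marks = {
    "diagnosis": [2, 1, 2],
    "systematic_approach": [1, 2],
    "abc_assessment": [2, 2, 1, 1, 2, 0, 2],
    "ongoing_management": [1, 2, 1],
}
print(score_scenario(example_marks))
# {'diagnosis': 5, 'systematic_approach': 3, 'abc_assessment': 10,
#  'ongoing_management': 4, 'total': 22}
```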
Data analysis
Medians were calculated for ordinal variables. Participant confidence scores at each time point (before the course, at the end of the course, and two to four months after the course) were compared using the Mann–Whitney test. Composite video-evaluation scores before and at the end of the course were also compared using the Mann–Whitney test. Inter-rater reliability was estimated using Spearman's correlation coefficient. Data were analysed in Prism 7.0 software (GraphPad Software, San Diego, California, USA).
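For readers wishing to reproduce the comparisons outside Prism, the sketch below shows the two tests named above using SciPy; the score arrays are hypothetical and purely illustrative.

```python
# Sketch of the statistical comparisons using SciPy (the study used Prism 7.0).
# All score arrays below are hypothetical, for illustration only.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

pre_course = np.array([20, 25, 18, 30, 22, 27])    # hypothetical confidence scores (max 65)
post_course = np.array([50, 55, 48, 60, 52, 58])

# Mann-Whitney U test comparing confidence scores at two time points
u_stat, p_value = mannwhitneyu(pre_course, post_course, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_value:.4f}")

# Spearman correlation between the two blinded assessors' video-evaluation scores
assessor_1 = np.array([10, 18, 12, 25, 20, 16])    # hypothetical ratings (max 30)
assessor_2 = np.array([11, 17, 14, 24, 19, 18])
rho, rho_p = spearmanr(assessor_1, assessor_2)
print(f"Spearman rho = {rho:.3f}, p = {rho_p:.4f}")
```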
Results
Participant demographics and baseline characteristics
Thirty-seven junior doctors were enrolled in the course, the largest proportion of whom were planning a career in emergency medicine (Table I). Participants' experience of ENT emergency scenarios prior to the course was low; only 13.5 per cent had managed a total of five patients from any of the seven ENT scenarios and examinations shown in Table II. Only one participant (2.7 per cent) had previously worked in an ENT post.
Table II footnotes: composite scores are presented as median values; the maximum score for individual confidence was 5 and the maximum overall confidence score was 65. *n = 37; †n = 37; ‡n = 16. **p < 0.01, compared with confidence scores at application.
Response to training
Participants rated the course highly in terms of achieving their learning needs, the emergency lectures and overall enjoyment (median scores: 5, 5 and 5, respectively). In addition, participants felt they had acquired new skills and would recommend the course to colleagues (median scores: 5 and 5).
Confidence
Overall confidence scores (Figure 1) and confidence in the individual key topics demonstrated significant improvement following course completion (p < 0.0001) (Table II).
Changes in professional practice
All 37 participants were contacted two to four months following the course (mean, 3.2 months), and 16 (43 per cent) completed the online questionnaire. Confidence scores showed a sustained, significant improvement compared with pre-course scores (p < 0.0001) (Figure 1, Table II). In addition, participants reported that they were still applying aspects of the course to their clinical practice (Table III).
Table III footnote: *At two to four months following course completion (n = 16).
Blinded assessor evaluations
There was a strong positive correlation between the scores given by the assessors (R = 0.603). Participants performed significantly better during the end-of-course scenarios, as scored by assessors blinded to the timing of evaluation (9.75 vs 18.75 out of a maximum score of 30; p = 0.0093). Participants showed improvement in all four key areas: diagnosis (1.75 vs 5.0 out of a maximum score of 6; p = 0.0054); systematic approach (2.0 vs 3.0 out of a maximum score of 4; p = 0.0203); airway, breathing, circulation (‘ABC’) assessment (5.5 vs 8.25 out of a maximum score of 14; p = 0.0178); and ongoing management (2.25 vs 3.5 out of a maximum score of 6; p = 0.0294) (Figure 2).
Discussion
Preparing healthcare professionals adequately is regarded as essential to enhancing patient safety. As a result, the Best Evidence Medical Education initiative promotes the development of medical courses using an evidence-based approach [16]. An established principle from industry and Best Evidence Medical Education is that course development is a dynamic process of assessment, design and implementation, with emphasis placed on adequate evaluation [17]. We used these principles for the boot camp course, and present all four stages in this article.
Boot camp concept
Boot camps are focused courses designed to enhance learning through the use of multiple educational methods, including simulation, with a focus on deliberate practice. A recent meta-analysis concluded that boot camps are a highly effective training method that is becoming established in medical education [18]. Only one ENT boot camp was included in this meta-analysis, with candidate-reported confidence the sole measured outcome [19].
Needs assessment and course design
The average time spent in ENT at medical school is 1.5 weeks [20]. Many doctors managing ENT emergencies and complications will have no further post-graduate training, and it is this deficit in preparation that this course aims to fill. In order to identify participant requirements, individuals completed a needs-based learning assessment upon application to the course, to determine their baseline skills, experience and desired learning style. The course curriculum and structure were defined according to this needs-based assessment and the relevant aspects of the ENT curriculum. An educational programme was designed in which small group learning was promoted, using group feedback (as opposed to individual feedback) to foster an effective and enjoyable learning environment. The course was designed to facilitate the adult learning principles of deliberate practice promoted by Ericsson [9].
Target group and generalisability
The course is targeted at junior doctors who will encounter ENT emergencies, either as the first-on-call doctor for ENT or in the emergency department. In the UK, these doctors typically rotate through departments every four to six months. They include a broad range of foundation, general practice and specialist trainees. The boot camp is run three times a year, to coincide with each four-month rotation, enabling participants from multiple hospitals to receive an effective, high-quality induction. By running the course at one centre for all participants within a region, cost and faculty time can be kept to a minimum, making this method of induction an attractive option for smaller hospitals.
The participants enrolled in our boot camp were representative of those who should be targeted if the course were introduced more widely. They were all entering posts entailing regular contact with ENT patients, and were inexperienced in managing ENT emergencies, with correspondingly low pre-course confidence. The majority of participants did not wish to pursue a career in ENT surgery; the largest proportion were emergency department trainees. These doctors are often overlooked when specialist ENT training is provided, and are unlikely to attend more advanced and time-consuming courses. However, this group is fundamental to ensuring patient safety. Many patients with ENT conditions requiring urgent treatment are initially reviewed by an emergency medicine physician, and may then be discharged from the emergency department. As an example, one audit found that 63 per cent of epistaxis patients presenting to the emergency department at a large teaching hospital were seen only by an emergency medicine doctor [21]. It is therefore essential that adequate training is provided to this cohort of doctors, to improve emergency management and patient outcomes.
Validation process and outcomes
The Kirkpatrick methodology was adopted as a rigorous method of evaluation to ensure the course is fit for purpose. It is widely used as a validation process in industry and is supported by the Best Evidence Medical Education initiative [16]. The four-part model comprises a series of evaluation levels on which to focus assessment, and encourages the analysis of participant performance and healthcare outcomes. Many educational evaluations focus only on the lower levels of this hierarchy; an analysis of 305 medical education papers found that only 1.6 per cent of studies evaluated level four healthcare outcomes [22]. Positive findings at levels three and four are more difficult both to achieve and to measure; however, they are key to encouraging sustained changes in practice and improved clinical care [15].
As it was not practical to observe participants in real clinical scenarios, we adopted the best alternative for assessing level four outcomes: assessment during high-fidelity simulation of a clinical case. In our study, blinded assessors evaluated simulated clinical performance using marking points based on advanced life support and advanced trauma life support principles, adapted by the senior authors for each specific ENT emergency. Although not a validated score, the strong inter-rater reliability and the spread of results suggest this method was capable of discriminating between participants by ability, and the blinded nature of the assessment reduces bias. Different evaluation methods could have been employed, such as written examinations or assessment of a single aspect of clinical examination. However, the former was ranked poorly on the pre-course needs assessment (median rank 10th of 11), and the latter fails to assess how skills are applied in a clinical scenario.
The results indicate that the boot camp performs well at all four levels of Kirkpatrick's hierarchy, both immediately after the course and at two to four months. A recent meta-analysis showed that learners who completed boot camps in a variety of medical specialties had significantly improved skills, knowledge and perceived confidence [18]. In addition to replicating these findings, we identified a significant improvement in the clinically relevant level four outcome across all performance areas following the course. The improvement was most pronounced for diagnostic skill and use of a systematic approach. These positive findings provide the validation needed to recommend the wider introduction of the course, building on work conducted in other specialties.
Course limitations
The boot camp focuses on ENT emergencies because, although these make up a small part of the specialty workload, the potential for morbidity and mortality from poor management is high. The major limitation of the course is therefore the lack of teaching on other aspects of ENT, such as audiogram interpretation and basic surgical skills. The level of simulation equipment was also basic; more advanced simulations could be developed using complex electronic mannequins or cadaveric tissue, but these would be better suited to higher-level surgical trainees. Currently, the boot camp is run as an intensive 1-day course, and inclusion of such material would require more time, making the course less desirable to candidates in terms of cost, transport and study leave.
Validation process limitations
Kirkpatrick level four outcomes are ideally clinical outcome measures, but this level of assessment is difficult to apply in educational research. Mannequin simulation scenarios were used as the best available surrogate for assessing performance in a clinical setting, though this does not enable evaluation of the impact on clinical patient outcomes. It is recognised that such evidence is very difficult to obtain. Indeed, despite the well-established nature of the advanced trauma life support course, a Cochrane review found insufficient evidence from controlled trials that advanced trauma life support has a positive impact on patient outcomes [23].
• Junior doctors who cover or cross-cover ENT may have limited training or experience
• Simulation-based training and boot camp methodology have been effective in other medical fields
• Simulated patient outcomes and participant confidence were improved immediately following and two months after the course
• An ENT boot camp for junior doctors is enjoyable for participants and feasible to run, ideally as part of a centrally co-ordinated programme
While the before- and after-course assessments were completed by all participants, the 43 per cent response rate for the long-term follow-up questionnaire, completed after two to four months, was suboptimal. This is perhaps not surprising given that it was administered via e-mail and completed on a voluntary basis. It seems reasonable to assume that the sample obtained was representative, but bias in the responding group cannot be excluded.
Conclusion
Using established principles of course design and evaluation, we have demonstrated that the ENT boot camp, an intensive 1-day course combining lectures, small group teaching and simulation training, is an effective and desirable method of teaching junior doctors ENT emergency management. The structured educational experience is tailored to participant requirements, and our results suggest that it improves confidence and produces sustained changes in professional practice and performance. It is designed to be reproducible, and we recommend that similar courses be adopted as part of induction programmes for ENT, possibly facilitated by central co-ordination.
Acknowledgement
We thank all invited faculty who taught at the boot camp and helped to ensure a consistently high quality of teaching.
Appendix 1. Fifteen-point standardised mark scheme for epiglottitis video-assessed simulation scenarios
Key: D = suspects the correct diagnosis and initiates emergency care; S = uses a systematic approach; ABC = assesses and treats airway, breathing and circulation; M = reassesses the patient and arranges ongoing management.
Scoring: 0 = unsatisfactory; 1 = borderline/examiner prompt required; 2 = satisfactory.
Appendix 2. Fifteen-point standardised mark scheme for post-thyroidectomy haematoma video-assessed simulation scenarios
Key: D = suspects the correct diagnosis and initiates emergency care; S = uses a systematic approach; ABC = assesses and treats airway, breathing and circulation; M = reassesses the patient and arranges ongoing management.
Scoring: 0 = unsatisfactory; 1 = borderline/examiner prompt required; 2 = satisfactory.