Introduction
Current literature has frequently identified deficiencies in emergency preparedness and in performance during previous disasters, raising concern about a broad lack of system preparedness for future mass casualty incidents (MCI).[1–3] Annual Emergency Preparedness Training (EPT) programs allow clinicians to learn and refine disaster management approaches to protect both patients and healthcare professionals in the event of an MCI, and have been associated with improved performance in actual MCI events.[4–6] Competency-based EPT programs in the United States are scarce, with non-uniform curricula and a paucity of objective data suitable for actionable evaluation.[7–9] Few programs are deliberately designed to evaluate long-term individual disaster knowledge retention, continuing engagement with training, or contributions to local hospital emergency operations plans.[10,11] With an increased incidence of global health disasters,[12,13] it is imperative that disaster preparedness educators apply evidence-based methodologies to guide the deliberate design and evaluation of training programs, to ensure they result in significant, durable learning and increased levels of engagement and readiness.
Logic models have been successfully used to evaluate programs in the fields of public health and medical education.[14–16] Well-designed logic models provide a transparent program framework to enable design collaboration and articulate extractable performance outcomes to guide program implementation, evaluation, and continuous quality improvement.[17–19] The use of a logic model projects outcomes beyond immediate, direct learning to measure intermediate and long-term impact.[20]
The objective of the MCI Foundations program was to examine the use of a logic model to align a training program’s activities and assumed inputs with its intended effects over time, to determine whether the use of the model facilitated significant, durable learning, and increased engagement and impact.
Methods
Study Design
This was an open-label, observational study that followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist. MCI Foundations was a comprehensive 1-day, 8-hour experiential workshop held on November 13, 2019, at the Montefiore Einstein Center for Innovation in Simulation (MECIS). The training was developed by the MCI Foundations Committee, which consisted of disaster medicine experts, simulation educators, and clinical educators. The study was reviewed and determined exempt by the Montefiore Medical Center Institutional Review Board (IRB), #2019-10614.
This program’s metrics for success were sustained improvements in disaster medicine knowledge and comfort with core disaster competencies, pursuit of further study in emergency preparedness, and involvement of participants in their individual hospital’s disaster management.
Participants
Participants were recruited by electronic mail sent to chief medical officers, directors of emergency medicine, and vice presidents of nursing across all affiliated hospitals in the Montefiore Network by the director of emergency management at Montefiore as well as by members of the MCI Foundations Committee. Inclusion criteria were full-time employment in the Montefiore Health System (academic and non-academic centers) and an interest in disaster medicine with little to no expertise. Priority was offered to employees from the departments of emergency medicine, surgery, and intensive care. Residents, students, and part-time employees were excluded from invitation. Thirty slots were available on a first-come, first-served basis. Participation was voluntary, oral informed consent was obtained, and each participant could withdraw at any time.
Implementation of the Logic Model
The use of a logic model for program design begins with the definition of intended short-term, intermediate, and long-term program outcomes; defining these outcomes up front is important for EPT programs seeking to demonstrate impact. The next stage in applying a logic model is to identify the activities and outputs that will enable the identified outcomes. For the MCI Foundations program, this stage involved the actual curriculum design and development. Once activities and outputs are defined, the inputs, that is, the required materials and resources, can be identified. The inputs, activities, outputs, and outcomes identified for this training are shown in Table 1, along with the tool designed to measure each outcome.
Table 1. Logic model of MCI foundations emergency preparedness training program
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20221209140823548-0712:S1935789321000665:S1935789321000665_tab1.png?pub-status=live)
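To make this alignment concrete, the chain from inputs to measured outcomes can be represented as a simple data structure. The sketch below is a hypothetical illustration in R (the language used for this study's analyses), not the study's actual table; it pairs each outcome tier with the measurement tool named in Table 1.

```r
# Hypothetical sketch of the logic model: each intended outcome is
# explicitly paired with a measurement tool before curriculum content
# is written. Entries are illustrative, drawn from the program description.
logic_model <- list(
  inputs     = c("faculty time", "simulation center", "internal funding"),
  activities = c("online modules", "table-top exercises", "MCI simulations"),
  outputs    = c("participants trained", "assessments completed"),
  outcomes   = list(
    short_term   = list(what = "improved disaster knowledge and comfort",
                        tool = "16-item selected-response test"),
    intermediate = list(what = "continued engagement with training",
                        tool = "program website traffic"),
    long_term    = list(what = "knowledge and comfort retention",
                        tool = "3-month retention assessment")
  )
)
str(logic_model, max.level = 2)
```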
Learning Design
In order to enable significant, durable learning, the MCI Foundations Committee aimed to ground the training in relevant, profession-related activities by articulating requirements as entrustable professional activities (EPAs), and to support skills and knowledge acquisition through meaningful personal connections by using Fink’s Taxonomy of Significant Learning to articulate learning outcomes and define learning experiences.
EPAs are units of professional practice, defined as tasks or responsibilities to be entrusted to a trainee for unsupervised execution once they have attained sufficient specific competence.[21] The praxis focus of EPAs ensures that the curriculum design drives toward operationalized skills and knowledge.
The MCI Foundations training was designed to enable the following EPAs:
1. Identify a potential critical event and the appropriate safety precautions for that event type, and perform the appropriate simulated notification and mobilization actions.
2. Identify the individual task and scope of responsibility defined by the Incident Command System (ICS).
3. Correctly utilize the Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) principles of “Brief, Huddle, and Debrief” during a mass casualty exercise.
4. Apply knowledge and skills concerning MCI triage systems to rapidly assign victims to appropriate triage categories.
The training focused on the following scope of content to enable these EPAs:
- Critical event safety principles (“Safety”)
- The hospital incident command system and the participant’s role in it (“HICS”)
- Effective critical event communications (“TeamSTEPPS”)
- Knowledge of the MCI triage system (“MCI Triage”)
Table 2 illustrates how a single MCI Foundations learning outcome correlates to an EPA and a core competency,[9] and is aligned to a selected-response assessment item. The program maintained a complete list of targeted learning outcomes aligned to EPAs and healthcare worker disaster training competencies; further information on the curriculum is outlined in the Supplementary material.
Table 2. An MCI foundations learning outcome with correlated EPA and core competency
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20221209140823548-0712:S1935789321000665:S1935789321000665_tab2.png?pub-status=live)
Abbreviations: EPA, Entrustable Professional Activity.
Fink’s Taxonomy of Significant Learning was used to articulate the targeted learning outcomes and design appropriate learning and assessment activities. Fink’s Taxonomy is designed to encourage curriculum developers to go beyond a content-driven approach to course design and to consider how to effect significant, durable change in the learner.[22] Fink’s Taxonomy is composed of dynamically interacting dimensions that consider the learner’s relationship with the content, with themselves and their own learning, and with others. The dimensions are: (1) Foundational Knowledge, (2) Application, (3) Integration, (4) Human Dimension, (5) Caring, and (6) Learning How to Learn. Organizing the outcomes and learning activities around Fink’s Taxonomy kept the MCI Foundations team focused on making meaningful connections to the learners’ lives and on emphasizing the personal and professional value of becoming a resource for their communities in preparation for a mass casualty event. Table 3 contains the activities that comprised the training, organized by Fink’s dimensions.
Table 3. MCI foundations activities according to Fink’s taxonomy of significant learning
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20221209140823548-0712:S1935789321000665:S1935789321000665_tab3.png?pub-status=live)
Inputs
MCI Foundations was funded internally by the Office of Emergency Management at Montefiore Health. Funding covered facility space, consumable and non-consumable equipment, actors, and moulage. Faculty time was volunteered.
Activities
Three weeks prior to the workshop, participants completed a pre-test to establish their baseline disaster medicine knowledge and familiarity, and then completed online modules. The in-person workshop was an 8-hour immersive program held at MECIS that included panel discussions, table-top exercises, a personal reflection, and 2 large-scale disaster simulation exercises involving multiple high- and low-fidelity simulation modalities as well as professional actors and volunteer ambulance crews.
Outcome Measures
Short-term Knowledge
Disaster medicine knowledge was measured using a 16-question selected-response assessment administered before the training, immediately following the training, and 3 months later. The test answers were not reviewed during or after the program, to limit bias on subsequent assessments. The test was developed by the Principal Investigator (FJ) and First Author (ND), internally validated by 4 internationally recognized disaster medicine experts with a combined 90 years of training and experience in actual MCI events, and then reviewed by the MCI Foundations Committee. The assessment covered 4 main categories of disaster medicine knowledge: the Hospital Incident Command System (HICS), MCI triage, safety assessment, and communication (using the TeamSTEPPS model). HICS questions were designed following the Federal Emergency Management Agency (FEMA) hospital incident command system (ICS) guidelines; MCI triage questions were developed following the Simple Triage and Rapid Treatment (START) protocol and JumpSTART, its pediatric disaster triage counterpart; TeamSTEPPS questions were created following guidelines set forth by the Agency for Healthcare Research and Quality in collaboration with the U.S. Department of Defense; and safety assessment questions were based on principles put forth by FEMA in its Incident Command System courses.[23] While TeamSTEPPS does not traditionally play a role in disaster management programs, the MCI Foundations Committee felt that teaching Crisis Resource Management (CRM) in such a setting would be an integral part of the program.
The disaster medicine knowledge pre-test and post-test scores were compared with a paired t-test. The effect size for the mean percentage difference between pre-test and post-test was measured using Cohen’s d. The study expected training participants to score at least 67% on the post-test administered immediately following the training. This goal was determined by consensus within the MCI Foundations team by averaging the pilot-test scores of 4 independent disaster medicine experts (83%) and 6 independent emergency department physicians without prior disaster management training (51%). The MCI Foundations Committee determined that if the program could raise the average score to 67% (a relative increase of 31% over the untrained physicians’ baseline), the group would consider the foundational program successful in improving baseline knowledge.
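As a minimal sketch of this comparison in base R, with hypothetical score vectors standing in for the actual data:

```r
# Hypothetical per-participant knowledge scores (proportion correct);
# these vectors are illustrative, not the study data.
pre  <- c(0.44, 0.50, 0.56, 0.44, 0.63, 0.50, 0.56, 0.38, 0.50, 0.56)
post <- c(0.56, 0.63, 0.69, 0.63, 0.75, 0.56, 0.69, 0.56, 0.63, 0.69)

# Paired t-test on the within-subject differences
t.test(post, pre, paired = TRUE)

# Cohen's d for paired data (one common convention:
# mean difference divided by the SD of the differences)
d <- mean(post - pre) / sd(post - pre)
d
```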
Short-term Comfort
Comfort with core disaster competencies was assessed using a questionnaire with a 5-point linear numeric response format (1 = Very Uncomfortable, 5 = Very Comfortable) covering fundamental competencies in disaster preparedness, HICS, TeamSTEPPS, emergency preparedness, and triage. The group’s pre- and post-training mean scores in these categories were compared using Wilcoxon signed-rank tests, and the effect size was measured with the Wilcoxon effect size (r).
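A minimal sketch of one such comparison in base R, again on hypothetical ratings; the effect size r is recovered here from the normal approximation, one common convention:

```r
# Hypothetical paired 5-point comfort ratings; illustrative only.
comfort_pre  <- c(2, 3, 2, 3, 2, 3, 2, 3, 3, 2)
comfort_post <- c(4, 4, 3, 4, 3, 4, 4, 4, 4, 3)

# Wilcoxon signed-rank test (normal approximation, since ratings tie)
res <- wilcox.test(comfort_post, comfort_pre, paired = TRUE, exact = FALSE)
res

# Effect size r = Z / sqrt(N), with N taken here as the number of pairs;
# |Z| is recovered from the two-sided p-value
z <- qnorm(res$p.value / 2, lower.tail = FALSE)
z / sqrt(length(comfort_pre))
```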
Intermediate Outcome
To measure intermediate progress toward the MCI Foundations goal of encouraging participants to pursue self-learning activities and engage in their hospital’s emergency operations plans, traffic to a website created specifically for the program was monitored for the 3 months from the workshop to retention testing. The website contained MCI resources and information on all upcoming regional and local meetings in the hospital network, upcoming course offerings, seminars, interactive learning games, and online disaster educational opportunities. The link was sent out immediately upon conclusion of the program and again 3 months later.
Long-term Outcomes
In order to assess the acquisition and maintenance of any benefits from the program for specific individuals over time, the group’s knowledge and comfort scores were compared with paired-subject analyses at pre, post, and retention. Knowledge and comfort assessments included the same questions as those used in the post-test. Total knowledge scores were compared using one-way repeated measures ANOVA across pre-test, post-test, and retention scores on paired subjects. Post hoc tests were conducted using pairwise t-test comparisons to analyze the mean total score differences.
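A minimal sketch of this analysis in base R, using simulated long-format data (participant IDs, phases, and scores are all hypothetical, and the p-value adjustment method is an assumption, as the study does not state one):

```r
set.seed(1)

# Hypothetical long-format data: one row per participant per phase
scores <- data.frame(
  id    = factor(rep(1:12, times = 3)),
  phase = factor(rep(c("pre", "post", "retention"), each = 12),
                 levels = c("pre", "post", "retention")),
  total = c(runif(12, 0.40, 0.60),   # pre-test
            runif(12, 0.55, 0.75),   # post-test
            runif(12, 0.60, 0.75))   # retention
)

# One-way repeated measures ANOVA with participant as the error stratum
summary(aov(total ~ phase + Error(id/phase), data = scores))

# Post hoc pairwise paired t-tests (adjustment method assumed)
pairwise.t.test(scores$total, scores$phase, paired = TRUE,
                p.adjust.method = "bonferroni")
```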
Paired subjects’ comfort self-assessments at pre, post, and retention were compared using a Friedman test. Post hoc analyses were conducted using Wilcoxon signed-rank tests to determine individual effects and identify any skill decay after the program. The Friedman test effect size across pre/post/retention was measured using Kendall’s W, which follows the interpretation guidelines of Cohen’s d.
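A corresponding base R sketch, on a hypothetical matrix of ratings (rows are participants, columns are testing phases):

```r
# Hypothetical comfort ratings; illustrative only
ratings <- cbind(pre       = c(2, 3, 2, 3, 2, 3, 2, 3, 3, 2, 3, 2, 3),
                 post      = c(4, 4, 3, 4, 3, 4, 4, 4, 4, 3, 4, 3, 4),
                 retention = c(4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4, 3, 4))

# Friedman test across the three phases
fr <- friedman.test(ratings)
fr

# Kendall's W = chi-squared / (n * (k - 1)), n subjects, k conditions
n <- nrow(ratings); k <- ncol(ratings)
unname(fr$statistic) / (n * (k - 1))

# Post hoc paired Wilcoxon signed-rank tests between adjacent phases
wilcox.test(ratings[, "post"], ratings[, "pre"],
            paired = TRUE, exact = FALSE)
wilcox.test(ratings[, "retention"], ratings[, "post"],
            paired = TRUE, exact = FALSE)
```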
Results
All statistical analyses were performed in R 4.0.0 (R Foundation for Statistical Computing, Vienna, Austria). Short-term, intermediate, and long-term outcomes were evaluated to assess the efficacy of the program (Table 1). The effect of the program on knowledge and comfort was assessed using paired samples between the pre-assessments and post-assessments. For the participants who were retained in all 3 testing phases, paired samples were compared across pre-test, post-test, and retention.
Outputs
A total of 25 frontline staff from 8 different hospitals within the Montefiore Hospital Network in the southern New York region participated in this program; the sites included community (64%) and academic (36%) centers. These 25 participants had varying clinical experience and included 10 physicians (40%), 7 registered nurses (28%), 3 advanced practice providers (12%), 2 nursing technicians (8%), 2 administrators (8%), and 1 respiratory therapist (4%). Five participants (20%) reported previous training in mass casualty incidents. Participants predominantly worked in the Department of Emergency Medicine (23; 92%), followed by the Department of Surgery (1; 4%) and the Intensive Care Unit (1; 4%). There were 25 requests for 30 available slots, so all those who responded with interest were admitted.
The 15 instructors who led the workshop included 5 disaster management experts covering 2 hospital systems, as well as a representative from the New York State Division of Homeland Security and Emergency Services, and 10 instructors with backgrounds in medical simulation and healthcare education; 8 non-medical volunteers also assisted in the program.
Short-term Knowledge
The mean percentage of the group’s total knowledge-based scores improved from pre-test to post-test (51.3% vs. 63.8%, P < 0.001; 95% CI: 0.075-0.174; d = 1.034). The breakdown of the total scores demonstrated improvements in the mean score of each knowledge category: TeamSTEPPS (35.1% to 50.7%), HICS (47.3% to 62.0%), MCI Triage (59.8% to 68.8%), and Safety (59.0% to 70.0%).
Short-term Comfort
Two participants were excluded from the pre/post comfort score analysis because they did not complete the comfort self-assessment before or directly after the course (Figure 2). The mean scores for all categories of comfort in core MCI competencies significantly increased from pre-test to post-test: disaster preparedness (2.6 vs. 3.5, P < 0.001; 95% CI: 0.999-1.500, r = 0.77), triage (2.8 vs. 3.9, P < 0.001; 95% CI: 0.999-1.50, r = 0.80), the HICS framework (2.3 vs. 3.3, P < 0.001; 95% CI: 0.999-1.999, r = 0.75), TeamSTEPPS (2.2 vs. 3.7, P < 0.001; 95% CI: 1.00-2.00, r = 0.84), and emergency preparedness (2.7 vs. 3.6, P < 0.001; 95% CI: 1.50-2.00, r = 0.76).
Intermediate Outcome
In the 3 months following the workshop, the website recorded a total of 64 views across 53 different devices. The website link was emailed to all participants and program instructors both after the program and at the 3-month follow-up.
Long-term Outcomes
Among the 14 participants (56.0%) who completed the 3-month survey, 7 (50.0%) reported engaging in further self-directed learning, 7 (50.0%) assisted in updating the disaster response plan for their hospital, 6 (42.9%) attended a local disaster meeting, and 7 (50.0%) took an additional disaster preparedness course or workshop. Of these participants, 5 (35.7%) said they would not have attended further disaster workshops had it not been for this course, and 9 (64.3%) reported continuing to pursue the learning objectives they wrote during the program, with “time” the most frequently reported barrier to their disaster goals (n = 2, 14.3%). When assessing only the participants who completed all 3 testing phases, the program had a significant effect on knowledge and comfort over the course of 3 months.
Long-term Knowledge
Across all 3 time periods, 2 participants were excluded from the disaster medicine knowledge analysis for not completing the assessments. The mean percentage of the retention group’s (n = 12) total knowledge-based scores improved across pre, post, and retention testing (53.2% vs. 64.8% vs. 67.6%, P < 0.05). The retention score exceeded the program’s predetermined 67% threshold, although the immediate post-test score did not. Post hoc comparisons demonstrated that the mean total score difference was significant between pre-test and post-test (P < 0.05, d = 0.99) and between pre-test and retention (P < 0.05, d = 0.80), but not between post-test and retention (P > 0.05, d = 0.17). Together, these data suggest that the program had a large effect on total MCI knowledge immediately after the program, with a continued moderate effect at 3-month retention (Figure 1).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20221209140823548-0712:S1935789321000665:S1935789321000665_fig1.png?pub-status=live)
Figure 1. Assessment of disaster knowledge in the total group (left) and in only the group that was retained throughout the whole study period (right). Testing periods relate to pre-workshop, post-workshop, and retention 3 months after workshop. Higher scores relate to better performance. Bold lines refer to median scores, horizontal lines refer to the 1st and 3rd quartiles in the distribution, whiskers represent the lowest and highest datum within 1.5 interquartile range of the lower and upper quartiles, and dots represent outliers. *P < 0.05 **P < 0.01 ***P < 0.001 ****P < 0.0001.
Long-term Comfort
A total of 14 participants (56%) completed the comfort self-assessment after 3 months. However, 1 participant was excluded from the analysis at all 3 time periods due to an incomplete pre-course comfort assessment. The mean comfort scores in the core MCI competencies of disaster preparedness, triage, and HICS increased across pre, post, and retention self-assessments: disaster preparedness (2.69 vs. 3.62 vs. 3.85, P < 0.01, W = 0.62), triage (2.92 vs. 3.85 vs. 4.00, P < 0.01, W = 0.54), and HICS (2.46 vs. 3.46 vs. 3.85, P < 0.01, W = 0.72). Post hoc comparisons demonstrated that the score differences in each of these categories were significant between pre-test and post-test (P < 0.05) and between pre-test and retention (P < 0.05), but not between post-test and retention (P > 0.05). The mean comfort scores in overall emergency preparedness and TeamSTEPPS both increased after the workshop, but from post-testing to retention the overall emergency preparedness score remained the same and the TeamSTEPPS score decreased (overall emergency preparedness: 2.77 vs. 3.77 vs. 3.77, P < 0.01, W = 0.56; TeamSTEPPS: 2.62 vs. 4.08 vs. 4.00, P < 0.01, W = 0.73). Post hoc comparisons demonstrated that the pre-to-post improvements for both of these categories were significant (P < 0.05), and the decline from post to retention testing for TeamSTEPPS was not significant (P > 0.05).
Discussion
The MCI Foundations program successfully implemented an outcomes-based logic model in the inception, implementation, and assessment of EPT. The successful application of the model to the design and evaluation of this program enables targeted improvements with long-term impact as the program continues to expand toward a more prepared hospital network. It also provides an early example for future emergency preparedness program designers to utilize and improve upon in creating high-quality, transparent, outcomes-based programs.
The authors observed a short-term improvement in disaster knowledge immediately after the program compared with baseline (51.3% vs. 63.8%); however, this result did not meet the predetermined 67% success mark, indicating a need for program improvement in this area. Because all aspects of the curriculum and assessment design were aligned to specific learning objectives and knowledge outcomes, this design allows for targeted improvements in future programs.
In addition to the novelty of the logic model, this study is unique in measuring long-term retention of knowledge and comfort with core competencies following an EPT program. Subgroup analysis of the individuals who were retained throughout the entire program (pre-test, post-test, and retention) demonstrated that, while their post-test average was 65%, their retention average met the 67% threshold for success, suggesting further self-study and improvement in disaster preparedness in this group. It is noteworthy that 50% of the retention group engaged in disaster response planning at their hospital.
This retention group had higher average scores than the entire program group in knowledge and in all categories of comfort after the program. These scores improved on post-testing without significant decay (Figure 2). These findings suggest that the MCI Foundations program, which was designed to support durable learning, played a role in reducing the decay of disaster knowledge and comfort with core competencies in this subgroup. This is significant given that an MCI can occur at any time, and it makes a compelling argument that future studies of disaster preparedness training should include long-term outcome metrics. Importantly, this program utilized a curriculum deliberately designed to foster significant, durable learning in participants through personally meaningful and professionally relevant activities and assessments developed using EPAs and Fink’s Taxonomy of Significant Learning. The application of these frameworks to guide curriculum development warrants further research in the context of clinical disaster education.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20221209140823548-0712:S1935789321000665:S1935789321000665_fig2.png?pub-status=live)
Figure 2. Assessment of comfort in individual MCI competencies. Testing periods relate to pre-workshop, post-workshop, and retention 3 months after workshop. (Scale: 1= not comfortable, 5= very comfortable). *P < 0.05 **P < 0.01 ***P < 0.001.
Limitations
This program encountered a number of limitations and challenges in its design, implementation, and evaluation. Using a logic model as an evaluation tool requires alignment of specific metrics to outcomes, and this alignment was problematic for the intermediate outcomes related to individual learner commitments to continuing education. Monitoring traffic on the website sent to participants after the program showed that participants were accessing resources for course offerings and disaster planning meetings; however, the authors noted sparse attendance at these meetings, and only 7 participants (50%) in the retention group reported engaging in further learning or helping their hospital develop a disaster response plan. Still, the authors observed that 1 participant in the study rewrote much of her hospital’s critical event annex in its emergency operations plan and has since enrolled in a Master’s program in Emergency Management, demonstrating the need for more individualized observations. While the MCI Foundations program cannot take credit for this career trajectory, this individual observation is precisely the reason Fink’s Taxonomy was utilized as the backbone structure of the program. The current study was unable to measure the impact participants had at their home institutions. The low retention rate prevented this study from capturing the full impact of the intervention: only 14 participants responded to the retention survey, some of whose responses were incomplete. Given that the pool of self-identified disaster novices in this hospital network likely exceeds 10,000 employees and this study enrolled only 25, the program is subject to both selection bias and attrition bias. Finally, the logic model was not designed to measure impact directly; rather, it quantified involvement and attendance alone, which were proposed to lead to impact. This led the MCI Foundations Committee to consider, as a future direction, focusing on one hospital at a time using a similar model to further understand potential impact.
Conclusion
The first iteration of the MCI Foundations model provided the committee with a transparent framework for the design, implementation, evaluation, and continuous quality improvement of a competency-based EPT program. While the model successfully evaluated targeted improvements in participants’ long-term disaster knowledge and comfort, future models will consider the impact of individual learners on their hospital’s disaster preparedness rather than simply the number who engage. Future work should implement this model within individual hospitals for more customized, system-based training and evaluation to understand the full range of benefits offered by the MCI Foundations model.
Acknowledgments
We would like to acknowledge additional faculty for MCI Foundations: Ed Tangredi CEM, Erik Larsen MD, and Nadine Macura MBA CEM as content specialists, and support for the program and panelists; Bernadette Amicucci DNS, MBA, FNP-BC, CNE, Marc J Gibber MD, and David DiMattia MS as faculty instructors; Marvin Fried MD and Emily Kaplan MPA for facilities and technical support; and Stefan Ravello and Steven Haimowitz MD for online module support from RealCME, Inc.
Ethics statement
The study was reviewed and determined exempt by the Montefiore Medical Center Institutional Review Board (IRB).
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/dmp.2021.66