Introduction
Achieving and maintaining a high level of emergency preparedness is a major challenge for hospitals because of their role in emergency response. Equally challenging is how to assess hospital emergency readiness. Over the past decade, there have been numerous initiatives and tools developed by The Joint Commission (TJC; Oakbrook Terrace, Illinois USA), the US Department of Homeland Security (DHS; Washington, DC USA), the US Department of Health and Human Services (DHHS; Washington, DC USA), the World Health Organization (WHO; Geneva, Switzerland), and the US Department of Veterans Affairs (VA; Washington, DC USA) to help hospitals become better prepared for major disasters. Approaches to enhance hospitals’ emergency preparedness include: continuing education of health care professionals;1,2 compilation of lessons learned from disaster exercises or emergency drills;3 implementation of community-focused, emergency preparedness training programs;4,5 and development of objectively-measured preparedness capabilities.6,7
To date, there is limited agreement about what constitutes effective hospital emergency preparedness8 and no widely-accepted, validated tool for measuring it.7,9-12 Furthermore, there is considerable variation in the protocols and tools that exist for assessing a hospital’s emergency management capabilities.11,12 Some instruments are more comprehensive, allowing for the measurement of hospital all-hazards preparedness, while a number are hazard-specific, such as a tool to assess the quality of standard operating procedures (SOPs) for pandemic influenza.13-22 Other instruments are designed with a more specific focus, such as the evaluation of functional exercises,23 incident command centers,11 or other elements of preparedness; examples include the Agency for Healthcare Research and Quality (AHRQ; Rockville, Maryland USA) Disaster Drill Evaluation Tool10,24 and the Association of State and Territorial Health Officials’ (ASTHO; Arlington, Virginia USA) National Health Security Preparedness Index.25
Often, multiple approaches are employed as part of the same assessment. Assessments frequently start with a pre-site survey during which a hospital’s emergency manager is asked to complete a questionnaire that gauges staff training and skills as well as the availability of equipment and supplies. Then, during a subsequent site visit, evaluators may use structured questionnaires and checklists to examine key equipment, space, and staff skills.11,16-22 Hospital preparedness assessments also may include drills or functional exercises to evaluate facility performance during a simulated mass-casualty emergency (MCE).23-26
Given the dramatic increase in the frequency and intensity of natural weather-related, technological, infectious disease, and human-caused disaster events during the past decade,27-29 there is a significant need for reliable and valid methods to measure hospital preparedness capabilities. In recognition of this need, the VA’s Office of Emergency Management (OEM; Martinsburg, West Virginia USA) developed and implemented a Comprehensive Emergency Management Program (CEMP) in 2004. The VA CEMP is aimed at ensuring the resiliency, continuity, and rapid recovery of VA health care services and facilities during disasters and other potential disruptions to health care service delivery.30 The VA is the largest health care system in the United States and has a mission to ensure emergency preparedness to assist veterans and their communities in times of disasters. The VA is divided into 18 geographic regions called Veterans Integrated Services Networks that currently include 152 VA Medical Centers (VAMCs) and 749 Community-based Outpatient Clinics located throughout the United States.
In 2004, the assessment of the VA’s level of “all-hazards preparedness” began with a survey of VAMCs using a self-administered hospital preparedness questionnaire modified from a survey tool developed for AHRQ and the Health Resources and Services Administration (HRSA; Rockville, Maryland USA). The VA CEMP program used: (1) findings from this survey; (2) a review of the relevant literature; (3) an examination of pertinent industry and governmental standards and guidelines; and (4) consultations with subject-matter experts to develop protocols and tools to assess each VAMC. The overall design was influenced heavily by the Institute of Medicine’s (IOM; Washington, DC USA) report on the Metropolitan Medical Response System.31 The VA’s CEMP assessments began in 2007 with a pilot development phase and were then fully implemented in two successive phases: 2008-2010 (“Phase I”) and 2011-2013 (“Phase II”). This article reports descriptive analyses of CEMP hospital assessments in both phases and assesses change in hospital preparedness over time.
Methods
Data on the emergency capabilities, or “all-hazards preparedness,” of the 137 VAMCs assessed in both phases of the CEMP program (Phase I: 2008-2009 and Phase II: 2011-2013) were analyzed descriptively. The development of the VA’s “all-hazards preparedness” assessment tool started in 2004, when data were collected using a questionnaire modified from AHRQ and HRSA instruments. Findings from this survey were combined with a review of the relevant literature, an examination of pertinent industry and governmental standards and guidelines, and consultations with subject-matter experts to develop VA-sponsored protocols and tools designed to assess CEMP at each VAMC. During each phase, data were collected by a team of experts who travelled to each VAMC and assessed each hospital’s emergency readiness through observation, demonstration, document review, and interviews with key staff. The team of experts who conducted the site visits consisted of: a team leader, who had a hospital director/administration background; a health care system engineer; a physician; a nurse; and a health care system emergency manager. Most of these individuals were former VA employees contracted for this purpose. Training of the assessors was conducted in 2008 and as new staff joined the cadre. The team leader was responsible for ensuring assessors understood their role and the assessment process. For a detailed discussion of the development of the assessment tools and the process of data collection for the two phases, see the authors’ related article, in press.32
The CEMP assessment included six critical Mission Areas (MAs) as essential components: Program Management; Incident Management; Safety and Security; Resiliency and Continuity; Medical Surge; and Support to External Requirements (Table 1). Each MA included a set number of emergency preparedness capabilities. A five-point Likert-like scale (5=exemplary, 4=excellent, 3=developed, 2=being developed, and 1=needs attention) was used initially to indicate the final score for each capability, but was collapsed into a dichotomous variable for the current analysis of readiness and readiness changes over time: not-developed=0 (being developed or needs attention) versus developed=1 (exemplary, excellent, or developed). Since VA leadership is interested in knowing which capabilities are at least developed compared to those that are not, the three categories of “developed,” “exemplary,” and “excellent” were collapsed into a single category, and the categories of “being developed” and “needs attention” into the other category, indicating that improvement or development of that specific capability was needed. This allowed the research team to assess the relative percentage of facilities that were meeting the capability standards compared to those that were not.
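To make the recoding concrete, the following is a minimal sketch in Python (pandas) of how a five-point capability score could be collapsed into the dichotomous indicator described above; the column names and example values are hypothetical and are not drawn from the actual CEMP data set.

```python
import pandas as pd

# Hypothetical capability ratings on the five-point CEMP scale:
# 5=exemplary, 4=excellent, 3=developed, 2=being developed, 1=needs attention.
scores = pd.DataFrame({
    "vamc_id": [101, 102, 103, 104],
    "capability_score": [5, 3, 2, 1],
})

# Collapse to the dichotomous indicator used in the analysis:
# developed (3, 4, or 5) = 1; not developed (1 or 2) = 0.
scores["developed"] = (scores["capability_score"] >= 3).astype(int)

print(scores)
```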
Table 1 Detailed Description of Six CEMP Mission Areas32
Abbreviations: CEMP, Comprehensive Emergency Management Program; MA, Mission Area; VA, Veterans Affairs.
The sample and items for the present analysis included 137 VAMCs and 61 unique capabilities. While the CEMP assessment largely covered the same items and hospitals in both phases, there were some differences in the sample and items. Two VAMCs were assessed in Phase I, but not in Phase II, and three VAMCs were assessed in Phase II, but not in Phase I. The number of capabilities that were assessed in each phase and the number of capabilities that were assessed in both phases are presented in Table 2. In total, 69 capabilities were assessed in Phase I, 71 were assessed in Phase II, and 65 capabilities were assessed in both phases. However, four capabilities had missing assessment data for more than 50% of VAMCs; these capabilities assessed systems or processes that were not applicable to all VAMCs. Even though the majority of capabilities apply to all VAMCs, not all VAMCs are the same in level of complexity, setting, and role. For example, rural VAMCs have fire departments whereas urban VAMCs do not. The data analysis, therefore, included 61 capabilities (Table 2) that were applicable to all 137 assessed VAMCs.
Table 2 Number of Capabilities Included in Each Phase and in the Analysis
To describe changes in preparedness over time, four categories were used. A new variable was created from the two (Phase I and Phase II) dichotomous variables: (1) rated “developed” in both phases (no change/stayed developed); (2) rated “not-developed” in Phase I but rated “developed” in Phase II (improved/became developed); (3) rated “not-developed” in both phases (no change/never developed); and (4) rated “developed” in Phase I but rated “not-developed” in Phase II (worsened/became undeveloped). The results are displayed in tabular format as well as graphically (Figures 1A through 1F) for each MA. It is important to note that in this context, “developed” meant the specific capability met or exceeded the required industry standards.
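As an illustrative sketch only (the identifiers and ratings below are hypothetical, not the actual CEMP data), the four change categories can be derived from the two dichotomous phase ratings as follows:

```python
import pandas as pd

# Hypothetical dichotomous ratings for one capability
# (1 = developed, 0 = not developed) in each phase.
ratings = pd.DataFrame({
    "vamc_id": [101, 102, 103, 104],
    "phase1_developed": [1, 0, 0, 1],
    "phase2_developed": [1, 1, 0, 0],
})

def change_category(row):
    """Map a pair of phase ratings onto the four change categories."""
    if row["phase1_developed"] and row["phase2_developed"]:
        return "stayed developed"       # no change, desired outcome
    if not row["phase1_developed"] and row["phase2_developed"]:
        return "became developed"       # improved, desired outcome
    if not row["phase1_developed"] and not row["phase2_developed"]:
        return "never developed"        # no change, undesired outcome
    return "became undeveloped"         # worsened, undesired outcome

ratings["change"] = ratings.apply(change_category, axis=1)

# Percent of VAMCs in each category for this capability.
print(ratings["change"].value_counts(normalize=True) * 100)
```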
Figure 1A Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 1 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.
Figure 1B Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 2 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.
Figure 1C Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 3 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.
Figure 1D-1 Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 4.1 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.
Figure 1D-2 Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 4.2 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.
Figure 1D-3 Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 4.3 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.
Figure 1E Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 5 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.
Figure 1F Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 6 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.
Results
The data analysis illustrated the four ratings of change (or no change) in hospital preparedness between the two phases. For each capability in each MA, Figures 1A through 1F graphically display the detailed breakdown of the percentage of VAMCs in the four categories of change between Phase I and Phase II: (1) stayed developed (desired outcome); (2) became developed (desired outcome); (3) never developed (undesired outcome); and (4) declined from developed to not-developed (undesired outcome). Table 3 summarizes the findings in three columns showing the number of capabilities by the percent of VAMCs achieving the two desired outcomes combined: 80%+, 70%-80%, and <70% of VAMCs stayed or became developed. The 70%-80% criterion was selected because it was the range of median scores for the percent distributions of all 61 capabilities across all 137 VAMCs.
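A minimal sketch of how capabilities could be grouped into the three bands reported in Table 3 is shown below; the capability identifiers, percentages, and boundary handling are illustrative assumptions rather than the actual study data.

```python
import pandas as pd

# Hypothetical percent of VAMCs achieving a desired outcome
# (stayed developed or became developed) for a few capabilities.
pct_desired = pd.Series({
    "1.5": 62.0,
    "2.1.4": 68.6,
    "3.1.2": 73.0,
    "5.2": 77.4,
    "6.1": 88.3,
})

# Group capabilities into the three bands used in Table 3; treating the
# 70%-80% band as [70, 80) is an assumption about boundary handling.
bands = pd.cut(
    pct_desired,
    bins=[0, 70, 80, 101],
    right=False,
    labels=["<70% (low-performing)", "70%-80%", "80%+"],
)

print(bands.value_counts().sort_index())
```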
Table 3 Number of Capabilities by Percent of VAMCs for Two Desired Outcomes (Stayed/Became Developed)
Abbreviation: VAMC, Veterans Affairs Medical Center.
Table 4A Mission Area 1 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)
Abbreviation: VAMC, Veterans Affairs Medical Center.
Table 4B Mission Area 2 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)
Abbreviation: VAMC, Veterans Affairs Medical Center.
Table 4C Mission Area 3 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)
Abbreviation: VAMC, Veterans Affairs Medical Center.
Table 4D Mission Area 4.1, 4.2, 4.3 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)
Abbreviation: VAMC, Veterans Affairs Medical Center.
Table 4E Mission Area 5 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)
Abbreviation: VAMC, Veterans Affairs Medical Center.
Table 4F Mission Area 6 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)
Abbreviation: VAMC, Veterans Affairs Medical Center.
For 33 of the 61 capabilities, 80% or more of VAMCs were rated either developed in both phases or became developed in Phase II. For 14 capabilities, 70%-80% of VAMCs were rated developed in both phases or became developed in Phase II. The remaining 14 capabilities were identified as “low-performing,” defined as those for which less than 70% of VAMCs achieved the desired outcomes (see Table 3).
Tables 4A through 4F further illustrate the data for each capability by MA. For each table, the low-performing capabilities are listed in the last column. For Program Management (MA 1 – Table 4A), three low-performing capabilities were identified: capability 1.5 (Incorporation of Comprehensive Mitigation Planning into the Facility’s Emergency Management Program); 1.7 (Incorporation of Continuity Planning into the Activities of the Facility’s Emergency Management Program to Ensure Organizational Continuity and Resiliency of Mission Critical Functions, Processes, and Systems); and 1.11 (Incorporation of a Range of Exercise Types that Test the Facility’s Emergency Management Program). For Incident Management (MA 2 – Table 4B), one item, 2.1.4 (Management of Extended Incident Operations), was identified as low-performing. For Safety and Security (MA 3 – Table 4C), there were four low-performing capabilities: 3.1.2 (Processes and Procedures for Sheltering-in-Place); 3.1.3 (Processes and Procedures for Sheltering Family of Critical Staff); 3.3 (Processes and Procedures for Managing a Hazardous Substance Incident); and 3.4.3 (Processes and Procedures for Staff and Family Mass Prophylaxis during an Infectious Outbreak [ie, Influenza]). For Resiliency and Continuity (MA 4 – Table 4D), the three low-performing capabilities were: 4.2.4 (Development, Implementation, Management, and Maintenance of an Emergency Water Conservation Plan); 4.2.6 (Maintaining Sewage and Waste Resiliency); and 4.1.1 (Transporting Critical Staff to the Facility during an Emergency). For Medical Surge (MA 5 – Table 4E), capabilities 5.2 (Management of External Volunteers and Donations during Emergencies), 5.3.4 (Integration of Patient Reception, Surge, and Decontamination Teams), and 5.3.6 (Processes and Procedures for Control and Coordination of Mass Fatality Management) were identified as low-performing. There were no low-performing capabilities identified for Support to External Requirements (MA 6 – Table 4F).
Discussion
To date, there is no clear consensus on how to assess hospital preparedness. A number of challenges in comparing readiness across facilities have been discussed in the literature, including the lack of consistent standards to ensure that different institutions are reporting equivalent measures.18 Moreover, there is a dearth of published data on the impact of hospital preparedness on actual hospital performance during MCEs, or on the appropriateness of mitigation and preparedness structures and processes during actual evacuations or efforts to shelter-in-place. Several researchers have recommended that hospitals adopt a more general all-hazards approach in their preparedness plans, supplemented with hazard-specific elements that account for facility-specific challenges (eg, decontamination and isolation).7,8,15 The VA’s adaptable, all-hazards-based approach used in the CEMP process could serve as a potential model for hospitals outside of the VA because it would address these limitations. Even if health care personnel are trained only for a generic emergency scenario, the SOPs developed for that scenario could help them handle other emergencies.9
The quantitative analyses of the two phases of CEMP data indicate an overall improvement in the level of hospital preparedness for each capability. These improvements might be due to lessons learned from Phase I recommendations, technical assistance and support provided by the OEM, or heightened awareness of what to expect during the second round of assessments. Regardless of the cause, these gains represent improved preparedness in areas deemed critical for hospitals by emergency management practitioners and other experts.
The observed improvements underscore the importance of measurement, because the act of assessing these capabilities likely contributed to the gains in preparedness between Phase I and Phase II. Measuring emergency preparedness in hospitals can lead to improvements by: (1) preventing the overuse, underuse, and misuse of resources for preparedness and response, and ensuring patient safety during and after disasters; (2) identifying what practices do and do not work in emergency management to drive improvement; (3) holding hospitals accountable for providing timely access to high-quality patient care during and after disasters; and (4) measuring and addressing disparities in how care is delivered during and after disasters.
These findings identified 14 “low-performing” capabilities, ranging from zero to four per MA (Tables 4A-4F). Various reasons may explain why some capabilities did not achieve the desired outcome. For some items, the expectations, standards, or guidelines were not fully developed at the time of the assessment; for others, facility constraints were the limiting factor. For example, item 4.2.6 (Maintaining Sewage and Waste Resiliency) was “low-performing” because space was not available at some facilities to provide complete back-up capability to maintain waste and sewage. Item 4.1.1 (Transporting Critical Staff to the Facility during Emergencies) also was recognized as a low-performing capability; in response, VA leadership has made efforts since the Phase II assessments to better publicize the circumstances under which government vehicles may be used to transport staff during emergencies. Similarly, for item 2.1.4 (Management of Extended Incident Operations), efforts at some VAMCs had historically focused on the management of incidents in the short-term rather than the long-term. Fully addressing this capability will require additional investments in facilities for space, training of leadership staff on the use of the Incident Command System, and awareness of the requirements of “proxy events” as part of exercise scenarios. Finally, two Program Management (MA 1) capabilities (1.7 and 1.11) were identified as areas that would require additional resources to improve performance. Other “low-performing” capabilities are receiving similar attention and review by the VA to determine remedies or steps for improvement. Furthermore, this analysis of “low-performing” capabilities was used by OEM as part of its decision-making process when determining whether to approve requests from VAMCs for funds to make improvements in preparedness.
In addition to helping facilities identify areas of concern, these assessments exposed non-emergency management staff and hospital leadership to emergency management issues and highlighted that emergency management is both a collaborative process and a shared responsibility. With regard to specific capabilities, there were items over which emergency managers had direct control (eg, alerting and warning systems, incident command, coordination and communications, and overall program structure and management) and others over which they had little direct control (eg, infrastructure resiliency, medical surge, on-site fire departments, research centers, access to cash, and home health care) and that required coordination with other departments within the facility or with outside community partners. This distinction highlights the importance of collaboration between different departments within a facility, as well as between the facility and outside agencies. These processes and relationships need to be established well in advance of an event to minimize disruptions to care.
Limitations/Future Assessments
The study had limitations. First, the tool has not been rigorously validated or thoroughly tested for reliability. Nevertheless, given the substantial input from content experts, including health system emergency managers, emergency physicians and nurses, engineers, infection control practitioners, safety officers, and leadership during its development, the VA CEMP hospital assessment tool has face validity and construct validity.32 The tool also appears to have some degree of reliability because the modifications between the two phases were relatively minor and VAMCs had consistent overall ratings between the two phases. However, the reliability of the tool cannot be assessed fully because inter-rater reliability among assessors was not evaluated systematically. It should be noted that the assessors did participate in the same trainings prior to assessing the VAMCs. Finally, although detailed guidelines were developed for the scoring rubric, assessors sometimes relied on achieving consensus to determine the final score for some capabilities. This process may have introduced subjectivity into the scoring, though there is no information to determine the extent to which this occurred or whether the scores would have been systematically higher or lower without consensus scoring.
Although subsets of the 61 assessed capabilities (eg, support to external missions) are not applicable to non-VA facilities, most should be applicable outside the VA. Ideally, assessments of facility preparedness would go beyond system and process measures and also link assessments to outcome measures. However, due to the uncommon and often unique nature of disasters, it may be difficult or impossible to establish such linkages. Accordingly, assessing performance during exercises and drills may be the best alternative. It is possible that large, integrated delivery systems like the VA may be able to assess outcomes in some areas.30
It also should be noted that the degree to which a hospital is prepared for emergencies depends on several key factors, including the availability and flexibility of financial and human resources, organizational location, the frequency of past emergencies and the threat of seasonal emergencies, and overall organizational culture. Medical sociologists have studied the relationship between organizational culture and operational decisions, providing evidence of a link between organizational culture and performance in a hospital setting33 that can be extended to how hospitals perform during major emergencies. The VA recognizes the value of emergency preparedness, as evidenced by the VA Strategic Plan, Goal 3, Objective 3.5: Ensure preparedness to provide services and protect people and assets continuously and in time of crisis. This assessment process was designed to provide a formative assessment for VAMCs to use in improving their comprehensive emergency management programs, and for leadership to use to better understand the current status and strategic requirements for preparedness of the VA health care system.34

The VA began to collect data for Phase III of the assessment program in 2015, continuing the process of assessing hospital response capabilities. Before Phase III of the CEMP assessment was fielded, the metrics and processes from earlier phases were evaluated critically and refined; as such, the program has been able to improve the assessment tool and processes within a dynamic framework. The all-hazards-based tool used to assess hospital preparedness in this study was derived from generally accepted standards and, with some modification, could be adapted for non-VA hospitals. The need for such a tool is particularly apparent at a time when the Centers for Medicare and Medicaid Services (CMS; Baltimore, Maryland USA) has issued a rule imposing new preparedness requirements on Medicare- and Medicaid-participating health care providers, including hospitals.35
Conclusion
The quantitative analysis of CEMP assessments across two phases indicated an overall improvement in hospital preparedness scores over time, which reinforces the value of conducting such assessments. The lack of consensus on how to measure hospital preparedness remains problematic since there are no consistent standards that can be applied to all hospitals to ensure that different institutions are reporting equivalent measures. More studies are needed to create valid and reliable measures that can be applied to all hospitals, but the CEMP offers one model that has been implemented and refined for use in VA facilities, with potential applicability to non-VA hospitals.