
Assessing Hospital Disaster Readiness Over Time at the US Department of Veterans Affairs

Published online by Cambridge University Press:  14 December 2016

Claudia Der-Martirosian*
Affiliation:
Veterans Emergency Management Evaluation Center, US Department of Veterans Affairs, North Hills, California USA
Tiffany A. Radcliff
Affiliation:
Veterans Emergency Management Evaluation Center, US Department of Veterans Affairs, North Hills, California USA; Department of Health Policy & Management, School of Public Health, Texas A&M University, College Station, Texas USA
Alicia R. Gable
Affiliation:
Veterans Emergency Management Evaluation Center, US Department of Veterans Affairs, North Hills, California USA
Deborah Riopelle
Affiliation:
VA Health Services Research and Development Center for the Study of Healthcare Innovation, Implementation, and Policy, VA Greater Los Angeles Healthcare System, North Hills, California USA
Farhad A. Hagigi
Affiliation:
Veterans Emergency Management Evaluation Center, US Department of Veterans Affairs, North Hills, California USA; Department of Family Medicine, School of Medicine, University of California-Los Angeles, Los Angeles, California USA; Anderson School of Management, University of California-Los Angeles, Los Angeles, California USA
Pete Brewster
Affiliation:
Department of Veterans Affairs, Veterans Health Administration, Office of Emergency Management, Martinsburg, West Virginia USA
Aram Dobalian
Affiliation:
Veterans Emergency Management Evaluation Center, US Department of Veterans Affairs, North Hills, California USA; Department of Health Policy and Management, Fielding School of Public Health, University of California-Los Angeles, Los Angeles, California USA; School of Nursing, University of California-Los Angeles, Los Angeles, California USA
*
Correspondence: Claudia Der-Martirosian, PhD, 16111 Plummer St. MS-152, North Hills, California 91343 USA. E-mail: Claudia.Der-Martirosian@va.gov

Abstract

Introduction

There have been numerous initiatives by government and private organizations to help hospitals become better prepared for major disasters and public health emergencies. This study reports on efforts by the US Department of Veterans Affairs (VA), Veterans Health Administration, Office of Emergency Management’s (OEM) Comprehensive Emergency Management Program (CEMP) to assess the readiness of VA Medical Centers (VAMCs) across the nation.

Hypothesis/Problem

This study conducts descriptive analyses of preparedness assessments of VAMCs and examines change in hospital readiness over time.

Methods

To assess change, quantitative analyses of data from two phases of preparedness assessments (Phase I: 2008-2010; Phase II: 2011-2013) at 137 VAMCs were conducted using 61 unique capabilities assessed during the two phases. The initial five-point Likert-like scale used to rate each capability was collapsed into a dichotomous variable: “not-developed=0” versus “developed=1.” To describe changes in preparedness over time, four new categories were created from the Phase I and Phase II dichotomous variables: (1) rated developed in both phases; (2) rated not-developed in Phase I but rated developed in Phase II; (3) rated not-developed in both phases; and (4) rated developed in Phase I but rated not-developed in Phase II.

Results

Of the 61 unique emergency preparedness capabilities, 33 items achieved the desired outcome: they were rated either “developed in both phases” or “became developed” in Phase II for at least 80% of VAMCs. For 14 items, 70%-80% of VAMCs achieved the desired outcome. The remaining 14 items were identified as “low-performing” capabilities, defined as those for which less than 70% of VAMCs achieved the desired outcome.

Conclusion

Measuring emergency management capabilities is a necessary first step to improving those capabilities. Furthermore, assessing hospital readiness over time and creating robust hospital readiness assessment tools can help hospitals make informed decisions regarding allocation of resources to ensure patient safety, provide timely access to high-quality patient care, and identify best practices in emergency management during and after disasters. Moreover, with some minor modifications, this comprehensive, all-hazards-based, hospital preparedness assessment tool could be adapted for use beyond the VA.

Der-Martirosian C, Radcliff TA, Gable AR, Riopelle D, Hagigi FA, Brewster P, Dobalian A. Assessing Hospital Disaster Readiness Over Time at the US Department of Veterans Affairs. Prehosp Disaster Med. 2017;32(1):46-57.

Type
Original Research
Copyright
© World Association for Disaster and Emergency Medicine 2016 

Introduction

Achieving and maintaining a high level of emergency preparedness is a major challenge for hospitals because of their role in emergency response. Equally challenging is how to assess hospital emergency readiness. Over the past decade, there have been numerous initiatives and tools developed by The Joint Commission (TJC; Oakbrook Terrace, Illinois USA), the US Department of Homeland Security (DHS; Washington, DC USA), the US Department of Health and Human Services (DHHS; Washington, DC USA), the World Health Organization (WHO; Geneva, Switzerland), and the US Department of Veterans Affairs (VA; Washington, DC USA) to help hospitals become better prepared for major disasters. Approaches to enhance hospitals’ emergency preparedness include: continuing education of health care professionals; 1,2 compilation of lessons learned from disaster exercises or emergency drills; 3 implementation of community-focused, emergency preparedness training programs; 4,5 and development of objectively-measured preparedness capabilities. 6,7

To date, there is limited agreement about what constitutes effective hospital emergency preparedness 8 and no widely-accepted, validated tool for measuring it. 7,9-12 Furthermore, there is considerable variation in the protocols and tools that exist for assessing a hospital’s emergency management capabilities. 11,12 Some instruments are more comprehensive, allowing for the measurement of hospital all-hazards preparedness, while a number are hazard-specific, such as a tool to assess the quality of standard operating procedures (SOPs) for pandemic influenza. 13-22 Other instruments are designed with a more specific focus, such as the evaluation of functional exercises, 23 incident command centers, 11 or other elements of preparedness; examples include the Agency for Healthcare Research and Quality (AHRQ; Rockville, Maryland USA) Disaster Drill Evaluation Tool 10,24 and the Association of State and Territorial Health Officials’ (ASTHO; Arlington, Virginia USA) National Health Security Preparedness Index. 25

Often, multiple approaches are employed as part of the same assessment. Assessments frequently start with a pre-site survey during which a hospital’s emergency manager is asked to complete a questionnaire that gauges staff training and skills, as well as the availability of equipment and supplies. Then, during a subsequent site visit, evaluators may use structured questionnaires and checklists to examine key equipment, space, and staff skills. 11,16-22 Hospital preparedness assessments also may include drills or functional exercises to evaluate facility performance during a simulated mass-casualty emergency (MCE). 23-26

Given the dramatic increase in the frequency and intensity of natural weather-related, technological, infectious disease, and human-caused disaster events during the past decade, 27-29 there is a significant need for reliable and valid methods to measure hospital preparedness capabilities. In recognition of this need, the VA’s Office of Emergency Management (OEM; Martinsburg, West Virginia USA) developed and implemented a Comprehensive Emergency Management Program (CEMP) in 2004. The VA CEMP is aimed at ensuring the resiliency, continuity, and rapid recovery of VA health care services and facilities during disasters and other potential disruptions to health care service delivery. 30 The VA is the largest health care system in the United States and has a mission to ensure emergency preparedness to assist veterans and their communities in times of disasters. The VA is divided into 18 geographic regions called Veterans Integrated Service Networks that currently include 152 VA Medical Centers (VAMCs) and 749 Community-based Outpatient Clinics located throughout the United States.

In 2004, the assessment of the VA’s level of “all-hazards preparedness” began with a survey of VAMCs using a self-administered hospital preparedness questionnaire modified from a survey tool developed for AHRQ and the Health Resources and Services Administration (HRSA; Rockville, Maryland USA). The VA CEMP program used: (1) findings from this survey; (2) a review of the relevant literature; (3) an examination of pertinent industry and governmental standards and guidelines; and (4) consultations with subject-matter experts to develop protocols and tools to assess each VAMC. The overall design was influenced heavily by the Institute of Medicine’s (IOM; Washington, DC USA) report on the Metropolitan Medical Response System. 31 The VA’s CEMP assessments began in 2007 with a pilot development phase and were then fully implemented in two successive phases: 2008-2010 (“Phase I”) and 2011-2013 (“Phase II”). This article reports descriptive analyses of CEMP hospital assessments in both phases and assesses change in hospital preparedness over time.

Methods

Descriptive analyses were conducted of data on the emergency capabilities, or “all-hazards preparedness,” of the 137 VAMCs assessed in both phases of the CEMP program (Phase I: 2008-2010; Phase II: 2011-2013). The development of the VA’s “all-hazards preparedness” assessment tool began in 2004, when data were collected using a questionnaire modified from the AHRQ and HRSA survey tool. Findings from this survey were combined with a review of the relevant literature, an examination of pertinent industry and governmental standards and guidelines, and consultations with subject-matter experts to develop VA-sponsored protocols and tools designed to assess CEMP at each VAMC. During each phase, data were collected by a team of experts who travelled to each VAMC and assessed each hospital’s emergency readiness through observation, demonstration, document review, and interviews with key staff. The team of experts who conducted the site visits consisted of: a team leader, who had a hospital director/administration background; a health care system engineer; a physician; a nurse; and a health care system emergency manager. Most of these individuals were former VA employees contracted for this purpose. Assessor training was conducted in 2008 and repeated as new staff joined the cadre. The team leader was responsible for ensuring assessors understood their role and the assessment process. For a detailed discussion of the development of the assessment tools and the process of data collection for the two phases, see the authors’ related article. 32

The CEMP assessment included six critical Mission Areas (MAs) as essential components: Program Management; Incident Management; Safety and Security; Resiliency and Continuity; Medical Surge; and Support to External Requirements (Table 1). Each MA included a set number of emergency preparedness capabilities. A five-point Likert-like scale (5=exemplary, 4=excellent, 3=developed, 2=being developed, and 1=needs attention) was used initially to indicate the final score for each capability, but was collapsed into a dichotomous variable for the current analysis of readiness and readiness changes over time: not-developed=0 (being developed or needs attention) versus developed=1 (exemplary, excellent, or developed). Since VA leadership is interested in knowing which capabilities are at least developed, the categories of “developed,” “excellent,” and “exemplary” were collapsed into a single category, and the categories of “being developed” and “needs attention” were collapsed into the other category, indicating that improvement or development of that specific capability was needed. This allowed the research team to assess the relative percentage of facilities that were meeting the capability standards compared to those that were not.

Table 1 Detailed Description of Six CEMP Mission Areas 32

Abbreviations: CEMP, Comprehensive Emergency Management Program; MA, Mission Area; VA, Veterans Affairs.

The sample and items for the present analysis included 137 VAMCs and 61 unique capabilities. While the CEMP assessment largely covered the same items and hospitals in both phases, there were some differences in the sample and items. Two VAMCs were assessed in Phase I, but not in Phase II, and three VAMCs were assessed in Phase II, but not in Phase I. The number of capabilities that were assessed in each phase and the number of capabilities that were assessed in both phases are presented in Table 2. In total, 69 capabilities were assessed in Phase I, 71 were assessed in Phase II, and 65 capabilities were assessed in both phases. However, four capabilities had missing assessment data for more than 50% of VAMCs; these capabilities assessed systems or processes that were not applicable to all VAMCs. Even though the majority of capabilities apply to all VAMCs, not all VAMCs are the same in level of complexity, setting, and role. For example, rural VAMCs have fire departments whereas urban VAMCs do not. The data analysis, therefore, included 61 capabilities (Table 2) that were applicable to all 137 assessed VAMCs.

Table 2 Number of Capabilities Included in Each Phase and in the Analysis

To describe changes in preparedness over time, four categories were used. A new variable was created from the two (Phase I and Phase II) dichotomous variables: (1) rated “developed” in both phases (no change/stayed developed); (2) rated “not-developed” in Phase I but rated “developed” in Phase II (improved/became developed); (3) rated “not-developed” in both phases (no change/never developed); and (4) rated “developed” in Phase I but rated “not-developed” in Phase II (worsened/became undeveloped). The results are displayed in tabular format as well as graphically (Figures 1A through 1F) for each MA. It is important to note that in this case, “developed” meant the specific capability either met or went above and beyond the required industry standards.
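To make the scoring and change-coding steps concrete, the following minimal sketch (written in Python with pandas) reproduces the logic described above on hypothetical data; the column names, facilities, and scores are illustrative only and are not drawn from the CEMP data set.

    import pandas as pd

    # Hypothetical long-format data: one row per VAMC x capability x phase,
    # with the original five-point rating (5=exemplary ... 1=needs attention).
    ratings = pd.DataFrame({
        "vamc":       ["A", "A", "B", "B"],
        "capability": ["1.5", "1.5", "1.5", "1.5"],
        "phase":      ["I", "II", "I", "II"],
        "score":      [2, 4, 3, 1],
    })

    # Collapse the five-point scale: 3-5 -> developed (1); 1-2 -> not developed (0).
    ratings["developed"] = (ratings["score"] >= 3).astype(int)

    # Place each VAMC-capability pair's Phase I and Phase II status side by side.
    wide = ratings.pivot_table(index=["vamc", "capability"],
                               columns="phase", values="developed").reset_index()

    # Assign the four change categories used in the analysis.
    def change_category(row):
        if row["I"] == 1 and row["II"] == 1:
            return "stayed developed"       # desired outcome
        if row["I"] == 0 and row["II"] == 1:
            return "became developed"       # desired outcome
        if row["I"] == 0 and row["II"] == 0:
            return "never developed"        # undesired outcome
        return "became undeveloped"         # undesired outcome

    wide["change"] = wide.apply(change_category, axis=1)
    print(wide)

From a table coded this way, the percent of VAMCs in each change category can be tabulated for every capability, which is the breakdown displayed in Figures 1A through 1F.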

Figure 1A Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 1 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.

Figure 1B Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 2 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.

Figure 1C Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 3 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.

Figure 1D-1 Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 4.1 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.

Figure 1D-2 Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 4.2 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.

Figure 1D-3 Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 4.3 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.

Figure 1E Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 5 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.

Figure 1F Percent of VAMCs Developed, Became Developed, Never Developed, and Became Undeveloped Between Two Phases for MA 6 Capabilities. Abbreviations: MA, Mission Area; VAMC, Veterans Affairs Medical Center.

Results

The data analysis illustrated the four ratings of change (or no change) in hospital preparedness between the two phases. For each capability in each MA, Figures 1A through 1F graphically display the detailed breakdown of the percentage of VAMCs in the four categories of change between Phase I and Phase II: (1) stayed developed (desired outcome); (2) became developed (desired outcome); (3) never developed (undesired outcome); and (4) declined from developed to not-developed (undesired outcome). Table 3 summarizes the findings into three columns illustrating the number of capabilities by the percent of VAMCs achieving the two combined desired outcomes: 80%+, 70%-80%, and <70% of VAMCs stayed or became developed. The 70%-80% criterion was selected since it was the range of median scores for the percent distributions of all 61 capabilities across all 137 VAMCs.
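As an illustration of how the summary in Table 3 was constructed, the short sketch below (again Python with pandas, using hypothetical percentages rather than actual CEMP results) bins capabilities by the percent of VAMCs achieving a desired outcome into the three reporting groups.

    import pandas as pd

    # Hypothetical percent of VAMCs achieving a desired outcome
    # (stayed developed or became developed) for a few capabilities.
    desired_pct = pd.Series({"1.5": 62.0, "1.7": 68.5, "2.1.4": 66.4,
                             "3.1.2": 74.1, "5.2": 69.3, "6.1": 91.2},
                            name="pct_desired")

    # Bin each capability into the three reporting groups used in Table 3.
    groups = pd.cut(desired_pct,
                    bins=[0, 70, 80, 100],
                    right=False,  # 70 falls in the middle group, 80 in the top group
                    labels=["<70% (low-performing)", "70%-80%", "80%+"])

    # Number of capabilities falling into each group.
    print(groups.value_counts().sort_index())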

Table 3 Number of Capabilities by Percent of VAMCs for Two Desired Outcomes (Stayed/Became Developed)

Abbreviation: VAMC, Veterans Affairs Medical Center.

Table 4A Mission Area 1 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)

Abbreviation: VAMC, Veterans Affairs Medical Center.

Table 4B Mission Area 2 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)

Abbreviation: VAMC, Veterans Affairs Medical Center.

Table 4C Mission Area 3 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)

Abbreviation: VAMC, Veterans Affairs Medical Center.

Table 4D Mission Area 4.1, 4.2, 4.3 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)

Abbreviation: VAMC, Veterans Affairs Medical Center.

Table 4E Mission Area 5 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)

Abbreviation: VAMC, Veterans Affairs Medical Center.

Table 4F Mission Area 6 Capabilities - Percent of VAMCs with Desired Outcome (Stayed/Became Developed)

Abbreviation: VAMC, Veterans Affairs Medical Center.

For 33 of the 61 capabilities, 80% or more of VAMCs were rated either developed in both phases or became developed in Phase II. For 14 capabilities, 70%-80% of VAMCs were rated developed in both phases or became developed in Phase II. The remaining 14 capabilities were identified as “low-performing,” defined as those for which less than 70% of VAMCs achieved the desired outcomes (see Table 3).

Tables 4A through 4F further illustrate the data for each capability by MA. For each table, the low-performing capabilities are listed in the last column. For Program Management (MA 1 – Table 4A), three low-performing capabilities were identified: capability 1.5 (Incorporation of Comprehensive Mitigation Planning into the Facility’s Emergency Management Program); 1.7 (Incorporation of Continuity Planning into the Activities of the Facility’s Emergency Management Program to Ensure Organizational Continuity and Resiliency of Mission Critical Functions, Processes, and Systems); and 1.11 (Incorporation of a Range of Exercise Types that Test the Facility’s Emergency Management Program). For Incident Management (MA 2 – Table 4B), one item, 2.1.4 (Management of Extended Incident Operations), was identified as low-performing. For Safety and Security (MA 3 – Table 4C), there were four low-performing capabilities: 3.1.2 (Processes and Procedures for Sheltering-in-Place); 3.1.3 (Processes and Procedures for Sheltering Family of Critical Staff); 3.3 (Processes and Procedures for Managing a Hazardous Substance Incident); and 3.4.3 (Processes and Procedures for Staff and Family Mass Prophylaxis during an Infectious Outbreak [ie, Influenza]). For Resiliency and Continuity (MA 4 – Table 4D), the three low-performing capabilities were: 4.2.4 (Development, Implementation, Management, and Maintenance of an Emergency Water Conservation Plan); 4.2.6 (Maintaining Sewage and Waste Resiliency); and 4.1.1 (Transporting Critical Staff to the Facility during an Emergency). For Medical Surge (MA 5 – Table 4E), capabilities 5.2 (Management of External Volunteers and Donations during Emergencies), 5.3.4 (Integration of Patient Reception, Surge, and Decontamination Teams), and 5.3.6 (Processes and Procedures for Control and Coordination of Mass Fatality Management) were identified as low-performing capabilities. There were no low-performing capabilities identified for Support to External Requirements (MA 6 – Table 4F).

Discussion

To date, there is no clear consensus on how to assess hospital preparedness. A number of challenges around comparing readiness across facilities have been discussed in the literature, including the lack of consistent standards to ensure different institutions are reporting equivalent measures. 18 Moreover, there is a dearth of published data on the impact of hospital preparedness on actual hospital performance during MCEs, or on the appropriateness of mitigation and preparedness structures and processes during actual evacuations or efforts to shelter-in-place. Several researchers have recommended that hospitals adopt a more general all-hazards approach in their preparedness plans, supplemented with hazard-specific elements that account for facility-specific challenges (eg, decontamination and isolation). 7,8,15 The VA’s adaptable, all-hazards-based approach used in the CEMP process could serve as a potential model for hospitals outside of the VA because it would address these limitations. Even if health care personnel are trained for a specific scenario, SOPs developed for a generic emergency could help them handle other types of emergencies. 9

The quantitative analyses of the two phases of CEMP data indicate an overall improvement in the level of hospital preparedness for each capability. These improvements might be due to lessons learned from Phase I recommendations, technical assistance and support provided by the OEM, or heightened awareness of what to expect during the second round of assessments. Regardless of the cause, the improvements represent improved preparedness in areas deemed critical for hospitals by emergency management practitioners and other experts.

The observed improvements underscore the importance of measurement because assessing these capabilities likely contributed to the observed improvements in preparedness between Phase I and Phase II. Measuring emergency preparedness in hospitals can lead to improvements by: (1) preventing the overuse, underuse, and misuse of resources for preparedness and response, and ensuring patient safety during and after disasters; (2) identifying what practices do and do not work in emergency management to drive improvement; (3) holding hospitals accountable for providing timely access to high-quality patient care during and after disasters; and (4) measuring and addressing disparities in how care is delivered during and after disasters.

These findings identified “low-performing” capabilities for 14 items, ranging from zero to four capabilities per MA (Tables 4A-4F). Various reasons may explain why some capabilities did not have the desired outcome. For some items, the expectations, standards, or guidelines were not fully developed at the time of the assessment. For example, item 4.2.6 (Maintaining Waste & Sewage Resiliency) was “low-performing” because space was not available at some facilities to provide complete back-up capability to maintain waste and sewage. Item 4.1.1 (Transporting Critical Staff to the Facility during Emergencies) also was recognized as a low-performing capability; accordingly, there have been efforts by VA leadership since the Phase II assessments to better publicize under what circumstances government vehicles may be used to transport staff during emergencies. Similarly, for item 2.1.4 (Management of Extended Incident Operations), efforts at some VAMCs had historically focused on the management of incidents in the short-term rather than the long-term. Fully addressing this capability will require additional investments in facilities for space, training of leadership staff on the use of the Incident Command System, and awareness of the requirements of “proxy events” as part of exercise scenarios. Finally, two Program Management (MA 1) capabilities (1.7 and 1.11) were identified as areas that would require additional resources to improve performance. Other “low-performing” capabilities are receiving similar attention and review by the VA to determine remedies or steps for improvement. Furthermore, this “low-performing” capabilities analysis was used by the OEM as part of its decision-making process when determining whether to approve requests from VAMCs for funds to make improvements in preparedness.

In addition to helping facilities identify areas of concern, these assessments exposed non-emergency management staff and hospital leadership to emergency management issues, and highlighted that emergency management is both a collaborative process and a shared responsibility. With regard to specific capabilities, there were items over which emergency managers had direct control (eg, alerting and warning systems, incident command, coordination and communications, and overall program structure and management) and others over which they had little direct control and that involved coordination with other departments within the facility or with outside community partners (eg, infrastructure resiliency, medical surge, on-site fire departments, research centers, access to cash, and home health care). This distinction highlights the importance of collaboration between different departments within a facility, as well as between the facility and outside agencies. These processes and relationships need to be established well in advance of an event to minimize disruptions to care.

Limitations/Future Assessments

The study had limitations. First, the tool has not been rigorously validated or thoroughly tested for reliability. Given the substantial input from content experts, including health system emergency managers, emergency physicians and nurses, engineers, infection control practitioners, safety officers, and leadership during its development, the VA CEMP hospital assessment tool has face validity and construct validity. 32 The tool also appears to have some degree of reliability because the modifications between the two phases were relatively minor and VAMCs had consistent overall ratings between the two phases. However, the reliability of the tool cannot be assessed fully because inter-rater reliability among assessors was not evaluated systematically. It should be noted that the assessors did participate in the same trainings prior to assessing the VAMCs. Finally, although detailed guidelines were developed for the scoring rubric, assessors sometimes relied on achieving consensus for determining the final score for some capabilities. This process may have inappropriately introduced subjectivity into the scoring process, though there is no information to determine the extent to which this occurred or whether the scores would be systematically higher or lower without consensus scoring.

Although subsets of the 61 assessed capabilities (eg, support to external missions) are not applicable to non-VA facilities, most capabilities should be applicable outside the VA. Ideally, assessments of facility preparedness would go beyond system and process measures and also link assessments to outcome measures. However, due to the uncommon and often unique nature of disasters, it may be difficult or impossible to establish such linkages. Accordingly, assessing performance during exercises and drills may be the best alternative. It is possible that large, integrated delivery systems like the VA may be able to assess outcomes in some areas. 30

It also should be noted that the degree to which a hospital is prepared for emergencies depends on several key factors, including the availability and flexibility of financial and human resources, organizational location, the frequency of past emergencies and the threat of seasonal emergencies, and overall organizational culture. The relationship between organizational culture and operational decisions has been the subject of studies by medical sociologists, which provide evidence on the relationship between organizational culture and performance in a hospital setting 33 that can be extended to how hospitals perform during major emergencies. The VA recognizes the value of emergency preparedness, as evidenced by the VA Strategic Plan, Goal 3, Objective 3.5: Ensure preparedness to provide services and protect people and assets continuously and in time of crisis. This assessment process was designed to provide a formative assessment for VAMCs to use in improving their comprehensive emergency management programs, and for leadership to better understand the current status and strategic requirements for preparedness of the VA health care system. 34 The VA began to collect data for Phase III of the assessment program in 2015, continuing the process of assessing hospital response capabilities. Before Phase III of the CEMP assessment was fielded, the metrics and processes from earlier phases were evaluated critically and refined. As such, the program has been able to improve the assessment tool and processes in a dynamic framework. The all-hazards-based tool used to assess hospital preparedness in this study was derived from generally accepted standards and, with some modification, could be adapted for non-VA hospitals. The need for such a tool is particularly apparent at a time when the Centers for Medicare and Medicaid Services (CMS; Baltimore, Maryland USA) has issued a rule imposing new preparedness requirements on Medicare- and Medicaid-participating health care providers, including hospitals. 35

Conclusion

These findings from quantitative analysis of CEMP assessments for two phases indicated an overall improvement in hospital preparedness scores over time, which reinforces the value of conducting such assessments. The lack of consensus on how to measure hospital preparedness remains problematic since there are no consistent standards that can be applied to all hospitals to ensure different institutions are reporting equivalent measures. More studies are needed to create valid and reliable measures that can be applied to all hospitals, but the CEMP offers one model that has been implemented and refined for use in VA facilities with potential applicability to non-VA hospitals.

References

1. Sauer LM, McCarthy ML, Knebel A, Brewster P. Major influences on hospital emergency management and disaster preparedness. Disaster Med Public Health Prep. 2009;3(1 Suppl):S68-S73.
2. Buyman A, Dubruiel N, Torghele K, Alperin M, Miner KR. Can summits lead to curricula change? An evaluation of emergency preparedness summits of schools of nursing in Georgia. J Contin Educ Nurs. 2009;40(5):210-215.
3. Ferrer RR, Ramirez M, Sauser K, Iverson E, Upperman JS. Emergency drills and exercises in health care organizations: assessment of pediatric population involvement using after-action reports. Am J Disaster Med. 2009;4(1):23-32.
4. Levy LA, Rokusk CF, Bragg SM, Howell JT. Interdisciplinary approach to all-hazards preparedness: are you ready? How do we know? J Public Health Manag Pract. 2009;15(2 Suppl):S8-S12.
5. Wang C, Wei S, Xiang H, et al. Development and evaluation of a leadership training program for public health emergency response: results from a Chinese study. BMC Public Health. 2008;8:377.
6. Barbera JA, Yeatts DJ, Macintyre AG. Challenge of hospital emergency preparedness: analysis and recommendations. Disaster Med Public Health Prep. 2009;3(2 Suppl):S74-S82.
7. Adini B, Laor D, Cohen R, Zadok R, Bar-Dayan Y. Assessing levels of hospital emergency preparedness. Prehosp Disaster Med. 2006;21(6):451-457.
8. Adini B, Goldberg A, Cohen D, Bar-Dayan Y. Evidence-based support for the all-hazards approach to emergency preparedness. Isr J Health Policy Res. 2012;1(1):40.
9. Adini B, Laor D, Hornik-Lurie T, Schwartz D, Aharonson-Daniel L. Improving hospital mass casualty preparedness through ongoing readiness evaluation. Am J Med Qual. 2012;27(5):426-433.
10. Kaji A, Langford V, Lewis RJ. Assessing hospital disaster preparedness: a comparison of an on-site survey, directly observed drill performance, and video analysis of teamwork. Ann Emerg Med. 2008;52(3):195-201.
11. Jenkins JL, Kelen GD, Sauer LM, Fredericksen KA, McCarthy ML. Review of hospital preparedness instruments for National Incident Management System compliance. Disaster Med Public Health Prep. 2009;3(1 Suppl):S83-S89.
12. McCarthy ML, Brewster P, Hsu EB, Macintyre AG, Kelen G. Consensus and tools needed to measure health. Disaster Med Public Health Prep. 2009;3(1 Suppl):S45-S51.
13. Adini B, Goldberg A, Cohen R, Bar-Dayan Y. Relationship between standards of procedures for pandemic flu and level of hospital performance in simulated drills. Ann Emerg Med. 2008;52(3):222-229.
14. Kaji AH, Lewis RJ. Hospital disaster preparedness in Los Angeles County. Acad Emerg Med. 2006;13(11):1198-1203.
15. Macintyre AG, Barbera JA, Brewster P. Health care emergency management: establishing the science of managing mass casualty and mass effect incidents. Disaster Med Public Health Prep. 2009;3(1 Suppl):S52-S58.
16. Higgins W, Wainright C, Lu N, Carrico R. Assessing hospital preparedness using an instrument based on the mass casualty disaster plan checklist: results of a statewide survey. Am J Infect Control. 2004;32(6):327-332.
17. Hick J, Koenig K, Barbisch D, Bey T. Surge capacity concepts for health care facilities: the CO-S-TR model for initial incident assessment. Disaster Med Public Health Prep. 2008;2(1 Suppl):S51-S57.
18. Hick J, Barbera J, Kelen G. Refining surge capacity: conventional, contingency, and crisis capacity. Disaster Med Public Health Prep. 2009;3(1 Suppl):S59-S67.
19. Hick J, Christian M, Sprung C. Surge capacity and infrastructure. Intensive Care Med. 2010;36(1 Suppl):S11-S20.
20. Hick J, Hanfling D, Cantrill S. Allocating scarce resources in disasters: emergency department principles. Ann Emerg Med. 2012;59(3):177-187.
21. Kaji A, Koenig K, Bey T. Surge capacity for health care systems: a conceptual framework. Ann Emerg Med. 2006;13(11):1157-1159.
22. Legemaate GAG, Burkle FM Jr, Bierens JJLM. The evaluation of research methods during disaster exercises: applicability for improving disaster health management. Prehosp Disaster Med. 2012;26(6):1-9.
23. Ingrassia P, Prato F, Geddo A, et al. Evaluation of medical management during a mass casualty incident exercise: an objective assessment tool to enhance direct observation. J Emerg Med. 2010;39(5):629-636.
24. Kaji A, Lewis R. Assessment of the reliability of the Johns Hopkins/Agency for Healthcare Research and Quality Hospital Disaster Drill Evaluation Tool. Ann Emerg Med. 2008;52(3):204-210.
25. Association of State and Territorial Health Officials. The National Health Security Preparedness Index: a new way to measure and advance our nation’s preparedness. http://www.nhspi.org/. Accessed July 16, 2014.
26. Barbisch D, Koenig K. Understanding surge capacity: essential elements. Acad Emerg Med. 2006;13(11):1098-1102.
27. Webster PJ, Holland GJ, Curry JA, Chang HR. Changes in tropical cyclone number, duration, and intensity in a warming environment. Science. 2005;309(5742):1844-1846.
28. Hay J, Mimura N. The changing nature of extreme weather and climate events: risks to sustainable development. Geomatics, Natural Hazards and Risk. 2010;1(1):3-18.
29. United States Agency for International Development (USAID). http://www.usaid.gov/what-we-do/global-health/pandemic-influenza-and-other-emerging-threats. Accessed July 16, 2014.
30. Dobalian A, Callis R, Davey VJ. Evolution of the Veterans Health Administration’s role in emergency management since September 11, 2001. Disaster Med Public Health Prep. 2011;5(Suppl 2):S182-S184.
31. Manning F, Goldfrank L (eds). Preparing for Terrorism: Tools for Evaluating the Metropolitan Medical Response System Program. Washington, DC USA: National Academy of Sciences; 2002.
32. Dobalian A, Stein JA, Radcliff TA, et al. Developing valid measures of emergency management capabilities within US Department of Veterans Affairs hospitals. Prehosp Disaster Med. 2016;31(5):475-484.
33. Jacobs R, Mannion R, Davies HTO, Harrison S, Konteh F, Walshe K. The relationship between organizational culture and performance in acute hospitals. Soc Sci Med. 2013;76:115-125.
34. United States Department of Veterans Affairs, Veterans Health Administration (VHA). Blueprint for Excellence. September 21, 2014. http://www.va.gov/HEALTH/docs/VHA_Blueprint_for_Excellence.pdf. Accessed July 16, 2015.
35. Emergency Preparedness Requirements for Medicare and Medicaid Participating Providers and Suppliers: a rule by CMS, September 16, 2016. https://www.federalregister.gov/documents/2016/09/16/2016-21404/medicare-and-medicaid-programs-emergency-preparedness-requirements-for-medicare-and-medicaid. Accessed September 30, 2016.