During the past decade, the US government has developed strategies such as the National Response Framework and the National Incident Management System to facilitate a unified and integrated all-hazards approach to emergency management in this country.1 To be relevant to all stakeholders and sectors of society, these national strategies are laid out in generic terms. Health care organizations are an integral component of emergency and disaster response,2, 3 but the national strategies lack the specificity to provide practical, operational guidance for health care organizations. More detailed guidance has been provided, but even so, general consensus on its usefulness or appropriate application is lacking.4
To improve health care emergency management in this country, a first critical step is to reach agreement on the essential components of health care emergency management. This should include defining what health care organizations are expected to do to prepare for, mitigate against, respond to, and recover from emergencies and disasters. The term healthcare emergency management capabilities (HEMCs) is used in this article to be consistent with usage at other organizations (eg, the Department of Homeland Security). A capability is defined as the ability to perform an action or generate an outcome.
Once a health care emergency management framework is defined and agreed upon, the next step is to develop a standardized approach to evaluate the real or exercised capabilities of health care organizations. Instruments to be created must be reliable and valid and have applicability during real responses to emergency incidents. Evaluative measures and metrics should address both preparedness and response capabilities. Other characteristics of the evaluation process require attention as well, such as selection and preparation of evaluators and scoring approaches. This brief description only superficially highlights the complexities surrounding the ability to determine what health care organizations are capable of doing during emergency or disaster response. The goal of such a process is to improve HEMCs of health care facilities across the United States.
This article examines several well-known efforts at describing HEMCs and related evaluative processes for health care organizations. First, HEMCs proposed by different key agencies are reviewed to examine consistency and to determine whether a core set of capabilities can be identified. Second, a summary is provided of different approaches being used to evaluate HEMCs. Finally, 2 tools are discussed that highlight the lack of formal research into how performance-based evaluation is done in this field. This article is intended to shed light on the strategic direction that the nation should take to improve the collective health care organizational response to disasters and emergencies.
HEMCs
Five organizations have produced materials that are relevant to the examination of HEMCs for health care organizations: the Veterans Health Administration (VHA), The Joint Commission (TJC), the Institute of Medicine (IOM), the Department of Homeland Security (DHS), and the Department of Health and Human Services (DHHS). These 5 were chosen because of their influence in shaping the future direction of health care emergency management in this country.
VHA has developed a comprehensive emergency management assessment program that is applicable to any health care system.5 The initial step for VHA was to define an emergency management framework that consists of 6 major capabilities (Table 1) and 69 capability-specific elements (not shown). The 6 major capabilities reflect VHA’s view of the emergency management priorities of any health care organization. VHA posits that the first priority of a health care organization during response is the safety and security of its occupants (ie, staff, patients, families, and others). The second priority is the ability to maintain continuity of patient care and business services. The third and fourth priorities are medical surge and support to external entities, if conditions allow. The 2 other major capabilities identified by VHA (incident management and the emergency management program) sustain and enhance the above response capabilities.
TABLE 1 Target Capabilities Identified by Leading Agencies
The HEMCs of TJC are based on its 2009 performance standards.6 TJC has identified 6 critical emergency response areas (ie, communication, resources and assets, safety and security, staff including volunteers, utilities, and patient care activities) that are represented by 8 performance standards, along with 4 standards that support them (ie, emergency management planning, the emergency operations plan, evaluation of emergency management planning, and evaluation of the emergency operations plan; Table 1).
The IOM capabilities were identified by an IOM committee that developed performance measures and evaluation methods for DHHS so that it could evaluate the preparedness of cities participating in the Metropolitan Medical Response System (MMRS) program.7 Although the charge to the IOM committee was to develop a set of performance measures for the MMRS program, much of its work is relevant to health care organizations and so is included in this article. The IOM committee identified 23 essential response capabilities that it considered critical actions for effective disaster response (Table 1).
DHS identified 37 generic capabilities that define preparedness at the local community level and are applicable to all sectors of society.8 In this article, we have included those capabilities related to the health care sector as major HEMCs in Table 1. Table 1 also includes DHS capability-specific elements that are part of the medical surge exercise evaluation guide of the Homeland Security Exercise and Evaluation Program (HSEEP).9
Finally, the DHHS HEMCs are those identified as the objectives of the Hospital Preparedness Program coordinated by the Office of the Assistant Secretary for Preparedness and Response.10
The HEMCs identified by the agencies originate from different conceptual foundations. VHA, TJC, and the IOM have developed their capabilities based on emergency management principles of mitigation, preparedness, response, and recovery. In contrast, DHHS focuses on preparedness and response and DHS has created its own taxonomy related to terrorism (ie, prevention, protection, response, and recovery). Another important difference among the agencies is that VHA, TJC, and DHHS capability frameworks target the individual hospital and its support to the community, whereas IOM and DHS target health care capabilities at the community response level (hence the capability-specific elements related to mass care provided by these 2 organizations).
Table 1 summarizes the major HEMCs identified by the different agencies. Due to the terminology and taxonomy differences among the various efforts, Table 1 has been arranged to list capabilities in order of how frequently they are identified as a major capability by the agencies themselves. For example, TJC has 12 performance standards and a number of elements of performance related to each performance standard. In Table 1, the performance standards are categorized as major capabilities and the elements of performance as capability specific.
In Table 1, capabilities identified as major by 2 agencies or fewer are listed as capability-specific elements under another major capability when conceptually appropriate. For example, environmental health was identified as a major capability by 2 agencies and is related to occupant safety and security, so it was categorized under that capability. In 2 instances, an agency identified 2 major capabilities that the other agencies included as 1, so they were combined in Table 1. DHHS has 2 major capabilities, one for integration of efforts across different medical and public health response entities and one for coordination across all response tiers; in Table 1, they are combined as integration and support to external entities. We also combined 2 TJC performance standards related to the management of volunteers (1 for licensed practitioners and the other for those not licensed).
Occupant safety and security and continuity of operations were identified as major capabilities by 4 of the 5 agencies. An additional 5 capabilities were identified as major by 3 agencies (Table 1). An example of consistency at the capability-specific level was the identification of personal protective equipment by all 5 agencies. In general, the concepts are consistent, and the most common difference among the agencies is whether a capability is identified as a major capability or as an element of a major capability. For example, 3 agencies identified management of volunteers as a major capability and the other 2 identified it as an element. VHA’s framework identifies the management of volunteers as a capability-specific element related to medical surge. DHHS is the only agency to identify preparing for the needs of at-risk individuals as a major capability; VHA and TJC recognize it as a capability-specific element.
The VHA framework places more importance on a systems-based approach to emergency management compared with the other agencies. In the VHA model, it is not sufficient to have an emergency management committee or an emergency operations plan. Rather, a health care facility must have an emergency management program that organizes all emergency management activities within a single system. TJC also places importance on emergency management as reflected by 4 standards related to emergency management planning and evaluation.
In summary, a common conceptual framework is critical to health care emergency management, and considerable agreement already exists. Seven capabilities were identified by at least 3 of the 5 agencies. When the agencies differed, most often the issue was not whether a capability should be included but rather whether it should be considered a major capability or a capability-specific element. VHA and TJC have developed the most detailed emergency management frameworks for hospitals, and their frameworks overlap considerably. These results suggest that consensus on a framework is obtainable, one that can be used by all health care facilities to guide their disaster preparedness and response efforts.
APPROACHES TO THE EVALUATION OF HEMCs
This section presents the 5 agencies’ performance-based approaches to the measurement of HEMCs. Self-administered, self-reported surveys are discussed elsewhere in this issue.11 To measure the HEMCs of its hospitals, VHA has developed a formative evaluation program that seeks to improve and appraise hospitals’ capabilities simultaneously. This dual purpose is reflected in key characteristics of the program. First, the evaluation is carried out by an independent, multidisciplinary assessment team.5
Second, VHA relies primarily on a scheduled, onsite evaluation to assess its facilities’ HEMCs. The team evaluates each facility’s capabilities based on data collected from the following sources during the onsite evaluation: interviews with key personnel, facility tours of functional units that are important to the emergency management program, review of key documents, 1 tabletop exercise, and several capability demonstrations.5
Third, VHA relies on a standardized assessment process that includes a uniform set of questions, tabletop exercises, and capability demonstrations; explicit evaluation criteria; and a formalized grading system. For each of the 69 capability-specific elements, VHA developed a standard set of questions that the assessment team asks during the onsite evaluation. In addition to the self-reported data from the interview questions, VHA also evaluates the capability-specific elements using the following standardized criteria: Are there policies that support the capability? Are there resources to maintain or enable the capability? Are there processes to address or support the capability? Have education and training related to the capability been provided to staff? Have exercises and activities been conducted to promote the capability? Has the capability been evaluated and the results reported by the facility? Is there evidence of organizational learning and process improvement as a result of exercises and evaluation? Table 2 illustrates how VHA uses these 7 criteria to evaluate 1 capability-specific element.5
TABLE 2 Criteria Used by Veterans Health Administration to Evaluate Health Care Facilities’ Interface With Community Health Care Organizations During an Emergency
VHA uses the information gathered from all of these sources to grade each of the 69 capability-specific elements on a 5-level ordinal scale ranging from 0 (needs attention) to 4 (exemplary). Table 3 shows the grading scale for a facility’s response interface with community health care organizations. The assessment team completes grading through a consensus process. The assessment team compiles the ratings for the 69 capability-specific elements, generates a final report, and then reviews the report with key personnel from the facility. In addition to the ratings, the final report identifies major strengths and weaknesses of the facility by highlighting 3 capabilities rated as exemplary and 3 considered in need of attention.5 Finally, the organization being evaluated is provided with recommendations and access to information to help it address any deficiencies noted.
TABLE 3 Measurement Scale Used by Veterans Health Administration to Grade Health Care Facilities’ Interface With Community Health Care Organizations During an Emergency
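To make the mechanics of this scoring approach concrete, the following is a minimal sketch in Python of how an assessment team’s findings for a single capability-specific element might be recorded against the 7 standardized criteria together with the consensus grade on the 0 (needs attention) to 4 (exemplary) scale. The field names and the criteria-counting helper are illustrative assumptions; VHA assigns the grade through team consensus, not by formula.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and the counting helper are assumptions,
# not the actual VHA instrument. VHA assigns the 0-4 grade by team consensus.
@dataclass
class ElementAssessment:
    element: str
    policies: bool            # Are there policies that support the capability?
    resources: bool           # Are there resources to maintain or enable it?
    processes: bool           # Are there processes to address or support it?
    education_training: bool  # Have related education and training been provided to staff?
    exercises: bool           # Have exercises and activities promoted the capability?
    evaluation: bool          # Has the capability been evaluated and the results reported?
    improvement: bool         # Is there evidence of organizational learning and improvement?
    consensus_grade: int      # 0 (needs attention) to 4 (exemplary), set by the team

    def criteria_met(self) -> int:
        """Count how many of the 7 criteria the assessment team judged to be satisfied."""
        return sum([self.policies, self.resources, self.processes,
                    self.education_training, self.exercises,
                    self.evaluation, self.improvement])

# Hypothetical record for the community-interface element shown in Tables 2 and 3.
interface = ElementAssessment(
    element="Interface with community health care organizations",
    policies=True, resources=True, processes=True, education_training=True,
    exercises=True, evaluation=False, improvement=False,
    consensus_grade=2,
)
print(interface.criteria_met(), "criteria met; consensus grade", interface.consensus_grade)
```

In an actual VHA assessment, 69 such records, one per capability-specific element, would feed the final report and the identification of exemplary and needs-attention capabilities.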
TJC also relies on an onsite visit to evaluate the HEMCs of hospitals and collects data from sources similar to those used by VHA at the organization being evaluated, but TJC’s approach differs from VHA’s in the following ways: the TJC visit is unannounced and can occur on any day at any time; the emergency management assessment is performed by a single member of the survey team and usually is brief (1–3 hours); and the scoring elements are not as well defined, with the majority graded dichotomously (yes vs no) without a clear definition of compliance.
The IOM committee made 2 key recommendations for the evaluation of the MMRS program. First, it recommended that the evaluation focus on input, process, and output indicators of HEMCs (rather than outcomes) because these indicators can be assessed routinely. Inputs are the constituent parts of a deliverable, such as personnel or equipment. Processes are actions taken to support the capability, such as training staff or developing a memorandum of understanding. Outputs are capabilities that result from the inputs and processes, such as demonstration of critical skills in tabletop exercises or drills.7 Second, the committee recommended that, to achieve a comprehensive and valid assessment, the evaluation should rely on multiple types of measures collected from a variety of sources, including peer-review interviews, written documentation, surveys, and exercises.7 In producing a strategic document, the IOM committee did not specify who should conduct the evaluation, nor did it develop a system for grading the capabilities.
Exercises are one of the mechanisms that DHS uses to evaluate HEMCs. To assist states, tribes, communities, and organizations in conducting exercises, DHS has developed the HSEEP, a capabilities- and performance-based exercise program that provides standardized policy, terminology, and methodology for exercise design, development, conduct, evaluation, and improvement planning.9
DHS has developed template exercise evaluation guides for 35 of its 37 target capabilities.9 The exercise evaluation guide developed for medical surge capacity requires the evaluator to observe the drill participants, check off tasks completed from a standardized list (recording the time of completion, if applicable), write a chronological narrative of observed responder actions, and identify 3 areas of strength and 3 areas in need of improvement. The evaluator uses the documentation provided in the exercise evaluation guide to write an after-action report. Although the HSEEP was not developed specifically for health care organizations, its principles and reference documents are one resource that health care organizations can use and adapt to their own purposes.9
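To show the kind of structured record an HSEEP-style exercise evaluation guide asks an evaluator to produce, here is a small sketch; the class and field names are assumptions for illustration and do not reproduce the actual HSEEP forms.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative structure only; names are assumptions, not the actual HSEEP forms.
@dataclass
class TaskObservation:
    task: str                              # task from the standardized list
    completed: bool
    completion_time: Optional[str] = None  # recorded if applicable, eg "10:42"

@dataclass
class ExerciseEvaluation:
    capability: str
    tasks: List[TaskObservation] = field(default_factory=list)
    narrative: str = ""                                    # chronological narrative of responder actions
    strengths: List[str] = field(default_factory=list)     # 3 areas of strength
    improvements: List[str] = field(default_factory=list)  # 3 areas in need of improvement

surge = ExerciseEvaluation(capability="Medical surge")
surge.tasks.append(TaskObservation("Activate the emergency operations plan", True, "09:05"))
surge.strengths.append("Rapid activation of incident command")
surge.improvements.append("Delayed reporting of available beds to the regional coordinator")
```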
Finally, DHHS has developed a number of performance measures at the state and hospital levels that it uses to evaluate the effectiveness of the Hospital Preparedness Program (HPP). These performance measures are based on data collected from site visits, surveys, and exercises. For example, in fiscal year 2008, one of the capabilities that states were required to develop was an operational, standardized bed tracking system. DHHS evaluates this capability-specific element at the state level by determining how many participating states’ emergency operations centers can report available beds for at least 75% of participating hospitals (using the formal bed definitions) within 4 hours of an inquiry from the Secretary’s Operations Center in Washington, DC. At the hospital level, DHHS evaluates whether a hospital can report its number of available beds to the state’s emergency operations center within 1 hour of a request.12
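As a worked illustration of how such a benchmark could be computed, the sketch below checks the state-level criterion described above, at least 75% of participating hospitals reporting available beds within 4 hours of the inquiry; the function name and the data are hypothetical.

```python
# Hypothetical illustration of the state-level bed-reporting benchmark described above:
# at least 75% of participating hospitals report available beds within 4 hours.
def meets_bed_reporting_benchmark(report_times_hr, n_participating,
                                  threshold=0.75, window_hr=4.0):
    """report_times_hr: hours from inquiry to each hospital's report (non-reporting hospitals omitted)."""
    on_time = sum(1 for t in report_times_hr if t <= window_hr)
    return (on_time / n_participating) >= threshold

# 20 participating hospitals; 16 report within the 4-hour window (16/20 = 80% >= 75%).
times = [0.5, 1.2, 2.0, 3.5, 3.9, 1.1, 0.8, 2.7, 3.2, 1.9,
         2.5, 3.8, 0.9, 1.4, 2.2, 3.0, 4.5, 5.0, 6.1, 7.2]
print(meets_bed_reporting_benchmark(times, n_participating=20))  # True
```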
In summary, the different organizations are using a variety of strategies to evaluate HEMCs. The major strengths of the VHA evaluation approach are that it uses a multidisciplinary evaluation team and has standardized many aspects of the evaluation process. A unique strength of TJC is that it conducts unannounced evaluations, which make it harder for organizations to rely on just-in-time preparations. VHA and TJC rely more heavily on the results of onsite interviews and review of documentation than they do on performance during exercises. DHS encourages organizations to incorporate exercises more frequently into their training and evaluation activities and has developed HSEEP to facilitate this. Finally, DHHS has taken performance-based evaluation 1 step further and developed capability benchmarks so that it can simultaneously assess the effectiveness of its program and improve the medical response capabilities of communities. Although more costly, the optimal approach, as recommended by the IOM committee, is to base performance evaluation on multiple types of measures.
Existing Tools That Measure HEMCs
Evaluation is critical to the improvement process, yet few tools have been developed to measure the HEMCs of health care organizations. Some of the agencies described in this article have tools for evaluating HEMCs, but none of them have been scientifically vetted. This section describes the available hospital-based tools and summarizes the evidence of their reliability and validity.
An integral component of the evaluation of HEMCs is an assessment of important preparedness documents, such as a facility’s hazard vulnerability assessment or emergency operations plan. Adini et al13 developed a tool to evaluate the quality of standard operating procedures (SOPs) for pandemic influenza. Their tool consists of 31 items that measure 7 dimensions of SOPs (eg, manpower, protecting staff and patients, detection and identification). They weighted each item based on expert ratings of its importance to the management of a pandemic influenza outbreak. A team conducted site visits to all 24 general hospitals in Israel and evaluated the quality of each hospital’s SOPs for pandemic influenza using the SOP instrument. Only 2 of the 7 subscales of the SOP instrument demonstrated adequate internal consistency reliability. In addition, 2 of the subscales did not demonstrate adequate item–total correlations.13 These results suggest that their SOP instrument is not highly reliable, but it does provide a model for evaluating the evaluation process itself.
Adini et al13 also examined the validity of the SOP instrument by comparing it with performance on a simulated drill for avian influenza. They developed a drill checklist that consisted of the same 7 dimensions as the SOP instrument. A pair of observers evaluated each of the 24 hospitals’ drill performance using the checklist. All but 2 of the drill checklist’s subscales demonstrated adequate internal consistency reliability, and all of the items were reasonably homogeneous as measured by their item–total correlations. In terms of validity, however, the scores from only 3 of the 7 subscales were significantly correlated between the SOP tool and the drill performance checklist.13 It is unclear whether the lack of correlation found for the majority of the subscales reflects problems with 1 or both of the newly developed instruments.
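For readers less familiar with the psychometric terms used in these studies, the sketch below shows how internal consistency (Cronbach’s alpha), corrected item–total correlations, and a subscale-level SOP-to-drill correlation are typically computed; the ratings are invented for illustration and are not the Adini et al data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for one subscale; rows = hospitals, columns = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items):
    """Corrected item-total correlation: each item vs the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    return [np.corrcoef(items[:, j], np.delete(items, j, axis=1).sum(axis=1))[0, 1]
            for j in range(items.shape[1])]

# Invented ratings: one SOP subscale (6 hospitals x 4 items) and the corresponding
# drill-checklist subscale totals for the same 6 hospitals.
sop_items = np.array([[3, 4, 3, 4],
                      [2, 2, 3, 2],
                      [4, 4, 4, 5],
                      [1, 2, 1, 2],
                      [3, 3, 4, 3],
                      [5, 4, 5, 5]])
drill_totals = np.array([14, 9, 17, 6, 13, 19])

print(cronbach_alpha(sop_items))                               # internal consistency of the subscale
print(item_total_correlations(sop_items))                      # homogeneity of individual items
print(np.corrcoef(sop_items.sum(axis=1), drill_totals)[0, 1])  # SOP vs drill score (validity evidence)
```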
Another performance-based evaluation tool is the Johns Hopkins University/Agency for Healthcare Research and Quality (JHU/AHRQ) hospital disaster drill evaluation tool.14 Kaji and Lewis15 evaluated the reliability and validity of the tool during a bomb explosion drill conducted at 6 hospitals. Different pairs of medical students independently rated each hospital’s drill performance. The investigators found that the internal consistency reliability of the instrument’s scales was good, but the interrater reliability among the paired observers was not; the raters, however, were medical students who had completed only a 4-hour training session. Thus, it is not clear whether the low interrater reliability is a result of inadequate experience or training of the raters, the instrument itself, or a combination of the 2.15 A revised and briefer version of the JHU/AHRQ tool that incorporates end-user response issues has recently been released.16
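As a small illustration of the interrater reliability issue raised by Kaji and Lewis, the sketch below computes Cohen’s kappa, a chance-corrected agreement statistic, for a hypothetical pair of observers scoring the same dichotomous drill items; the ratings are invented and are not the study data.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items as yes (1) or no (0)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a_yes = sum(rater_a) / n
    p_b_yes = sum(rater_b) / n
    expected = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Invented paired ratings of 10 dichotomous drill-checklist items.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(rater_a, rater_b), 2))  # approximately 0.35: modest agreement beyond chance
```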
During the same bomb explosion drill, the investigative team also examined the relation of the JHU/AHRQ drill evaluation tool to an onsite survey of emergency preparedness and a video analysis of teamwork.17 Because the teamwork analysis focused on performance in the incident command center, the investigators examined only the relation between the incident command center module of the JHU/AHRQ tool and the onsite survey and video analysis. They found no relation between the onsite survey and the drill evaluation tool, but they did find a strong relation between drill performance and the video analysis of teamwork. The lack of association between the onsite survey and the incident command module of the JHU/AHRQ tool may be a result of the small sample size and the generic nature of the onsite survey, or it may indicate that revisions of the drill evaluation tool are needed.
In summary, there have been few efforts to critically examine performance-based instruments for measuring HEMCs. Significant methodological challenges need to be addressed in the further development and validation of evaluative instruments for emergency management, including the selection and preparation of evaluators, an adequate and nonbiased sample size, and selection of appropriate validation criteria. Without instruments that reliably and validly measure HEMCs, it will not be possible to compare across facilities, to quantify improvement over time, to evaluate the impact of an intervention on a facility’s capabilities, or to conduct cost–benefit analyses. This is an important area of research and would benefit from collaborative work among experts in instrument development and evaluation, emergency management, and clinical operations.
Conclusions
The results of this study suggest that there is relatively good conceptual consensus on what health care organizations are expected to do following a disaster and how they should prepare. It is time to reach consensus on a national framework for HEMCs so that subsequent efforts can focus on evaluation and improvement. It is recommended that further attempts to develop evaluative tools be deferred until a common, accepted capability framework is in place. All of the organizations presented here agree that a wide variety of measures and data sources is necessary to evaluate HEMCs. The challenge remains to develop a rigorous, scientifically based set of evaluation tools and strategies that health care facilities across the country can use to improve their emergency management capabilities.
Authors' Disclosures
The authors report no conflicts of interest.