Introduction
Since the terrorist attacks on the United States in 2001, Hurricane Katrina in 2005, and Hurricane Sandy in 2012, The Joint Commission (Oakbrook Terrace, Illinois USA), the United States Department of Homeland Security (Washington, DC USA), the US Department of Health and Human Services (Washington, DC USA), and others have undertaken numerous efforts to help hospitals become better prepared for major disasters.[1] These efforts have included several approaches to helping hospitals build comprehensive emergency management systems, including: the continuing education of health care professionals;[2] the compilation of lessons learned from disaster exercises and emergency drills;[3] the implementation of multidisciplinary, community-focused emergency preparedness training programs;[4,5] and the development of objectively measured preparedness capabilities.[6,7]
Despite these laudable efforts, there remains limited information regarding what constitutes effective emergency preparedness,[8] and there is no widely-accepted, validated tool for evaluating hospital emergency preparedness.[7,9,10] Although several instruments for evaluating elements of preparedness exist, they are rarely tested for reliability and validity.[10-16] Furthermore, there is considerable redundancy in the protocols and tools that exist for assessing a hospital's emergency management capabilities.[11,12] Accordingly, achieving and maintaining a high level of preparedness remains a major challenge for health care institutions, particularly hospitals.
A hospital's emergency management capabilities typically are conceptualized into two broad categories: (1) the hospital's emergency management plan; and (2) the hospital's surge capacity. Under this approach, the emergency management plan contains specifics about how the health care system will respond, continue to function, and adapt to an emergency.[12,17-20] In contrast, surge capacity refers to a hospital's potential to expand patient care capabilities to meet the increased medical needs of a community during a mass-casualty event, or other disaster, that would overload normal daily operations.[21-25]
The hospital preparedness literature describes numerous challenges to evaluating hospital emergency preparedness, including the need for consistent benchmarks to ensure that different institutions are reporting equivalent measures. For example, in responding to a survey question about the number of available beds, one hospital may include the number of conventional everyday beds, while another hospital may include assorted contingency beds, such as those that could be made available in an emergency by accelerating the discharge process.[22]
The need for hospital preparedness benchmarks extends to the US Department of Veterans Affairs (VA; Washington, DC USA), the largest fully-integrated health care system in the US. The VA includes VA Medical Centers (VAMCs) across the nation and has a mission to serve veterans and their communities when emergencies occur.[26] This study reports on the development of a hospital preparedness assessment tool for VAMCs. The authors evaluated hospital preparedness in six proposed domains or "Mission Areas" (MAs), each composed of numerous observable hospital preparedness capabilities. This paper reports on two successive assessments (Phase I and Phase II) of the six MAs' construct validity, or the degree to which component capabilities relate to one another to successfully represent the associated domain. This study of the assessed measures represents an initial step in creating a reliable and valid measure of hospital preparedness.
Methods
This study reports the results of a confirmatory factor analysis (CFA) using data on the emergency capabilities, or "all-hazard preparedness," of all 140 VAMCs in the US. No VAMCs were excluded from the study. Data were collected by a team of experts who traveled to each VAMC and assessed each hospital's emergency readiness through observation, demonstration, and interviews with key staff. Data were collected in two phases, first in 2008-2010 and then in 2011-2013. The CFA was used to assess the construct validity of the six domains believed to represent the emergency readiness of a VAMC.
Protocol Development and Assessment Process
One of the first steps in VA's efforts to assess its "all-hazard preparedness" was a 2004 survey of VAMCs. Data were collected using a questionnaire modified from a tool developed for the US Agency for Healthcare Research and Quality (Rockville, Maryland USA) and the US Health Resources and Services Administration (Rockville, Maryland USA) to assess hospital preparedness. Findings from this survey then were combined with a review of the relevant literature, an examination of pertinent industry and governmental standards and guidelines, and consultation with subject-matter experts to develop VA-sponsored protocols and tools designed to assess the Comprehensive Emergency Management Program (CEMP) at each VAMC.
Following this review, the CEMP study team developed the hospital preparedness protocol and assessment methodology by: (1) identifying the essential components of a hospital-based CEMP; (2) determining capabilities associated with these essential components; (3) describing capabilities through a descriptive framework; (4) determining capability measurement and data collection processes and tools; and (5) piloting the process using trained assessment teams. The development of the protocol was assisted at each stage by a technical expert panel and two steering committees. The technical experts had expertise in emergency management, emergency medicine, hospital operations, hospital administration, nursing, medicine, engineering, or safety, and contributed through four technical expert panel meetings, with additional consultation on an ad hoc basis. The first steering committee consisted of VA personnel from various VAMCs, regional VA offices, or VA headquarters (Washington, DC USA). The second steering committee consisted of representatives from several of VA's federal partners, including the Department of Health and Human Services, the Department of Homeland Security, and the Department of Defense (Arlington, Virginia USA). The purpose of the second "federal partners" steering committee was to ensure consistency across agencies with preparedness missions. The CEMP study team met weekly with both steering committees.
Identification of Essential Components of a CEMP
The CEMP study team began by identifying the most critical missions of an emergency management program and the capabilities associated with those missions. The six critical MAs included: Program Management; Incident Management; Safety and Security; Resiliency and Continuity; Medical Surge; and Support to External Requirements (Table 1).
Abbreviations: CEMP, Comprehensive Emergency Management Program; CFA, confirmatory factor analysis; LF, low factor loading (<.3); M, too many missing; MA, Mission Area; NI, not included.
Assessment Tools and Processes
The assessment protocol represents a “hospital target capability list” consisting of two levels of capabilities: program level and emergency operations level. The program level capabilities are those activities conducted on an on-going basis to develop, maintain, and evaluate the overall CEMP (mitigation, preparedness, response, and recovery activities). The emergency operations level capabilities are those activities commonly required during the response and recovery to any type of emergency. These were organized using the priorities of any organization in emergency operations: incident management; occupant safety; continuity of operations or resiliency; expansion of service (surge); and support to external requirements.
The Phase I data collection protocol and tools were pilot tested at two VAMC sites in 2008, with modifications made after each pilot visit. Data collection occurred from mid-2008 to 2010 and included 140 VAMCs. The Phase II assessment (2011-2013) was conducted on the same 140 VAMCs using a similar, but modified, protocol. In Phase II, seven new items were added, five were removed, and four were re-worded. Two capabilities were re-classified; both items originally were included in MA 4, but one was re-classified to MA 1 and the other to MA 2.
Data on the VAMCs were collected by three-person assessment teams. The assessment team leader had leadership experience in the hospital setting and was often a former, but not current, VA employee. The second assessor was required to have at least ten years of experience in a clinical or engineering field. The third team member, the VA liaison, was selected by VA and provided VA-specific context to the other team members. The VA liaison was a current VA employee, generally an emergency manager from another VAMC or from VA headquarters. An initial 12-hour training session was conducted in April 2008 for all assessment team members, and monthly in-service sessions provided updates. The specific individuals who made up each site visit team varied across the assessed VAMCs.
Data Collection
The assessments used multiple data collection approaches. First, data were collected by a pre-site visit questionnaire. The pre-survey used National Fire Protection Association (Quincy, Massachusetts USA) 1600 (Standard on Disaster/Emergency Management and Business Continuity Programs) as an organizing framework. In addition, data were collected during a four-day, on-site visit to each VAMC by: (1) observation during a facility tour, capability demonstrations, and tabletop exercises; (2) individual and group interviews of key staff and leadership personnel; and (3) a review of emergency management-related documents. The site visit began with an opening conference to ensure leadership and key staff understood the site visit objectives and ended with a closing conference to present the assessment team's observations on strengths and weaknesses. The tabletop exercise, conducted on the final day of the visit, simulated a response to a hazard relevant to the facility, as identified in its Hazard Vulnerability Assessment. Based on these data, the assessment team completed a scoring tool for each VAMC. The Phase I assessment tool included 69 VAMC capabilities; the Phase II assessment tool included 71.
The assessment teams used a standardized site visit agenda, interview questions, and scoring tool. For each session in the site visit agenda, there was a lead assessor who conducted the interviews, capability demonstrations, and tabletop exercises. At least one other assessor was present, and both individuals recorded their notes from the discussions. Each evening during the site visit, the assessment team leader conducted a scoring session where the assessment team presented their findings and all team members had input into the final score for each capability.
Analysis
Analyses of the Phase I and Phase II data were performed in a similar fashion. The CFAs were performed using the EQS structural equations program (Encino, California USA)[27] to test the associations among the items (or capabilities) hypothesized to belong within the six MAs (and MA sub-scales in Phase II). Each MA was tested separately because of the relatively small sample size and the large number of items to be assessed. The advantage of CFA over other factor analytic procedures is that it provides goodness-of-fit statistics that assess the closeness of the hypothesized model to the empirical data. Goodness-of-fit was assessed with maximum likelihood χ2 (ML χ2) and robust Satorra-Bentler χ2 (robust S-B χ2) values, the Comparative Fit Index (CFI), the Robust Comparative Fit Index (RCFI), and both Cronbach's alpha coefficient and the reliability coefficient rho.[27,28] The robust S-B χ2 was used in addition to normal maximum likelihood methods because it is appropriate when the data depart from multivariate normality, and it also adjusts for a relatively small sample size. The CFI and RCFI range from zero to one and reflect the improvement in fit of a hypothesized model over a model of complete independence among the measured variables. CFI and RCFI values of 0.95 or greater are desirable, indicating that the hypothesized model reproduces 95% or more of the covariation in the data. This study was approved by the Institutional Review Board of the VA Greater Los Angeles Healthcare System (Los Angeles, California USA).
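The original analyses were run in EQS; purely as an illustration of the modeling step, the sketch below fits a one-factor CFA in Python using the open-source semopy package. The file name, item names, and four-item model are hypothetical, and semopy reports maximum-likelihood fit indices only (it does not produce the robust Satorra-Bentler χ2 used in this study).

```python
# Illustrative one-factor CFA sketch (not the study's EQS code).
# Assumes: pip install semopy pandas
import pandas as pd
import semopy

# Hypothetical data: one row per VAMC, one column per capability score.
df = pd.read_csv("ma1_capabilities.csv")  # columns c1-c4 are assumed names

# lavaan-style description: four items loading on one latent Mission Area.
desc = "MA1 =~ c1 + c2 + c3 + c4"

model = semopy.Model(desc)
model.fit(df)

print(model.inspect())             # factor loadings and residual variances
print(semopy.calc_stats(model).T)  # ML chi-square, df, CFI, RMSEA, etc.
```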
Excessive Missing Values
Due to the specialized nature of a small subset of protocol items, some items were not applicable to the majority of VAMC facilities. These items could not be used in the CFA, but are nevertheless reported here.
Results
Individual CFAs of the Six MAs
The individual CFAs by MA yielded acceptable fit statistics, with some exceptions. Some individual items did not have adequate factor loadings within their hypothesized factor (or MA) and were dropped from the analyses in order to obtain acceptable fit statistics. For both phases, supplementary correlated error residuals were identified by the Lagrange Multiplier test[29] to improve fit. The findings are reported in more detail below.
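In the lavaan-style syntax that semopy also accepts, the kind of post-hoc modification suggested by a Lagrange Multiplier test is written as a covariance between two item residuals. A minimal sketch with hypothetical item names:

```python
import semopy

# Five hypothetical items on one latent MA, plus one correlated error
# residual (c4 ~~ c5) of the kind added when two items share content,
# such as demobilization and return to readiness.
desc = """
MA2 =~ c1 + c2 + c3 + c4 + c5
c4 ~~ c5
"""
model = semopy.Model(desc)
# model.fit(df) would then be re-run and the fit statistics re-computed.
```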
Tables 1-6 report the results of the individual CFAs by MA. The results include factor loadings for the items retained in the individual CFAs for each MA, as well as the items that were dropped, due either to too many missing values or to inadequate fit within the specific MA.
Mission Area 1
For Phase I, Program Management consisted of 12 original items (Table 1, Column 1). All 12 items had significant factor loadings and were retained. Fit statistics met criteria for acceptability: ML χ2=87.30, 54 degrees of freedom (df); CFI=.96; robust S-B χ2=76.24, 54 df; RCFI=.96. All hypothesized factor loadings were significant (P≤.001; Table 1 shows the factor loadings). Note that the robust statistics were somewhat better than the non-robust statistics here, in that one wants the χ2 to be small relative to the degrees of freedom. The normalized estimate of multivariate kurtosis was 5.52, which is analogous to a z-score. Reliability statistics were excellent: Cronbach's alpha=.92 and reliability coefficient rho=.92.
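For readers reproducing these reliability statistics, both can be computed from standard formulas, as in the sketch below. The composite reliability (rho) variant shown assumes a standardized solution with uncorrelated residuals, which is only an approximation for the models that include correlated error residuals.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_facilities, k_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def composite_rho(loadings: np.ndarray) -> float:
    """Composite reliability from standardized factor loadings,
    assuming uncorrelated residuals with variance 1 - loading**2."""
    resid = 1.0 - loadings**2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + resid.sum())
```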
For Phase II, Program Management consisted of 14 original items (Table 1, Column 2). One item was dropped before the analysis due to a low response rate (n=78); this item, 1.9 (Development, Implementation, Management, and Maintenance of a Research Program Emergency Operations Plan), was not applicable to many of the VAMCs. All 13 remaining items had significant factor loadings and were retained. One correlated error residual was added between 1.6 (Incorporation of Preparedness Planning into the Facility's Comprehensive Emergency Management Program) and 1.7 (Incorporation of Continuity Planning into the Activities of the Facility's Emergency Management Program to Ensure Organizational Continuity and Resiliency of Mission Critical Functions, Processes, and Systems) based on the Lagrange Multiplier test; the correlation was .44. Fit statistics were quite acceptable: ML χ2=86.20, 64 df; CFI=.97; robust S-B χ2=77.25, 64 df; RCFI=.98. All hypothesized factor loadings were significant (P≤.001; Table 1 shows the factor loadings). Reliability statistics were excellent: Cronbach's alpha=.92 and reliability coefficient rho=.91. Overall, the fit statistics were consistently strong in both phases; in Phase II, two more items were included and the fit statistics improved.
Mission Area 2
For Phase I, Incident Management consisted of eight original items (Table 2, Column 1). Three items (2.1.1, 2.2, and 2.3) were dropped from the final analysis due to very low factor loadings. Fit statistics for the remaining items were quite good: ML χ2=14.25, 4 df; CFI=.98; robust S-B χ2=9.73, 4 df; RCFI=.98. Based on the Lagrange Multiplier test, one correlated error residual was added between 2.4 and 2.5 for fit improvement (correlation=.90); these items were relatively similar in nature because they referred to demobilization and a return to readiness. Multivariate kurtosis=14.36; Cronbach's alpha=.86 and reliability coefficient rho=.80.
For Phase II, Incident Management consisted of nine original items (Table 2, Column 2). Two correlated errors were added for fit improvement. One was between 2.2 (Public Information Management Services during an Incident) and 2.3 (Dissemination of Personnel Incident Information to Staff during an Incident [r=.53]). The other was between 2.5 (Processes and Procedures for Demobilization of Personnel and Equipment) and 2.6 (Processes and Procedures for a Return to Readiness of Staff and Equipment [r=.64]). Fit statistics for the items were quite good: ML χ2=32.97, 25 df; CFI=.98, Robust S-B χ2=29.79, 25 df; RCFI=.98. Cronbach’s alpha=.80 and the reliability coefficient rho=.75.
Overall, fit improved between the two phases: three items in Phase I had low factor loadings and were dropped from the analysis, whereas in Phase II, all eight original items were included in MA 2 (with a good fit), along with one additional item originally included in Phase I MA 4 (4.1.4/Dissemination of Personnel Incident Information to Staff During an Incident), forming a nine-item construct for this domain.
Mission Area 3
For Phase I, Safety and Security had nine original items. Two items (3.3 and 3.5) were dropped due to large amounts of missing data, and two items (3.2 and 3.4.1) were dropped due to very low factor loadings (Table 3, Column 1). ML χ2=6.55, 5 df; CFI=.98; robust S-B χ2=6.22, 5 df; RCFI=.98. Multivariate kurtosis=14.36; Cronbach's alpha=.62 and reliability coefficient rho=.67.
For Phase II, Safety and Security had 10 original items. Two items (3.3 and 3.5) were dropped due to large amounts of missing data (Table 3, Column 2). Two correlated errors were added for fit improvement: one between 3.1.2 (Processes and Procedures for Sheltering-in-Place) and 3.1.3 (Processes and Procedures for Sheltering Family of Critical Staff; r=.57), and one between 3.6 (Physical Security and Police Operations during an Emergency) and 3.2 (Perimeter Management of Access and Egress to Facility during an Incident; eg, Lock Down; r=.47). ML χ2=32.98, 18 df; CFI=.93; robust S-B χ2=30.53, 18 df; RCFI=.92. Cronbach's alpha=.71 and reliability coefficient rho=.63.
Overall, there was an improvement in the fit statistics between the two phases. In Phase I, of the seven items with sufficient data, five fit well and two were dropped due to low factor loadings. In Phase II, however, all seven of those items had a good fit, along with a new item, 3.6/Physical Security and Police Operations during an Emergency.
Mission Area 4
For Phase I, Resiliency and Continuity had too many items to load on only one factor (Table 4, Column 1). It originally had 26 items, one of which (4.3.1) had excessive missing data and was dropped from the analysis. An initial exploratory factor analysis split MA 4 into two meaningful sub-scales, which led to two well-fitting CFAs. The sub-scales reflected the two constructs within the proposed MA. The first factor had seven items (Table 4), reflected Mission Critical Systems Resiliency, and had a good fit: ML χ2=26.38, 13 df; CFI=.92; robust S-B χ2=23.00, 13 df; RCFI=.93. Multivariate kurtosis=2.62; Cronbach's alpha=.72 and reliability coefficient rho=.69. One reasonable correlated error residual was added between the residuals of items 4.2.8 (Maintaining Heating, Ventilation, and Air Conditioning Resiliency) and 4.2.1 (Development, Implementation, Management, and Maintenance of an Electrical Power System). The second factor had nine items, reflected Health Care Service System Resiliency, and had a good fit: ML χ2=44.26, 35 df; CFI=.96; robust S-B χ2=36.06, 35 df; RCFI=.99. Multivariate kurtosis=13.33; Cronbach's alpha=.75 and reliability coefficient rho=.75.
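As an illustration of this splitting step (the study itself used EQS), an exploratory factor analysis of an over-long scale can be sketched in Python with the factor_analyzer package; the file name and two-factor choice here are assumptions mirroring the Phase I result, not the study's actual procedure.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor-analyzer

df = pd.read_csv("ma4_items.csv")  # hypothetical item-score matrix

# Two-factor EFA with varimax rotation; each item is then assigned to
# the sub-scale on which it loads most strongly.
fa = FactorAnalyzer(n_factors=2, rotation="varimax")
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=df.columns).round(2))
```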
For Phase II, Resiliency and Continuity was split into three sub-scales (Table 4, Column 2). Sub-domain 4.1, Mission Critical Systems Resiliency, originally had 12 items. Four (4.1.1, 4.2.9, 4.2.10, and 4.2.12) were dropped due to low factor loadings. The remaining items had a good fit: ML χ2=26.28, 18 df; CFI=.94; robust S-B χ2=25.24, 18 df; RCFI=.95. Cronbach's alpha=.69 and reliability coefficient rho=.65. Two correlated error residuals were added: one between 4.2.2 (Management and Maintenance of Fixed and Portable Electrical Generator Resiliency) and 4.2.3 (Maintaining Fuel, Fuel Storage, and Fuel Pumps for Generators, Heating, and Vehicles Resiliency; r=.36), and one between 4.2.3 and 4.2.7 (Maintaining Medical Gases and Vacuum Resiliency; r=.23).
Sub-domain 4.2, Communications, had four items and had a very good fit: ML χ2=3.04, 2 df; CFI=.98, Robust S-B χ2=3.54, 2 df; RCFI=.97. Cronbach’s alpha=.56 and the reliability coefficient rho=.56. No supplementary correlated errors were necessary.
Sub-domain 4.3, Health Care Service System Resiliency (HCSR), had eight items with an excellent fit. One supplementary correlated error residual was added for fit improvement, between 4.4.3 (Specialty Outpatient Services) and 4.4.4 (Provision of Ambulatory Clinical Services; r=.27). ML χ2=25.73, 19 df; CFI=.96; robust S-B χ2=24.14, 19 df; RCFI=.97. Cronbach's alpha=.76 and reliability coefficient rho=.75.
Overall, the original 26 items formed two sub-scales in Phase I. In Phase II, the fit improved with three sub-scales: Mission Critical Systems Resiliency, Communications, and Health Care Service System Resiliency. In Phase I, six items had low factor loadings, whereas in Phase II, only four items were excluded from the CFA for low factor loadings.
Mission Area 5
For Phase I, Medical Surge had nine original items; three (5.2, 5.3, and 5.4.1) were dropped due to very low factor loadings (Table 5, Column 1). ML χ2=12.18, 9 df; CFI=.98; robust S-B χ2=9.53, 9 df; RCFI=.99. Multivariate kurtosis=8.88; Cronbach's alpha=.70 and reliability coefficient rho=.72.
For Phase II, Medical Surge had eight items (Table 5, Column 2). Two correlated error residuals were added: one between 5.1 (Processes and Procedures for Expansion of Staff for Response and Recovery Operations) and 5.3.2 (Designated Capability for Expanded Patient Triage, Evaluation, and Treatment during Surge; r=.16), and one between 5.2 (Management of External Volunteers and Donations during Emergencies) and 5.3.3 (Designation and Operation of Isolation Rooms; r=.17). Fit indexes were reasonable: ML χ2=24.69, 18 df; CFI=.94; robust S-B χ2=22.44, 18 df; RCFI=.95. Cronbach's alpha=.71 and reliability coefficient rho=.69.
Overall, there was an improvement from Phase I to Phase II for MA 5: of the original nine items in Phase I, six showed a good fit for this domain, whereas in Phase II, eight items fit the domain.
Mission Area 6
For Phase I, Support to External Requirements included five original items (Table 6, Column 1). One item (6.1.1) had too much missing data and was dropped. With only four items, χ2 statistics were minimal, due in large part to the low number of degrees of freedom: ML χ2=0.43, 2 df; CFI=1.00; robust S-B χ2=0.31, 2 df; RCFI=1.00. Multivariate kurtosis=6.79; Cronbach's alpha=.64 and reliability coefficient rho=.71.
For Phase II, Support to External Requirements had six original items; one item (6.1) had too much missing data and was dropped (Table 6, Column 2). Fit indexes were quite good: ML χ2=7.72, 5 df; CFI=.98; robust S-B χ2=6.76, 5 df; RCFI=.99. Cronbach's alpha=.74 and reliability coefficient rho=.75. No supplementary correlated errors were necessary.
Finally, for MA 6 in Phase II, one item was not included in the assessment and three new items were added. Given these changes, only three items were consistent between the two phases; one item with substantial missing data was dropped from both the Phase I and Phase II CFAs. For Phase I MA 6, four items were included in the analysis with a reasonably good fit; in Phase II, five items were included, again with a reasonably good fit.
Discussion
With limited resources, hospitals must comply with a variety of standards for maintaining access to care during natural and manmade disasters and emergencies. While many health care organizations have invested in the necessary personnel and critical infrastructure to perform these activities, there is no comprehensive, clear standard by which these preparedness resources can be measured routinely and consistently in advance of the need for emergency implementation. Recognizing the need for such a standard, the US Centers for Medicare and Medicaid Services (Baltimore, Maryland USA) proposed a rule on December 27, 2013 that would impose certain emergency preparedness requirements on suppliers and providers, including hospitals, that wish to participate in Medicare and Medicaid. Nevertheless, the need for a validated tool that measures effective emergency preparedness would remain even if the rule were enacted as proposed.
Health care organizations with CEMPs confront numerous uncertainties regarding the relative need for various capabilities, depending on the unique vulnerabilities, needs, and resources of their organization. However, there is very likely a subset of capabilities that would be required and prioritized by most, if not all, health care organizations in order to address an all-hazards approach to natural and manmade disasters. Accordingly, this study leveraged information from two phases of emergency capability assessments of VA hospitals and analyzed the assessed items to verify pre-defined key factors (or MAs) that were deemed by content experts to be related to readiness.
Concepts such as “health care system preparedness” and “medical surge” do not have natural units of measurement. Under these circumstances, it is commonplace to use proxy measures and assess the extent to which these measures are correlated with a (latent) construct, herein each MA. A CFA commonly is employed in such circumstances, and therefore, this study used CFA to analyze the two phases of hospital preparedness capabilities data. The findings from the CFAs of both CEMP Phase I and Phase II indicate that the items (capabilities) added in the Phase II CEMP assessments improved the fit across all six MAs. This result suggests that the Phase II modified tool is better able to assess the synergy and associations among the items in the pre-determined MAs. The findings from these analyses also indicate that for each MA, except for MA 4, the CFA confirmed one latent variable. For MA 4, the original 26 items did not load into one factor. Instead, in Phase I, two sub-scales (seven and nine items in each respective sub-scale) and in Phase II, three sub-scales (eight, four, and eight items in each respective sub-scale) were confirmed. This finding indicates that the pre-assigned MA 4 capabilities make up multiple sub-domains and future assessment protocols should consider the newly identified re-classification of MA 4 into three distinct MAs.
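Formally, the one-factor measurement model tested for each MA treats every observed capability score as a linear function of the latent domain; stated here for clarity using standard CFA notation:

```latex
x_{ij} = \lambda_j \xi_i + \varepsilon_{ij}
```

where x_ij is facility i's score on capability j, λ_j is that capability's factor loading, ξ_i is the facility's standing on the latent MA, and ε_ij is the item residual.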
It is also important to note that dropping items from the CFAs because of poor fit to the latent constructs does not imply that the dropped items are unimportant or that the items should necessarily be removed from the assessment tool. Rather, a poor fit merely indicates that the item is not highly correlated or associated with the pre-assigned MA. The item may load onto another unidentified domain. Input from content experts is needed to decide whether to retain or discard these items in future assessment protocols. Items that were dropped from the CFAs because of too many missing values should also be assessed by content experts to determine the reason(s) for the low response rate. If an item is not applicable to a facility, it should be indicated as such, and ideally, that specific item should not be included in the assessment tool for that particular facility. For example, one capability addressed the presence of an on-site fire department, although most facilities use local fire protection services and only a handful of VAMCs have a fire department on-site.
The assessments do not attempt to rank the importance of each MA; anecdotal evidence from VA emergency managers suggests that infrastructure resiliency and medical surge were perceived to be the most important areas to readiness, followed next by program structure and management. Non-VA hospitals that seek to assess their own preparedness may choose to focus on those key MAs for their initial assessments.
Limitations
The study has limitations. The two CFAs represent initial steps toward establishing a reliable and valid measure of hospital preparedness. The small number of VAMCs and the large number of indicators (69 or 71 capabilities) necessitated that the CFAs be conducted separately for each MA, without exploring other possible factor groupings or overlap. Accordingly, more research is needed to further assess the reliability and validity of the assessment tool. Further research also is needed to determine whether additional MAs may be identified and which capabilities are most critical for a uniform assessment tool. This study did not test for inter-rater reliability, as such data were not collected; it is therefore unclear how significant an issue this is for the assessments.
This study used a strictly quantitative approach to assessing the MA. Future work in this area would benefit from a mixed qualitative and quantitative approach that would provide a more complete and in-depth understanding of the challenges and issues in developing an assessment tool for measuring hospital preparedness. For example, focus groups and key informant interviews with assessors, emergency managers, and other key personnel would better inform the processes involved in developing the CEMP assessment tool.
The sample is limited to VAMCs. All VAMCs are required to be accredited by The Joint Commission, and thus meet its emergency preparedness requirements. Moreover, during disasters, VAMCs may be called upon to provide care to pediatric or other populations that the VA does not traditionally serve. Nevertheless, non-VA hospitals may differ in certain key aspects that would require modifications to certain capabilities prior to the use of the CEMP tool and assessment process; for example, a few VAMCs have their own on-site fire department. Furthermore, VAMCs are part of the largest integrated health care system in the US, and thus may have access to resources that exceed those available to many other facilities during disasters. In addition, many VAMCs are Federal Coordinating Centers, and thus may have more deployable assets than other hospitals.
The initial version of the assessment tool was developed in 2008 to reflect all US regulatory and VA requirements for an emergency management program specific to a health care system. As such, the tool predates the 2011 World Health Organization (WHO; Geneva, Switzerland) Hospital Emergency Response Checklist and the earlier 2009 WHO Hospital Preparedness Checklist for Pandemic Influenza, on which the 2011 WHO checklist was built. Nevertheless, there is substantial overlap in the content areas of the 2011 WHO checklist and the VA tool reported here; the nine key components of the WHO checklist are all included within the VA tool. In addition, many of the recommended reading materials referenced by the WHO checklist were used as foundational documents for the development of the VA tool. However, the VA tool is broader and more detailed in its coverage of recommended actions for a hospital emergency management program than the WHO checklist. The VA tool also uses a more granular, 5-point rating system (exemplary, excellent, developed, being developed, or needs attention) than the WHO checklist's 3-point system (completed, in progress, or due for review).
Conclusion
The CEMP assessment process described here represents a comprehensive, but initial, step in creating a reliable and valid measure of hospital preparedness. Based on the results of the reported analyses, a modified version of the Phase II CEMP tool that re-structures MA 4 into three sub-scales while maintaining the other five MAs would provide a solid foundation for conceptualizing and assessing hospital preparedness and resiliency.
The CEMP provides important metrics for VAMCs and should help both VA and non-VA facilities focus on improving their general preparedness and identifying where opportunities to improve readiness exist. For example, the VA uses the results of this tool to create improvement plans that serve as the basis for requests for funding and technical assistance to improve the readiness of VAMCs. While imperfect, the CEMP assessments represent the most comprehensive efforts known to the research team to date to assess the preparedness of hospitals for natural and manmade disasters. As such, the CEMP assessment process and metrics provide a comprehensive and consistent, but flexible, approach for improving health system preparedness that potentially could be adapted into a standard for hospital readiness.