Introduction
Assessing hospital disaster preparedness is fraught with difficulty. No clear method of assessment and preparation exists because of the limitations of disaster research, which usually consists of retrospective case studies reported in the literature.1 Disaster research can provide an opportunity to examine how different health system components would combine to respond to a mass-casualty event (MCE). Assessing the quality of institutional performance in preparedness exercises requires data collection methods that are valid and reliable, to allow meaningful comparisons both across jurisdictions and within institutions. Several studies have been performed to assess the validity of standardized methods of measuring hospital preparedness,2-4 but the current state of the literature does not validate one method of assessing disaster preparedness over another.
A primary tenet of hospital disaster preparedness is surge capacity.5,6 Surge capacity is listed as one of the medical preparedness goals of the Pandemic and All-Hazards Preparedness Act,7 is a priority of the National Bioterrorism Hospital Preparedness Program in the United States,8 and is defined by the American College of Emergency Physicians as “a measurable representation of ability to manage a sudden influx of patients [depending] on a well functioning incident management system and the variables of space, supplies, staff, and special considerations.”9 Because surge capacity will be primarily self-reported by the receiving hospitals, it is important that these hospitals have accurate surge-capacity assessments for organization and planning in preparation for an MCE.
This study compared surge capacity data previously self-reported by each hospital's contact person with data from a subsequent on-site survey conducted by the authors, in order to assess the accuracy of the self-reported data.
Methods
Self-Reported Data
From May through August of 2009, well before the June 11 to July 11, 2010 FIFA World Cup tournament in Cape Town, South Africa, survey questions were sent to nine hospitals in Cape Town. The survey questions were part of a long-distance tabletop drill (LDTT) to assess disaster preparedness in potential receiving hospitals. The drill lasted a total of 10 weeks. Following the data collection, each hospital received a report card assessing measures of disaster preparedness. The data collected included self-reported surge capacity in the medical intensive care unit (MICU), the neonatal intensive care unit (NICU), the medical/surgical wards (floor), and the respiratory intensive care unit (RICU). The details of this study can be found in the methods section of the authors’ prior publication.10 When reporting surge capacity, respondents were expected to give a numerical estimate of how many total patients their respective hospitals could treat in each unit during a specific incident. Estimations of surge capacity were based on how an average physician or disaster contact person would prepare for an MCE, regardless of country or medical system.
On-site Survey
Six months following the 2010 World Cup, two authors, an emergency physician and a hospital administrator (JA and BP), traveled to eight of the nine hospitals that participated in the LDTT to conduct an on-site survey for the present study. Both authors had been involved with the earlier Internet-based drill in which the self-reported data were collected, and both were trained in data collection methods prior to their arrival in South Africa. All interviews were conducted in English by the surveying authors. Potential cultural biases were addressed in that one of the authors (JA) is a native of Johannesburg, South Africa and is involved in both disaster medicine and emergency medicine in the US and South Africa. All nine hospitals from the prior study were invited to participate in the follow-up study; one hospital declined, citing time constraints, and no other hospitals were excluded. The on-site survey at the participating hospitals consisted of an assessment by JA and BP of the surge capacity of the selected units, including the MICU, NICU, floor, and RICU. The authors received assistance from hospital contact persons, but the final assessment of surge capacity rested with the on-site team.
Hospital and Contact Person Participation
All of the contact persons interviewed by the survey team were responsible for their specific hospital's disaster plan and data collection. Most of these individuals worked in their hospital's emergency department, and some were hospital managers. All participants had knowledge of their respective hospital's disaster preparedness capabilities, and they were readily available to help the on-site team obtain the information needed to prepare estimates of surge capacity in the designated units.
Data Collection
All data collection was performed on-site at the participating hospitals during the survey. Full access was given by the individual hospitals to all of the disaster preparedness facilities and equipment.
Statistical Analysis
Statistical analysis consisted of calculating the confidence interval of a proportion using the modified Wald method. Confidence intervals were computed with the GraphPad QuickCalcs Web site: http://graphpad.com/quickcalcs/confInterval2/ (accessed August 2011).
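For readers without access to the GraphPad calculator, the modified Wald (Agresti-Coull) interval can be reproduced directly; the short Python sketch below is a minimal illustration of the calculation only. The function name and the example counts (a hypothetical six of seven hospitals underreporting a given unit) are illustrative assumptions and are not drawn from the study data or from GraphPad's implementation.

    import math

    def modified_wald_ci(successes, n, z=1.96):
        # Modified Wald (Agresti-Coull) confidence interval for a proportion;
        # z = 1.96 corresponds to an approximate 95% confidence level.
        p_adj = (successes + z**2 / 2.0) / (n + z**2)
        half_width = z * math.sqrt(p_adj * (1.0 - p_adj) / (n + z**2))
        return max(0.0, p_adj - half_width), min(1.0, p_adj + half_width)

    # Hypothetical example: 6 of 7 hospitals underreporting a given unit.
    print(modified_wald_ci(6, 7))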
Results
The surge capacities of the RICU, MICU, NICU, and adult medical/surgical floor are shown in Figures 1-4. Each figure compares the previously collected surge capacity data with the data collected during the on-site survey. Of the participating hospitals, one did not self-report data on RICU beds or adult medical/surgical floor beds and was excluded from the analysis of those units; its remaining units were included in the analysis. Figure 5 shows underreporting of surge capabilities by the hospital contact persons, by medical unit, in comparison with the data reported during the on-site validation survey. Underreporting ranged from 71% to 100%.
Discussion
The data showed that the majority of the individual hospitals' contact persons in the original LDTT underreported surge capacity compared with the follow-up, on-site survey (Figures 1-5). This trend held across different hospital units, including the RICU, MICU, NICU, and medical/surgical floor beds. However, the discrepancy appeared to be greater for critical-care beds than for medical/surgical “floor” beds (Figure 4). In some settings there were wide disparities between the numbers self-reported by the contact person and those recorded by the on-site inspection team. These results may have implications for future disaster research in that self-reported hospital surge capacity may be underreported, and the degree of underreporting may be unit-dependent.
There may be multiple reasons for the extensive underreporting of surge capacity observed in this study. First, in their surge capacity estimations, the contact persons may not have considered utilizing “equivalent” beds from other areas of the hospital that could serve as critical care beds in an MCE; for example, post-anesthesia care unit beds could be used for critically ill patients when the critical care units have reached full capacity. This option may not have been apparent to the individual hospitals' contact persons during the original LDTT when reporting surge capacity, and it is likely that other hospitals may similarly underestimate their critical care surge capacity. The data suggest that this effect on reporting may be greater in critical care settings such as the RICU or the MICU than for general “floor” beds. Additionally, as there were varying levels of disparity in almost all the units assessed, it is possible that not all of the hospitals' contact persons were similarly trained. The consistent direction of the underreporting suggests that other hospital disaster contact persons also may be underreporting surge capacity capabilities in anticipation of MCEs; a lack of standardized training may account for many of these disparities and may be a topic for future research.
As the on-site survey team consisted of a physician and an administrator, the team was well suited to estimating hospital resources. In addition, prior to their arrival in Cape Town the team members underwent specific training in assessing surge capacity. That the on-site survey team identified additional surge-capacity beds beyond those reported by contact persons in the original LDTT estimates may reflect a lack of standardized training among the contact persons, or it may be due to the artificiality of the drill itself.
It is unlikely that cultural or language biases were the source of these disparities, as one of the inspecting authors is a native of South Africa with experience in emergency medicine and disaster medicine. These disparities therefore may also apply to hospitals in other countries and continents, and may represent a systemic problem in preparation and planning for future large-scale gatherings or MCEs.
Additionally, the discrepancies between the survey estimates and the contact persons' estimates during the LDTT may have been influenced by the impending 2010 FIFA World Cup. It is not unreasonable to suggest that the contact persons made conservative estimates of their disaster preparedness prior to the World Cup in order to maintain heightened vigilance in the setting of a potential MCE. These conservative estimates may have contributed further to underreporting of the individual hospitals' surge capacity.
Limitations
This study has several limitations. The self-reported data from the LDTT were obtained approximately 18 months prior to the on-site survey. As there was no way to access official records of hospital bed capacity, it is not known whether the hospitals' units were expanded or reduced during this interval, which could have introduced discrepancies between the data sets. It was also found that having hospitals report surge capacity as raw numbers was far more difficult than had been expected.
Strict definitions of what constitutes a surge capacity bed were not given, as it is unrealistic to expect that such detailed instructions would be given in an actual disaster, when hospitals may be forced to function autonomously with limited resources and report surge capacity to a central command center. This lack of definition created difficulty in both reporting and analyzing the data. Additionally, it is difficult to make generalizable statements about the effectiveness of a disaster preparedness drill based on a few selected variables; planning involves many factors, including communication among many different agencies, the experience of the planners and organizers in the command center, the infrastructure within and among hospitals, and other areas beyond surge capacity. Even if one could argue that surge capacity is a central element of an MCE, this one aspect of disaster preparedness encompasses many domains, such as equipment, personnel, and facilities.6 Thus, determining which surge capacity measure is most appropriate for assessing disaster preparedness is another central question to be addressed in future research.
Conclusions
The data showed that each hospital underreported its surge capacity, with wide disparities compared with the on-site survey conducted by trained assessors. These findings were consistent across multiple units in various hospitals in Cape Town, South Africa. These findings have potential implications for future disaster research and preparedness assessments.