
Public Health Emergency Preparedness System Evaluation Criteria and Performance Metrics: A Review of Contributions of the CDC-Funded Preparedness and Emergency Response Research Centers

Published online by Cambridge University Press:  13 November 2018

Shoukat H. Qari,* Office of Science and Public Health Practice, Office of Public Health Preparedness and Response, United States Centers for Disease Control and Prevention, Atlanta, Georgia

Hussain R. Yusuf, Division of Health Informatics and Surveillance, Office of Public Health Scientific Services, United States Centers for Disease Control and Prevention, Atlanta, Georgia

Samuel L. Groseclose, Office of Science and Public Health Practice, Office of Public Health Preparedness and Response, United States Centers for Disease Control and Prevention, Atlanta, Georgia

Mary R. Leinhos, Office of Science and Public Health Practice, Office of Public Health Preparedness and Response, United States Centers for Disease Control and Prevention, Atlanta, Georgia

Eric G. Carbone, Office of Science and Public Health Practice, Office of Public Health Preparedness and Response, United States Centers for Disease Control and Prevention, Atlanta, Georgia

*Correspondence and reprint requests to Shoukat H. Qari, Centers for Disease Control and Prevention, Office of Public Health Preparedness and Response, 1600 Clifton Rd. NE, MS K-72, Atlanta, GA 30333 (e-mail: sqari@cdc.gov).

Abstract

Objectives

The US Centers for Disease Control and Prevention (CDC)-funded Preparedness and Emergency Response Research Centers (PERRCs) conducted research from 2008 to 2015 aimed at improving the complex public health emergency preparedness and response (PHEPR) system. This paper summarizes PERRC studies that addressed the development and assessment of criteria for evaluating the PHEPR system and metrics for measuring its efficiency and effectiveness.

Methods

We reviewed 171 PERRC publications indexed in PubMed between 2009 and 2016, derived from 34 PERRC research projects. We identified the publications that addressed the development or assessment of criteria and metrics pertaining to PHEPR systems, and we describe the evaluation methods used, the tools developed, the system domains evaluated, and the metrics developed or assessed.

Results

We identified 29 publications from 12 of the 34 PERRC projects that addressed PHEPR system evaluation criteria and metrics. We grouped each study into 1 of 3 system domains, based on the metrics developed or assessed: (1) organizational characteristics (n = 9), (2) emergency response performance (n = 12), and (3) workforce capacity or capability (n = 8). These studies addressed PHEPR system activities including responses to the 2009 H1N1 pandemic and the 2011 tsunami, as well as emergency exercise performance, situational awareness, and workforce willingness to respond. Both PHEPR system process and outcome metrics were developed or assessed by PERRC studies.

Conclusions

PERRC researchers developed and evaluated a range of PHEPR system evaluation criteria and metrics that should be considered by system partners interested in assessing the efficiency and effectiveness of their activities. Nonetheless, the monitoring and measurement problem in PHEPR is far from solved. Lack of standard measures that are readily obtained or computed at local levels remains a challenge for the public health preparedness field. (Disaster Med Public Health Preparedness. 2019;13:626-638)

Type: Systematic Review

Copyright © 2018 Society for Disaster Medicine and Public Health, Inc.

Considerable financial and human resources are devoted to public health emergency preparedness and response (PHEPR) activities in the United States. Since 2002, the Public Health Emergency Preparedness (PHEP) cooperative agreement has provided more than $11 billion to US health departments to upgrade their ability to respond effectively to a range of public health threats, including infectious diseases, natural disasters, and biological, chemical, nuclear, and radiological events.1,2 Therefore, it is necessary to identify and monitor valid and reliable measures of PHEPR system performance and impact.3,4

Beginning in FY 2008, the Centers for Disease Control and Prevention (CDC) funded 9 Preparedness and Emergency Response Research Centers (PERRCs) in accredited schools of public health to conduct research to strengthen public health preparedness systems.5,6 The conceptual approach to the PERRC program goals and objectives was guided by an Institute of Medicine (IOM) letter report generated at the request of the CDC to define near-term (3-5 years) PHEPR research priorities.4 The IOM report recommended 4 priority areas for research: (1) enhancing the usefulness of PHEPR training, (2) improving PHEPR communications, (3) creating and maintaining sustainable PHEPR systems, and (4) generating criteria and metrics to measure the efficiency and effectiveness of PHEPR system activities. Following this guidance, the CDC funded the PERRCs to develop and implement research projects and pilot studies addressing these research priorities. Each PERRC conducted 3-4 investigator-initiated, interrelated, multiyear research projects, for a total of 34 major projects across all PERRCs. To provide guidance to PHEPR system partners interested in assessing the efficiency and effectiveness of their program activities, in this paper we summarize the PERRC studies that measured 1 or more PHEPR system performance criteria and describe the evaluation tools and metrics they developed and assessed.

METHODS

Published PERRC studies were identified by reviewing progress reports submitted by the PERRCs to the CDC Office of Public Health Preparedness and Response, by searching PubMed, and by scanning the references of PERRC papers identified through either source. The initial inclusion criteria were that a publication originated from a project funded by the CDC PERRC program at 1 of the 9 academic centers, that CDC funding was indicated in its acknowledgments section, and that it was reported in the progress reports submitted by the grantees to the CDC. All included PERRC publications met these 3 criteria. The publications were identified by 1 coauthor (HY), and the initial list was confirmed by 2 additional coauthors (SQ, ML). The identified PERRC publications were then independently reviewed by the authors to identify studies addressing the development or assessment of criteria (which system components to measure, eg, planning, structures, processes) and metrics (measures of system performance) for understanding PHEPR system performance. Articles that provided relevant information on the development and assessment of criteria and metrics made up the final set included in this review. The final set of publications and the summary of study findings were reviewed and agreed on by all authors. Published papers were reviewed to capture the main research objectives, the methods used to develop or assess PHEPR system evaluation criteria or metrics, the criteria or metrics developed or assessed, and the conclusions drawn from each study. Studies were grouped into 1 of 3 PHEPR system domains, based on the criteria or metrics addressed: (1) organizational characteristics, (2) emergency response performance, and (3) workforce capacity or capability. While all PERRC studies we analyzed addressed the IOM priority of generating criteria and metrics for measuring efficiency or effectiveness, some studies also addressed other IOM research priorities.

RESULTS

We identified 171 articles published by PERRCs that met the initial inclusion criteria. These publications addressed IOM-recommended priority PHEPR research areas, were indexed in PubMed from 2009 to 2016, and derived from 34 PERRC research projects. Twenty-nine (17%) of the studies addressed the development or assessment of PHEPR system performance evaluation criteria and metrics; these comprised the final set of articles included in this review. The objectives of each study, the methods or criteria used or tools developed to assess PHEPR system performance, and selected details of the criteria or metrics examined by each study are summarized in Table 1. An overview of key study findings is provided below by PHEPR system domain.

Table 1 Published Preparedness and Emergency Response Research Center (PERRC) Studies That Developed or Assessed Public Health Emergency Preparedness and Response (PHEPR) System Performance Criteria or Metrics by Domain, 2009-2016

Criteria or Metrics Related to Organizational Characteristics

Nine PERRC studies examined the functions, activities, infrastructure, or other organizational and system features of PHEPR organizations/entities and the relationships of these characteristics to preparedness capacities and capabilities. Six studies assessed the characteristics of local health department (LHD) programs, while the other 3 examined the relationship between health departments and other PHEPR system entities (eg, health care delivery and community-based organizations).

Savoia and colleagues used data from the 2005 National Association of County and City Health Officials national survey of LHDs and applied 21 indicators to assess the association between health department characteristics and emergency preparedness (Table 1).7,8 Investigators found that LHDs serving larger populations were more likely to have the staff and capacities that support emergency response and to have conducted supporting activities. For example, 60.5% of the LHDs serving the largest communities reported having a public information specialist, compared with only 3.2% of those serving the smallest communities. After controlling for the size of the population served, health departments with a board of health had higher emergency preparedness capability on 1 or more measures of emergency preparedness staffing, capacities, activities, and performance. LHDs with a board of health were more likely to have ensured review of their legal authorities, written or updated their emergency plans, conducted drills and exercises, or implemented PHEPR workforce training. The investigators suggest a potential benefit of merging small health departments into coalitions, as such coalitions, collectively, may be more likely to mobilize sufficient trained staff and capacities to ensure a successful emergency response.
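
To make the idea of an adjusted comparison concrete, the sketch below shows one conventional way to test such an association. It is illustrative only; the data, variable names, and choice of logistic regression are assumptions, not the authors' analysis.

```python
# Illustrative sketch, not the authors' analysis: adjusted association between
# an organizational characteristic and a binary preparedness indicator.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: one row per LHD.
lhds = pd.DataFrame({
    "has_info_specialist": [1, 0, 0, 1, 1, 0, 1, 0],  # preparedness indicator
    "board_of_health":     [1, 1, 0, 1, 0, 0, 1, 0],  # organizational characteristic
    "log_population":      [11.2, 10.9, 8.7, 12.3, 9.0, 8.9, 11.8, 11.4],
})

# Logistic regression controlling for (log) population served.
model = smf.logit(
    "has_info_specialist ~ board_of_health + log_population", data=lhds
).fit(disp=False)
print(model.params)  # adjusted associations on the log-odds scale
```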

Davis et al. developed and tested the Local Health Department Preparedness Capacities Assessment Survey (PCAS).9 The survey instrument was fielded with the 85 North Carolina LHDs and 247 comparison LHDs. Interviews conducted as part of instrument development and validation “emphasized that emergency preparedness is a team effort in which different health department personnel have a different working knowledge of facets of preparedness.” The authors therefore recommended that the PCAS be completed by multiple health department staff to increase the reliability of assessment findings.

Using the PCAS, Davis et al. and Bevc et al. found decreases in surveillance and investigation capacities and in legal preparedness capacities over the 3 survey years, which they attributed to multiple years of preparedness-specific funding cuts and job losses.10,11 Davis et al. also found that LHDs that had participated in a performance improvement effort reported significantly greater surveillance and investigation, workforce and volunteer, communication, legal preparedness, exercise and emergency event, and corrective action capacities than comparison LHDs that had not participated in such programs.12 The authors concluded that the PCAS can be useful for assessing how well LHDs are performing preparedness capabilities and for identifying opportunities for program improvement.

Dalnoki-Veress and colleagues developed and applied an Analytic Hierarchy Process (AHP) tool to identify potential gaps in LHD emergency preparedness planning during a radiological emergency scenario.13 The tool allowed LHD officials to characterize the perceived relative importance of, and challenges related to, elements of preparedness. The exchange of information among LHDs, the public, and state and federal partners, and ensuring the clarity of the information shared, were public health officials’ dominant concerns when responding to the radiological emergency. The investigators suggest that this internet-based tool can be used to identify and rank gaps in emergency preparedness capacity and capability at the local level across a range of emergency scenarios.
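
For readers unfamiliar with AHP, the sketch below shows the standard priority calculation that such tools build on. The preparedness elements and judgment values are hypothetical, and this is not the authors' implementation.

```python
# Standard AHP priority calculation (illustrative; not the authors' tool).
# Officials compare elements pairwise on Saaty's 1-9 scale; priorities are the
# normalized principal eigenvector of the reciprocal comparison matrix.
import numpy as np

elements = ["information exchange", "message clarity", "staff training"]

# Hypothetical judgments: A[i, j] = relative importance of element i over j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
lam_max = eigvals.real.max()                        # principal eigenvalue
weights = np.abs(eigvecs[:, eigvals.real.argmax()].real)
weights /= weights.sum()                            # normalized priorities

# Consistency ratio flags incoherent judgments (CR < 0.1 is conventional);
# 0.58 is Saaty's random index for a 3 x 3 matrix.
cr = ((lam_max - len(A)) / (len(A) - 1)) / 0.58

for name, w in zip(elements, weights):
    print(f"{name}: {w:.2f}")
print(f"consistency ratio: {cr:.3f}")
```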

Hunter et al. assessed the capabilities and functions activated by LHDs, emergency medical services, and hospitals during the response to a statewide simulated improvised explosive device emergency.14 Investigators identified 5 core capabilities commonly activated across all agencies, as well as 1 to 4 activities frequently activated by specific agencies. The majority of respondents reported that interorganizational information sharing was hampered by the complete or partial failure of communications equipment or systems. Investigators recommended further use of the novel epidemiological-concept-based exercise model for conducting exercise-based evaluation and illustrated, through its use, the need for early discussion among emergency response partners about expected and actual organizational roles, responsibilities, and resource capacities within the PHEPR system.

Agboola et al. developed an online toolkit for measuring performance in emergency response exercises, using performance measures tested in over 60 emergency preparedness exercises.15 The toolkit was pilot tested in collaboration with 10 public health agencies and 4 health care agencies from 8 states. Thirteen (93%) of the exercise planners reported that the performance measures generated from the toolkit were appropriate for creating exercise evaluation forms to collect data useful for evaluating their organization’s performance during the exercise; they preferred the toolkit to the Homeland Security Exercise and Evaluation Program exercise evaluation guides. The investigators suggest that use of this toolkit could improve the quality, standardization, and comparability of the performance data collected during simulated emergencies.

Glik and colleagues examined health department partnerships with community-based organizations (CBOs) and faith-based organizations (FBOs) and developed and evaluated a tool LHDs can use to assess the frequency and nature of their collaborative activities with CBOs/FBOs.16 The Assessment for Disaster Engagement with Partners Tool (ADEPT) evaluates collaboration across 4 domains: communication outreach and disaster coordination (eg, whether the LHD participated in emergency response at CBO/FBO locations), resource mobilization for disasters (eg, whether CBOs/FBOs provided services during a disaster), organizational capacity-building for disasters (eg, whether the LHD had trained CBO/FBO staff in emergency response), and partnership development and maintenance for disasters (eg, whether the LHD and CBOs/FBOs developed a community-wide disaster preparedness plan that defined CBO/FBO responsibilities and roles). Investigators suggest that organizations with higher ADEPT scores have more active relationships with CBOs/FBOs, which may enhance community resilience. Use of this assessment tool may allow LHDs to assess how effective these partnerships are, identify areas for improvement, and examine response performance over time.

Criteria and Metrics Related to Emergency Response Performance

Twelve PERRC studies examined public health preparedness system data related to actual event responses, providing a view of response similarities and differences across events, both within and across emergency types. The majority of these investigations focused on either infectious disease events or natural disasters. Investigators employed both quantitative and qualitative methods to examine factors associated with response performance, with the aim of improving future responses.

Hunter and colleagues evaluated the local public health system emergency response to the 2011 tsunami threat in California by surveying local public health, emergency medical services, and emergency management agencies using the CDC PHEP capabilities framework.17 The distribution of roles most likely to be performed by emergency management agencies (eg, assuming the lead incident management role to guide evacuation or community recovery) and public health agencies (surveillance and epidemiology, environmental health, and mental health/psychological support) in response to the tsunami was identified and can inform preparedness planning.

Using CDC’s 15 PHEP capabilities as a framework, Hunter and colleagues compared public health agencies’ response activities across 120 emergency incidents (event types: infectious disease, severe weather, chemical, bioagent, radiation, and mass casualty) to identify commonalities in response patterns.18,19 Their analyses revealed the ways in which the system’s partner organizations adapt to the nature of the threat, resulting in differential activation of functions and partners by incident type. For example, important response activities engaged in by public health departments during infectious disease events included public health surveillance and epidemiology, public health laboratory testing, nonpharmaceutical interventions, and information sharing. Responses varied greatly in duration (eg, a 5-hour response to a white powder incident versus the much longer response to the Deepwater Horizon oil spill). The authors suggest that using this framework to identify and compare response activities by system partner and type of hazard may be useful for predicting the resources and capabilities required to respond to future acute emergency events.

Stoto employed an evidence-based logic model of the PHEPR system to frame a case study of the public health response to the 2009 H1N1 pandemic (pH1N1), arguing that capability-based (as opposed to capacity-based) preparedness measures assessed by actual observation of a PHEP system in action are likely to be better indicators of how that system will perform in the future.20 During the pH1N1 response, it was not simply laboratory capacity and the availability of surveillance systems, but the actual performance of the laboratories and systems when called upon to detect and characterize a new pathogen and provide accurate situational awareness, that demonstrated response effectiveness. Similarly, it is not simply possession of a vaccine or medication dispensing plan, but health department collaboration with key public and private organizations in the community to successfully dispense vaccines or medications, that is critical to timely disease prevention. The investigators suggest that qualitative assessment of the capabilities of PHEP systems can be more useful than quantitative measures of static capacities when the focus is real-world performance and quality improvement.

Zhang and colleagues studied the performance of disease surveillance and notification systems in Mexico, the United States, Canada, and several other countries during the 2009 pH1N1.21 Using a “critical events” approach, they developed a timeline of important epidemiological and public health response events during the outbreak, identified critical events that affected the timing of the response, and determined the factors that influenced the timing of those critical events. Critical events considered in the pH1N1 outbreak included the timing of the detection of increases in the number of cases associated with the outbreak, characterization of cases (eg, demographics and disease severity), identification and characterization of the causative agent, and the issuance of notifications about the outbreak. The authors noted that the global pH1N1 response illustrated that enhanced laboratory-based surveillance systems and improved global notification systems both contribute to earlier detection and characterization of emerging and reemerging diseases.

In another study that assessed surveillance systems in the context of the 2009 pH1N1 outbreak, Stoto reviewed the evidence regarding differential age-specific risks associated with pH1N1 infection and surveillance systems’ ability to accurately monitor H1N1 cases over time.22 The author noted that while surveillance and response associated with the H1N1 outbreak were acceptable, case finding may have varied by age, and case reporting may have been biased by changes in health care-seeking behaviors resulting from public awareness and concern. The author recommends the use of new surveillance methods in future health emergencies, including telephone surveys of representative populations and seroprevalence surveys in well-defined population cohorts, to supplement surveillance data sources that may be influenced by health care-seeking behaviors.

To identify practices associated with more successful implementation of H1N1 vaccination in school-based and public clinics by LHDs in 2009, Klaiman and colleagues used a “positive deviance” approach based on the principle of identifying and learning from top performers.23,24 The investigators developed a process map to measure success, outlining the important steps in implementing vaccination, and obtained opinions from LHD staff peers about performance among an initial sample of LHDs. Having an established LHD relationship with local school authorities, communicating effectively with parents, and ensuring that clinic logistics allowed for an easy flow of students were practices associated with more successful school-based vaccination campaigns. For public clinic-based vaccination, the findings highlight the importance of defining priority groups to receive vaccination, communicating with the public, maintaining adequate staffing, establishing community partnerships, and maintaining flexibility in implementation.

Piltch-Loeb and colleagues introduced the use of a “critical incident registry” (CIR) for PHEPR system evaluation, to learn from actual emergency responses, and identified key characteristics a CIR must have to be feasible and useful.25 Establishing and using a CIR was considered helpful for identifying and critically analyzing rare events and the responses to them and for driving learning and quality improvement. The investigators also developed a peer assessment approach in which health departments engage peers to analyze individual critical incidents and report on areas identified for improvement within and across PHEPR systems.26 Based on the field tests and the views of the health professionals who participated in them, the authors suggest that this approach is feasible and leads to more in-depth analysis of response activities than standard methods. Peer assessment and use of a CIR could be an alternative to standard emergency response quality improvement approaches such as After Action Reports (AARs) and improvement plans (IPs).

Following the 2009 pH1N1, Stoto and colleagues conducted a workshop to identify lessons about PHEP system response performance gleaned from AARs and IPs.27 The workshop participants included state health department and LHD personnel who had prepared the AARs and IPs for CDC review. The participants discussed the various barriers encountered during the response, revealing potential lessons concerning situational awareness, resource mobilization, and communications. Many health departments identified the need to strengthen partnerships with health care providers, health care systems, pharmacies, schools, community organizations, insurers, and others to plan for and improve vaccine distribution.

To understand how lessons learned from responses to real incidents may be used to maximize knowledge management and quality improvement, Savoia et al. reviewed the national repository of AARs at the Lessons Learned Information Sharing website describing the public health system response to the pH1N1 and 3 hurricanes.28 During these responses, public health systems experienced challenges while implementing 3 PHEP capabilities: emergency public information and warning, information sharing, and emergency operations coordination. Recurring challenges were reported by multiple state and local public health agencies in response to these 2 types of incidents: difficulty in sharing information with external partners, obstacles to timely information release, confusion about roles and responsibilities, and lack of incident command system use. The findings from this study could be helpful in improving practices for measuring system-level preparedness and improvement efforts.

Criteria and Metrics Related to Workforce Capacity or Capability

Eight PERRC studies focused on factors affecting emergency response workforce capacity, capability, and performance, ranging from psychosocial attributes to occupational and personnel-related resource considerations.

Barnett et al. used the Extended Parallel Process Model (EPPM) to develop the Johns Hopkins~Public Health Infrastructure Response Survey Tool (JH~PHIRST) to assess the perceptions and attitudes of public health workers in the context of various possible public health emergency scenarios.29,30 Based on responses to open-ended questions that assessed perceptions of threat and efficacy, respondents were grouped into 4 categories. Analysis of survey responses indicated that higher reported threat and efficacy levels were significantly associated with positive willingness to respond (WTR). The JH~PHIRST tool may help public health agencies customize training programs to optimize emergency response attitudes in health departments.
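
The EPPM grouping can be expressed compactly. The sketch below is an illustration under assumed scoring; the cutoffs and field names are hypothetical, not part of the JH~PHIRST instrument.

```python
# Illustrative EPPM categorization (cutoffs and fields are hypothetical).
from dataclasses import dataclass

@dataclass
class Respondent:
    threat: float    # perceived threat score derived from survey items
    efficacy: float  # perceived efficacy score derived from survey items

def eppm_group(r: Respondent, cutoff: float = 3.0) -> str:
    """Assign one of the 4 EPPM categories by high/low threat and efficacy."""
    hi_threat = r.threat >= cutoff
    hi_efficacy = r.efficacy >= cutoff
    if hi_threat and hi_efficacy:
        return "high threat / high efficacy"  # associated with greater WTR
    if hi_threat:
        return "high threat / low efficacy"
    if hi_efficacy:
        return "low threat / high efficacy"
    return "low threat / low efficacy"

print(eppm_group(Respondent(threat=4.2, efficacy=3.8)))
```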

In 3 other studies, Barnett et al. and Errett et al. applied the original or a modified EPPM-based JH~PHIRST questionnaire to examine predictors of response willingness among public health workers, evaluate a training curriculum designed to improve risk- and efficacy-related perceptions and WTR, and examine how psychological preparedness may be associated with WTR.31-33 WTR was greater for naturally occurring emergency scenarios (weather-related and pandemic influenza) than for terrorism-related scenarios (radiological “dirty” bomb and anthrax bioterrorism). These studies indicated that LHD workers’ WTR was scenario-specific, that training may be needed to boost WTR, and that workers who perceived themselves as psychologically prepared for response reported greater WTR than others. The authors suggest that training and periodic assessment of WTR metrics could play an important role in improving PHEP program workforce capabilities.

To examine response workforce participation and performance, Savoia and colleagues developed and tested 2 questionnaires for Medical Reserve Corps (MRC) volunteers: one to assess the performance and attitudes of volunteers, and another to assess the barriers that prevented some volunteers from participating in influenza clinics.34 Factor analysis indicated that the 20 items in the Volunteer Self-Assessment Questionnaire could be grouped into 5 factors. Results concerning volunteers’ performance were consistent with observations from LHD staff working with the volunteers and from external evaluators observing the flu clinics’ activities. The investigators suggest that MRC coordinators can use the toolkit to monitor and improve their effectiveness in engaging volunteers in public health activities.
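
For readers less familiar with the method, the sketch below shows what such an exploratory factor analysis looks like in code. The data are simulated and the item/factor structure is an assumption for illustration, not the published questionnaire.

```python
# Illustrative exploratory factor analysis of a 20-item questionnaire
# (synthetic data; not the authors' Volunteer Self-Assessment Questionnaire).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_volunteers, n_items, n_factors = 200, 20, 5

# Simulate item responses driven by 5 latent factors plus noise.
latent = rng.normal(size=(n_volunteers, n_factors))
true_loadings = rng.normal(size=(n_factors, n_items))
responses = latent @ true_loadings + rng.normal(scale=0.5, size=(n_volunteers, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(responses)

# Items loading most strongly on each factor suggest that factor's theme.
for k, loadings in enumerate(fa.components_):
    top_items = np.argsort(np.abs(loadings))[::-1][:4] + 1  # 1-based item numbers
    print(f"factor {k + 1}: items {top_items}")
```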

Horney and colleagues investigated the human, fiscal, informational, physical, and organizational resource capacities of Public Health Regional Surveillance Teams (PHRSTs) colocated in 7 LHDs in North Carolina and evaluated how PHRST capacities may be associated with the services the teams provided to LHDs.35 The investigators found that the most variation was seen in human resource capacity; variation in team composition was associated with differences in the support and services that PHRSTs provided to LHDs. For example, teams that had a physician or an epidemiologist had larger budgets and provided more support and services, and teams with a pharmacist reported more partners.

Schuh et al. and Potter et al. studied how responding to an emergency affects the routine functions of health departments.36,37 The investigators developed and pilot tested the Adaptive Response Metric tool, which focuses attention on areas in need of continuity-of-operations planning, investments in technology, or exercising and training. The tool consists of 5 gradient stages reflecting the amount of change in health department functions in response to an emergency, and it includes a weighting scheme based on each function’s proportion of the overall budget, enabling computation of a weighted metric of overall impact on the health department. The metric was applied to the experience of 4 LHDs in California in the context of their response to the 2009 pH1N1 outbreak. The investigators were able to demonstrate, for example, how different functions of a health department are affected at different times and to different degrees during an emergency response. The authors suggest that the Adaptive Response Metric protocol could provide information about patterns of variable response burden among LHD subunits that would be useful for improving preparedness planning and response management.
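
The budget-weighted arithmetic is straightforward; the sketch below illustrates it with hypothetical functions, stage ratings, and budget shares (these values are assumptions, not figures from the published instrument).

```python
# Illustrative budget-weighted impact score (hypothetical data; not the
# published Adaptive Response Metric instrument).
# Each function is rated on a 5-stage gradient of change during a response
# (1 = normal operations, 5 = fully redirected) and weighted by its share of
# the overall budget.
functions = {
    # function: (stage rating 1-5, share of overall budget)
    "communicable disease control": (4, 0.25),
    "environmental health":         (2, 0.20),
    "maternal and child health":    (3, 0.30),
    "administration":               (1, 0.25),
}

# Budget shares should sum to 1 so the score stays on the 1-5 stage scale.
assert abs(sum(share for _, share in functions.values()) - 1.0) < 1e-9

overall_impact = sum(stage * share for stage, share in functions.values())
print(f"overall weighted impact: {overall_impact:.2f} (scale 1-5)")
```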

DISCUSSION

The development and use of valid and reliable evaluation criteria reflecting PHEPR activities and performance are important to guide and enable measurement of public health preparedness, response, and recovery capability and capacity. Metrics associated with these criteria can then be used to improve performance and demonstrate the impact of programs or interventions. Valid and reliable criteria and metrics can be applied to PHEPR systems to assess the impact of resource allocation and to identify where additional investment may be needed to improve response capability. Because most public health emergencies are unpredictable, variable, and rare, there are limited opportunities to assess and accurately compare response performance.38 Furthermore, the tension between the concepts underlying metrics, which rely on a degree of standardization, and emergencies, which are by nature out-of-the-ordinary events, adds to the complexity of developing and validating preparedness and response metrics.3,4 The fact that PHEPR is conducted within a complex system with many intervening factors makes it difficult to measure.39-41

In spite of the challenging nature of the task, our review found that the PERRCs made important contributions toward filling some knowledge gaps in the development and validation of PHEPR system evaluation criteria and performance metrics. For example, the Adaptive Response Metric tool for measuring the unequal impact of incident response on health department functions or subunits,36,37 the combined use of a critical incident registry and peer assessment for after-action learning,25,26 and the Preparedness Capacities Assessment Survey for assessing health department capacities and performance9 illustrate promising approaches to PHEPR system performance monitoring for program improvement. Although the PERRC program commenced in 2008, some of the PERRCs’ evaluation criteria and performance metrics align (post hoc) with the framework of PHEP capabilities published in 2011 and therefore may support monitoring and measurement of PHEP program capabilities.1,17-19,28

Some PERRC studies addressed standardization of metrics in the context of unique events. The Piltch-Loeb et al., Stoto, and Savoia et al. studies applied peer assessment strategies, root cause analysis, and the critical incident registry to look across different emergency event types and identify key factors and themes that inform the systematic collection of qualitative or quantitative data, supporting assessment of past efforts and planning for future responses.20,25,26,28 This set of approaches can be used together to examine system preparedness and response performance across events.

A number of studies developed and assessed preparedness and response metrics in the context of the 2009 pH1N1 in the United States, which occurred while the PERRCs were in the midst of their research projects.21-24,27,36,42 This opportunity to collect information during and immediately following actual emergency responses, facilitated by the active PERRC infrastructure, enabled practitioners and investigators to learn from a real-world event in a timely fashion, when certain questions can be answered more accurately.

The goals of the IOM recommendation to develop and validate PHEPR program criteria and performance metrics may not have been fully met by this set of PERRC studies. However, PHEPR system evaluation criteria were defined, metrics were developed and tested, and system performance was assessed using a range of methods across a variety of emergency scenarios, both real and simulated. Gaps remaining to be addressed include identifying appropriate evidence-based PHEPR criteria and metrics for assessing system effectiveness in addressing the social and behavioral impacts of events and for measuring the public’s emergency response expectations, experience, and satisfaction. The spectrum of topics addressed by the studies we reviewed reflects the complex and wide scope of capacities and capabilities encompassed within the PHEPR system, such as surveillance, public communication, countermeasure delivery, volunteer participation, and community engagement. Methods used for assessing the merits of these measures included identification of domain structures among questionnaire items through principal component and factor analyses, analysis of correlations between metrics, and comparison of responses from exercise participants and observers.

The lack of standard measures, especially measures that can be readily obtained or computed at local levels, remains a major issue in the preparedness field. Rather than filling this gap with metrics whose underlying data are difficult to obtain and cross-validate, the research community needs a specific agenda setting priorities for metrics development and standardization. While the IOM report recommended generating effective criteria and metrics, it offered no guidance beyond this broad call, highlighting the need for a conceptual framework to drive the identification or development of useful measures of both processes and outcomes through public health systems research.

Our review of published studies from the PERRC program describes relevant contributions toward the development and assessment of PHEPR system metrics applicable to specific contexts and levels of analysis. Most of these metrics have seen limited dissemination, use, and evaluation outside the research study setting. However, a few of the tools mentioned in this article are available online (Table 1), including the Emergency Preparedness Exercise Evaluation Toolkit, the Evaluation Toolkit for the Deployment of MRC Units during Flu Clinics and Other Public Health Activities, and the Assessment for Disaster Engagement with Partners Toolkit.15,16,34 A future research and evaluation program for PHEPR system measurement must be informed by a sound conceptual model of how factors in this complex system interact to influence the course of response and subsequent recovery. Lacking such a model, the field may continue to struggle to find consensus on what is most important to measure and, in turn, may continue to produce metrics that are difficult to apply and validate across different contexts.

Acknowledgements

The authors wish to acknowledge and thank the Preparedness and Emergency Response Research Center investigators for their work related to the development and assessment of public health emergency preparedness and response system criteria and metrics, and Todd M. Graham, BBA, MPH for his contributions during the development of this paper.

Disclaimer

The contents, findings, and views contained in this article are those of the authors and do not necessarily represent the official programs and policies of the U.S. Centers for Disease Control and Prevention (CDC), Agency for Toxic Substances and Disease Registry (ATSDR), or the U.S. Department of Health and Human Services. Use of trade names and commercial sources is for identification only and does not imply endorsement by the Centers for Disease Control and Prevention, the Public Health Service, or the U.S. Department of Health and Human Services.

REFERENCES

1. US Centers for Disease Control and Prevention. Public Health Emergency Preparedness (PHEP) Cooperative Agreement. https://www.cdc.gov/phpr/readiness/phep.htm. Accessed August 9, 2018.
2. US Centers for Disease Control and Prevention. Public Health Preparedness Capabilities: National Standards for State and Local Planning. http://www.cdc.gov/phpr/capabilities/dslr_capabilities_july.pdf. Published March 2011. Accessed August 9, 2018.
3. Nelson C, Lurie N, Wasserman J. Assessing public health emergency preparedness: concepts, tools, and challenges. Annu Rev Public Health. 2007;28:1-18.
4. Altevogt BM, Pope AM, Hill MN, et al. Research Priorities in Emergency Preparedness and Response for Public Health Systems: A Letter Report. Washington, DC: National Academies Press; 2008.
5. Qari SH, Abramson DM, Kushma JA, et al. PERRCs: early returns on investment in evidence-based public health systems research. Public Health Rep. 2014;129(Suppl 4):1-4.
6. Leinhos M, Qari SH, Williams-Johnson M. Preparedness and emergency response research centers: using a public health systems approach to improve all-hazards preparedness and response. Public Health Rep. 2014;129(Suppl 4):8-18.
7. Savoia E, Rodday AM, Stoto MA. Public health emergency preparedness at the local level: results of a national survey. Health Serv Res. 2009;44(5 Pt 2):1909-1924.
8. Leep CJ. 2005 national profile of local health departments. J Public Health Manag Pract. 2006;12(5):496-498.
9. Davis MV, Mays GP, Bellamy J, et al. Improving public health preparedness capacity measurement: development of the local health department preparedness capacities assessment survey. Disaster Med Public Health Prep. 2013;7(6):578-584. doi:10.1017/dmp.2013.108
10. Davis MV, Bevc CA, Schenck AP. Declining trends in local health department preparedness capacities. Am J Public Health. 2014;104(11):2233-2238.
11. Bevc CA, Davis MV, Schenck AP. Temporal trends in local public health preparedness capacity. Front Public Health Serv Syst Res. 2014;3(3).
12. Davis MV, Bevc CA, Schenck AP. Effects of performance improvement programs on preparedness capacities. Public Health Rep. 2014;129(Suppl 4):19-27.
13. Dalnoki-Veress F, McKallagat C, Klebesadal A. Local health department planning for a radiological emergency: an application of the AHP2 tool to emergency preparedness prioritization. Public Health Rep. 2014;129(Suppl 4):136-144.
14. Hunter JC, Yang JE, Petrie M, et al. Integrating a framework for conducting public health systems research into statewide operations-based exercises to improve emergency preparedness. BMC Public Health. 2012;12:680.
15. Agboola F, Bernard D, Savoia E, et al. Development of an online toolkit for measuring performance in health emergency response exercises. Prehosp Disaster Med. 2015;30(5):503-508.
16. Glik DC, Eisenman DP, Donatello I, et al. Reliability and validity of the Assessment for Disaster Engagement with Partners Tool (ADEPT) for local health departments. Public Health Rep. 2014;129(Suppl 4):77-86.
17. Hunter JC, Crawley AW, Petrie M, et al. Local public health system response to the tsunami threat in coastal California following the Tōhoku Earthquake. PLoS Curr. 2012;4:e4f7f57285b804. doi:10.1371/4f7f57285b804
18. Hunter JC, Yang JE, Crawley AW, et al. Public health response systems in-action: learning from local health departments’ experiences with acute and emergency incidents. PLoS One. 2013;8(11):e79457. doi:10.1371/journal.pone.0079457
19. Hunter MD, Hunter JC, Yang JE, et al. Public health system response to extreme weather events. J Public Health Manag Pract. 2016;22(1):E1-E10. doi:10.1097/PHH.0000000000000204
20. Stoto M. Measuring and assessing public health emergency preparedness. J Public Health Manag Pract. 2013;19(Suppl 2):S16-S21.
21. Zhang Y, Lopez-Gatell H, Alpuche-Aranda CM, et al. Did advances in global surveillance and notification systems make a difference in the 2009 H1N1 pandemic? A retrospective analysis. PLoS One. 2013;8(4):e59893.
22. Stoto MA. The effectiveness of U.S. public health surveillance systems for situational awareness during the 2009 H1N1 pandemic: a retrospective analysis. PLoS One. 2012;7(8):e40984.
23. Klaiman T, O’Connell K, Stoto M. Local health department public vaccination clinic success during 2009 pH1N1. J Public Health Manag Pract. 2013;19(4):E20-E26.
24. Klaiman T, O’Connell K, Stoto MA. Learning from successful school-based vaccination clinics during 2009 pH1N1. J Sch Health. 2014;84:63-69.
25. Piltch-Loeb R, Kraemer JD, Nelson C, et al. A public health emergency preparedness critical incident registry. Biosecur Bioterror. 2014;12(3):132-143. doi:10.1089/bsp.2014.0007
26. Piltch-Loeb RN, Nelson CD, Kraemer JD, et al. A peer assessment approach for learning from public health emergencies. Public Health Rep. 2014;129(Suppl 4):28-34.
27. Stoto MA, Nelson C, Higdon MA, et al. Lessons about the state and local public health system response to the 2009 H1N1 pandemic: a workshop summary. J Public Health Manag Pract. 2013;19(5):428-435. doi:10.1097/PHH.0b013e3182751d3e
28. Savoia E, Agboola F, Biddinger PD. Use of after action reports (AARs) to promote organizational and systems learning in emergency preparedness. Int J Environ Res Public Health. 2012;9(8):2949-2963. doi:10.3390/ijerph9082949
29. Barnett DJ, Balicer RD, Thompson CB, et al. Assessment of local public health workers’ willingness to respond to pandemic influenza through application of the extended parallel process model. PLoS One. 2009;4(7):e6365.
30. Witte K. Putting the fear back into fear appeals: the extended parallel process model. Commun Monogr. 1992;59:329-349.
31. Barnett DJ, Thompson CB, Errett NA, et al. Determinants of emergency response willingness in the local public health workforce by jurisdictional and scenario patterns: a cross-sectional survey. BMC Public Health. 2012;12:164.
32. Barnett DJ, Thompson CB, Semon NL, et al. EPPM and willingness to respond: the role of risk and efficacy communication in strengthening public health emergency response systems. Health Commun. 2014;29(6):598-609.
33. Errett NA, Barnett DJ, Thompson CB, et al. Assessment of psychological preparedness and emergency response willingness of local public health department and hospital workers. Int J Emerg Ment Health. 2012;14(2):125-133.
34. Savoia E, Massin-Short S, Higdon MA, et al. A toolkit to assess Medical Reserve Corps units’ performance. Disaster Med Public Health Prep. 2010;4(3):213-219.
35. Horney JA, Markiewicz M, Meyer AM. Regional public health preparedness teams in North Carolina: an analysis of their structural capacity and impact on services provided. Am J Disaster Med. 2011;6(2):107-117.
36. Schuh RG, Eichelberger TR, Stebbins S, et al. Developing a measure of local agency adaptation to emergencies: a metric. Eval Program Plann. 2012;35(4):473-480.
37. Potter MA, Schuh RG, Pomer B, et al. The adaptive response metric: toward an all-hazards tool for planning, decision support, and after-action analytics. J Public Health Manag Pract. 2013;19(Suppl 2):S49-S54. doi:10.1097/PHH.0b013e318296214c
38. Asch SM, Stoto M, Mendes M, et al. A review of instruments assessing public health preparedness. Public Health Rep. 2005;120(5):532-542.
39. Kirschenbaum A. Measuring the effectiveness of disaster management organizations. Int J Mass Emerg Disasters. 2004;22(1):75-102.
40. Links JM, Schwartz BS, Lin S, et al. COPEWELL: a conceptual framework and system dynamics model for predicting community functioning and resilience. Disaster Med Public Health Prep. 2017:1-11. doi:10.1017/dmp.2017.39
41. Savoia E, Agboola F, Biddinger PD. A conceptual framework to measure systems’ performance during emergency preparedness exercises. Int J Environ Res Public Health. 2014;11(9):9712-9722. doi:10.3390/ijerph110909712
42. Jhung MA, Swerdlow D, Olsen SJ, et al. Epidemiology of 2009 pandemic influenza A (H1N1) in the United States. Clin Infect Dis. 2011;52(Suppl 1):S13-S26.