Local health departments (LHDs) are essential to emergency preparedness and response activities. They have statutory authority to perform key functions including community health assessments and epidemiologic investigations, enforcement of health laws and regulations, and coordination of the actions of the agencies in their jurisdictions that comprise the local public health system.Reference Scutchfield, Knight, Kelly, Bhandari and Vasilescu 1 However, preparedness also involves specialized functions in incident command, countermeasures and mitigation, mass health care delivery, and management of essential health care supply chains.Reference Nelson, Lurie, Wasserman and Zakowski 2 Moreover, effective emergency preparedness and response may require the performance of routine public health activities under unusual time pressure and resource constraints. Consequently, the ability of an LHD to perform routine public health activities under usual conditions may not predict its capacity for performing emergency preparedness and response activities.Reference Asch, Stoto, Mendes and Valdez 3
Federal, state, and local public health agencies have made substantial investments to improve the preparedness capacities and capabilities of state and local public health agencies.Reference Levi, Vitner and Segal 4 A lack of valid and reliable data collection instruments, however, has made it difficult to determine whether the investments have improved state and local public health capacities to effectively prevent, detect, or respond to public health emergencies. Although a number of instruments collect self-reported data on the preparedness of state and local public agencies, few of these instruments have been subjected to formal validity and reliability testing.Reference Asch, Stoto, Mendes and Valdez 3
Existing conceptual models of public health system performance, which are grounded in organizational sociology and industrial organization economics, stress the importance of structural characteristics of public health agencies and their relationships with other organizations in the public health system.Reference Handler, Issel and Turnock 5 – Reference Roper and Mays 7 These structural characteristics, including statutory authority, financial and human resources, governance structures, and interorganizational relationships with both governmental and private organizations that have relevant resources and expertise, determine the capacity of the system to respond to public health threats.
Valid, reliable measures of LHD preparedness can be used to determine performance levels and identify performance gaps and potential sources of performance variation. Addressing these gaps and reducing unwarranted variation are critical to assuring an appropriate public health response to a variety of public health threats, such as pandemic influenza.Reference Schuh, Tony Eichelberger and Stebbins 8 To date, one critical roadblock to developing valid, reliable instruments to measure preparedness capacities has been the lack of consensus on a definition of LHD preparedness.Reference Nelson, Lurie, Wasserman and Zakowski 2 Further, using a structure–process–outcome framework, measurement has been limited to structure and process, with little measurement of outcomes.Reference Nelson, Lurie, Wasserman and Zakowski 2 , Reference Donabedian 9 Finally, although some preparedness measures have examined nationwide performance, most measurements and assessments have been made at the state level only.Reference Levi, Vitner and Segal 4
When viewed through the classic structure–process–outcome framework, conceptual challenges for measuring public health emergency preparedness among LHDs include a lack of widely accepted standards for preparedness and a weak evidence base linking structures and processes to outcomes. To address these limitations in measuring LHD preparedness capacities, the North Carolina Preparedness and Emergency Response Research Center developed and tested the Local Health Department Preparedness Capacities Assessment Survey (PCAS) from 2008 to 2010.
Methods
Instrument Development
The PCAS drew on elements from several instruments that offered reasonable clarity in measurement, a balance between structural and process measures, and support from prior, although limited, validity and/or reliability testing. The instruments used in survey development included the following:
• The Public Health Preparedness and Response Capacity Inventory developed by the Centers for Disease Control and Prevention (CDC).Reference Costich and Scutchfield 10 This instrument contains 79 questions and approximately 700 subquestions that measure capacity in 6 preparedness domains, including planning and assessment, laboratory capacity, general communications and information technology, risk communication and dissemination, and education and training.Reference Lovelace, Bibeau, Gansneder, Hernandez and Cline 11
• The National Public Health Performance Standards Program, Local Instrument (version 2.0) developed through a partnership between the CDC and other national public health organizations. This instrument contains 27 items in the performance standard devoted to emergency preparedness, investigation, and response.Reference Scutchfield, Knight, Kelly, Bhandari and Vasilescu 1 , Reference Beaulieu and Scutchfield 12 - Reference Bhandari, Scutchfield, Charnigo, Riddell and Mays 15
• The CDC's Public Health Preparedness Cooperative Agreement Performance Measures, whose reporting guidance includes 6 items related to detection and reporting, communication and control, and after-action improvement.Reference Savoia, Testa and Biddinger 16
• The Connectivity in Public Health Preparedness Measurement Tool developed to measure connectivity among preparedness-related organizations and personnel.Reference Dorn, Savoia, Testa, Stoto and Marcus 17 This instrument contains 28 items on information exchange, communication, and interaction at system, organizational, and interpersonal levels.Reference Dorn, Savoia, Testa, Stoto and Marcus 17 - Reference Hall, Moore and Shiell 19
We used a modified 4-cycle Delphi panel process to select items from each instrument and assemble a new instrument with content and face validity.Reference Keeney, Hasson and McKenna 20 This strategy gains consensus among a panel of experts through multiple rounds of questionnaires and feedback. The 4 panel members had expertise in public health preparedness research and practice, and the Delphi panel process was conducted through email and a series of conference calls. From this process, we developed a public health emergency preparedness instrument that included 116 items.
The items were organized into a web-based, self-administered instrument for pilot testing with diverse LHDs. For pilot testing, each LHD was asked to complete 1 survey with input from key administrative and preparedness staff. Following survey completion, cognitive interviews were conducted with the staff who completed the survey to (1) explore the process that individuals and LHDs used to complete the instrument, (2) obtain feedback on instrument content and structure, and (3) identify items on which individuals within an agency disagreed.
The research team revised the instrument based on pilot testing and cognitive interviews, as well as a study on health department response to the H1N1 epidemic.Reference Mays, Wayne and Davis 21 The final instrument contains 58 questions with 211 subquestions. The questions ask LHDs to report whether they have a specific capacity, with related subquestions to determine whether they have specific elements associated with the particular capacity. See Table 1 for a summary of instrument domains and a description of the capacities measured.
In 2010, the research team fielded the final instrument (referred to here as survey wave 1) with the 85 North Carolina (NC) LHDs and 247 comparison health departments identified through propensity score matching, as part of a study to examine differences between a state whose LHDs were exposed to an accreditation program (NC) and states without an accreditation program. Approximately 1900 local public health agencies were eligible for possible matching with NC agencies. Matching was based on data from the 2008 National Association of County and City Health Officials National Profile of Local Health Departments survey, which contains measures of the organizational, financial, and operational characteristics of LHDs and their service areas. Propensity scores were estimated from a logistic regression equation that modeled the likelihood of exposure as a function of 14 public health agency characteristics and community or system characteristics. This model's empirical specification reflected the approach used in previous studies of public health system performance, including controls for public health agency staffing levels, scope of services delivered, annual agency expenditures per capita, population size served, socioeconomic characteristics of the community, and other health resources within the community.Reference Mays, Halverson, Baker, Stevens and Vann 14 , Reference Mays, McHugh and Shim 22 The nearest neighbor method was used to pair each NC LHD with a comparison agency from another state (1:1, best match), with random selection used to choose among comparison LHDs having the same propensity score.Reference Austin, Grootendorst and Anderson 23 , Reference McShane, Midthune, Dorgan, Freedman and Carroll 24 To ensure an adequate number of responses, additional comparison LHDs were included in the sample.
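To make the matching procedure concrete, the following Python sketch illustrates propensity score estimation by logistic regression followed by 1:1 nearest neighbor matching without replacement. It is an illustration only, not the study's analysis code: the data are synthetic, the variable names are invented, and only a few of the 14 covariates are mocked.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the 2008 NACCHO Profile extract; the real model used
# 14 agency and community characteristics, only a few of which are mocked here.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ftes": rng.lognormal(4, 1, n),              # staffing level (hypothetical name)
    "exp_per_capita": rng.lognormal(4, 0.5, n),  # annual expenditures per capita
    "pop_served": rng.lognormal(11, 1, n),       # population size served
})
df["exposed"] = 0
df.loc[:84, "exposed"] = 1  # 85 "NC" agencies, for illustration

covariates = ["ftes", "exp_per_capita", "pop_served"]

# Estimate propensity scores: the predicted probability of exposure from a
# logistic regression on agency and community characteristics.
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["exposed"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# 1:1 nearest neighbor matching without replacement; the study broke ties
# among comparison agencies with equal propensity scores by random selection.
pool = df[df["exposed"] == 0].copy()
pairs = []
for i, score in df.loc[df["exposed"] == 1, "pscore"].items():
    j = (pool["pscore"] - score).abs().idxmin()  # closest comparison agency
    pairs.append((i, j))
    pool = pool.drop(j)
```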
The instrument was programmed for web-based administration, with an option for paper completion. Letters of invitation containing information about the study's purpose and instructions for completing the survey were sent to the director and emergency preparedness coordinators of each selected LHD. Postcard and telephone reminders were sent to nonresponding LHDs to achieve a targeted response rate of 80%.
Analysis
Pilot test data were analyzed using exploratory factor analysis to examine the underlying dimensions of preparedness reflected in the specific preparedness activities measured. A varimax (orthogonal) rotation method was used to obtain the most parsimonious and robust solution; however, we tested the sensitivity of results to this assumption by recomputing factor loadings using promax (oblique) rotation. Guided by this analysis, we then constructed a principal factor composite variable for each underlying preparedness domain (factor) by computing the unweighted mean and standard deviation of the subset of activity measures that were correlated in each dimension. Reliability measures included kappa statistics, with associated z scores and probabilities, calculated to examine rater agreement within items, domains, and agencies.Reference Viera and Garrett 25 , Reference Harris, Beatty and Barbero 26 With the larger sample of survey wave 1 data, we assessed the internal consistency of each domain through intraclass correlation coefficients and inter-rater reliability.
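A minimal sketch of this analytic sequence, using the Python factor_analyzer package, is shown below. The data, item names, factor count, and the 0.40 loading cutoff are all illustrative assumptions, not the study's actual specification; the real input was the respondent-by-item matrix of pilot test responses.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Synthetic respondent-by-item matrix of 0/1 preparedness activity indicators.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))
items = pd.DataFrame(
    (latent @ rng.normal(size=(2, 12)) + rng.normal(size=(100, 12)) > 0).astype(float),
    columns=[f"item_{k}" for k in range(12)],
)

# Exploratory factor analysis with orthogonal (varimax) rotation.
fa = FactorAnalyzer(n_factors=2, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)

# Sensitivity check: recompute loadings under oblique (promax) rotation and
# confirm that the same items load on the same factors.
fa_oblique = FactorAnalyzer(n_factors=2, rotation="promax")
fa_oblique.fit(items)

# Composite for one domain: unweighted mean (and SD) of the items that load
# on a given factor; the 0.40 cutoff is an illustrative convention.
domain_items = loadings.index[loadings[0].abs() >= 0.40]
composite = items[domain_items].mean(axis=1)
print(composite.mean(), composite.std())
```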
Results
Eleven public health agencies in 3 states (Missouri, Kentucky, and Tennessee) completed the pilot test survey and participated in cognitive interviews. Survey respondents and cognitive interview participants included 12 emergency preparedness personnel, 6 health directors, 4 epidemiologists, and 6 individuals in other job classifications. Themes from the cognitive interviews emphasized that emergency preparedness is a team effort in which different health department personnel have working knowledge of different facets of preparedness: health directors have a broad knowledge of emergency preparedness, epidemiologists have in-depth knowledge in a certain area, and preparedness coordinators have the most all-around knowledge. Thus, the research team concluded that the final instrument should be completed by multiple health department staff and should be provided to health departments in a way that facilitates completion by multiple staff.
The larger survey wave 1 sample includes 264 respondents (response rate = 79.3%), a majority (61.6%) of whom are governed by a local board of health. The sample is evenly distributed between LHDs within metropolitan statistical areas (MSAs) (51.7%) and those in non-MSAs (48.3%). LHDs responding to the survey reported an average of 96 full-time employees (FTEs) (median = 54), with some LHDs providing services with as few as 2 FTEs and others with as many as 1025 FTEs. These LHDs and their FTEs serve between 4000 and 1 484 645 residents in their respective counties, with a median population of 54 261 (mean = 109 803). The percentage of residents within these counties living at or below the poverty line ranges from 2.9% to 26.5%, with an average of 12.7%. On average, responding LHDs spend $68.86 per capita (adjusted expenditures) (range: $0.68-$358.97, median = $53.12). Based on Welch 2-sample t tests, there were no significant differences between the characteristics of responding LHDs and those of the total sample.
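The nonresponse check maps directly onto a standard Welch test, which relaxes the equal-variance assumption of the classic 2-sample t test. A minimal sketch with synthetic data, one characteristic at a time (sample sizes match the study; the values do not):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for one LHD characteristic (e.g., per capita expenditures)
# in the total sample (332 agencies) and among respondents (264 agencies);
# the real comparison used the actual survey and sampling-frame values.
total_sample = rng.lognormal(mean=4.0, sigma=0.6, size=332)
respondents = rng.lognormal(mean=4.0, sigma=0.6, size=264)

# Welch 2-sample t test: equal_var=False drops the equal-variance assumption.
t_stat, p_value = stats.ttest_ind(respondents, total_sample, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 suggests no significant difference
```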
Psychometric Testing
Based on pilot test data, factor analysis was used to identify 8 initial domains measuring capacities and capacity-related elements; 7 of these domains performed well on the psychometric measures (Cronbach α > 0.6), with sufficient variation in mean scores to differentiate health departments. We identified these domains as communications, surveillance and investigation, plans and protocols, workforce and volunteers, legal infrastructure, incident command, and exercises and events. The eighth domain, partnerships, had insufficient variation to merit measurement. To the 7 retained domains, we added revised and additional items to assess corrective actions and quality improvement activities. Table 1 outlines and describes the domains and the capacities measured within them.
For 3 of the domains (communication, plans and protocols, and events and exercises), kappa statistics ranged from 0.67 to 0.79, indicating substantial agreement among raters, with statistically significant z scores (Table 2). For the surveillance and investigation and legal infrastructure domains, kappa statistics of 0.48 and 0.39, respectively, indicated moderate and fair agreement, with statistically significant z scores. For the workforce and volunteers domain, the kappa statistic of 0.14 indicated slight agreement, and the z score was not statistically significant. Kappa statistics were also calculated to examine the agreement of raters within agencies. Within agencies, kappa values ranged from 0.51 to 0.85, indicating moderate to strong agreement among raters, with statistically significant z scores.
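For readers replicating the agreement analysis, Cohen's kappa is available in standard tooling; the sketch below uses scikit-learn with invented yes/no ratings and annotates the conventional interpretation bands described by Viera and Garrett.

```python
from sklearn.metrics import cohen_kappa_score

# Invented 0/1 ratings from two raters on the same items, for illustration only.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)

# Conventional interpretation bands (see Viera and Garrett):
# <=0.20 slight, 0.21-0.40 fair, 0.41-0.60 moderate,
# 0.61-0.80 substantial, 0.81-1.00 almost perfect agreement.
print(f"kappa = {kappa:.2f}")
```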
Table 3 presents results of internal consistency and general reliability tests of PCAS preparedness measures from the survey wave 1 sample. The number of items in each domain ranged from 5 to 33. The Cronbach α coefficients ranged from 0.605 for legal infrastructure to 0.929 for corrective action. Overall, the resulting α values indicated a high level of internal consistency among items. In addition, Table 3 presents the average inter-item covariance among the items, which shows very low variance among the measures.
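Cronbach's α has a simple closed form that readers can verify directly; the sketch below implements it from the item variances and total-score variance, checked against synthetic data (the real input would be the 0/1 item responses for one PCAS domain).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative check on synthetic correlated binary items; 264 rows mirror the
# wave 1 sample size, but the values themselves are invented.
rng = np.random.default_rng(1)
latent = rng.normal(size=(264, 1))
items = (latent + rng.normal(size=(264, 10)) > 0).astype(float)
print(round(cronbach_alpha(items), 3))
```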
Table 4 presents the survey wave 1 means and standard deviations for each of the domains, calculated from a simple additive index measure of the corresponding preparedness capacities. The resulting means ranged from 0.41 for workforce and volunteers to 0.94 for events and exercises. The mean score for incident command was also quite high (0.72). Mean scores were in the mid-range (0.41-0.56) for 3 domains and moderately high (0.61-0.72) for 4 domains, indicating sufficient variation in the sample to detect differences on key domains and improvement over time.
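The domain scores in Table 4 follow from the simple additive (unweighted) index; a brief sketch, assuming one row per LHD and invented 0/1 capacity indicators for a single domain:

```python
import pandas as pd

# Hypothetical layout: 0/1 columns indicating whether each capacity is in place.
domain = pd.DataFrame({"cap1": [1, 0, 1], "cap2": [1, 1, 0], "cap3": [1, 1, 1]})
index_score = domain.mean(axis=1)             # proportion of capacities in place, 0-1
print(index_score.mean(), index_score.std())  # domain mean and SD, as in Table 4
```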
Discussion
We used multiple processes to construct a valid and reliable survey instrument to measure preparedness capacities of LHDs. Processes included identifying appropriate instruments from which to draw items, identifying critical practices through engaging experts in a Delphi process, and conducting a thorough pilot test of the draft instrument with cognitive interviewing. The resulting instrument demonstrated face and content validity, along with strong internal consistency and general reliability.
The final 8 survey domains reflected standard LHD preparedness practice.Reference Nelson, Lurie, Wasserman and Zakowski 2 , Reference Stoto 27 Survey wave 1 LHDs had high mean scores on events and exercises and on incident command. This finding may reflect funding priorities and standard practice during the years preceding the survey. Mean scores on the remaining domains were moderate, indicating variation in practice. Measurement with report feedback to LHDs could provide them with improvement opportunities. We provided customized reports to each responding LHD, designed to facilitate LHD benchmarking and improvement processes. Several LHDs have reported finding these customized reports very useful for these purposes, as well as for strategic planning and workforce development.
Limitations
Several limitations to these results are worth noting. First, as with most survey research, data are self-reported and may be subject to response bias. Some authors have advocated verifying self-reports through site visits or having external observers measure performance,Reference Lurie, Wasserman and Stoto 28 yet the resources needed to verify self-reports on a national scale could be prohibitive. Second, given the simple additive nature of the domains and the diversity of capacities measured within them, findings can only be meaningfully presented at the domain level. Given the strong interest in index measures, future work should communicate more clearly the extent to which a single measure or index of indicators can truly capture the entire construct of preparedness. Third, as designed, these capacities are implicitly weighted equally; however, given the lack of evidence connecting capacities, capabilities, and performance, there is persistent debate about how to measure and assess these relationships in public health preparedness.Reference Stoto 27 The present discussion focuses on the development process for the PCAS instrument; in the ongoing effort to introduce valid and reliable measurement instruments, this study used the rigorous analytic methods necessary to support the measurement of public health preparedness.
Conclusion
Results from the survey wave 1 participants in this study add to the few studies that have assessed public health preparedness and response, and they support the assertion that wide variations exist in capacity and practice across communities.Reference Lurie, Wasserman and Stoto 28 The causes of variation and deficiency in public health preparedness are likely to parallel those of undesirable variation in routine public health system performance, although with some key differences. The evidence about what constitutes effective public health preparedness is extremely thin, which means that professional uncertainty is greater in this arena than in routine practice.Reference Asch, Stoto, Mendes and Valdez 3 Similarly, a number of different performance standards for preparedness have been developed by various agencies and organizations, resulting in overlapping and sometimes inconsistent recommendations and program requirements. At the same time, heterogeneity in the composition and structure of public health systems is an important source of variation in preparedness, as in other aspects of public health practice.Reference Duncan, Ginter and Rucks 29
In several areas of public health, including preparedness, evidence-based or consensus-based guidelines do not yet exist, suggesting a need for more research to identify effective practices.Reference Green 30 , Reference Scutchfield, Marks, Perez and Mays 31 Studies suggest that professionals may not be aware of existing guidelines or may lack the financial resources, staff, or legal authority needed to adhere to them.Reference Brownson, Boehmer, Haire-Joshu and Dreisinger 32 - Reference Nanney, Haire-Joshu, Brownson, Kostelc, Stephen and Elliott 35 Since 2011, state and local health departments have become increasingly aware of and educated about the Public Health Preparedness Capabilities: National Standards for State and Local Planning, released by the CDC, which consists of 15 preparedness capabilities and associated functions intended to assist in strategic planning (http://www.cdc.gov/phpr/capabilities/ Accessed January 28, 2013). Our comparison of the present PCAS measures and the CDC capabilities yielded an overlap slightly greater than 60%.Reference Davis, Bevc and Mays 36 Although the field is shifting its emphasis from capacities to capabilities, the elements underpinning the public health emergency preparedness capabilities reflect the essential capacities that local and state health departments need to build and maintain those capabilities. In a dynamic policy environment, it is important to maintain a fundamental understanding of the capacities and elements that contribute to levels of local preparedness. The resulting instrument, and the process used to generate it, will be a useful tool to help federal, state, and local health departments determine how well public health agencies are performing on preparedness measures and identify opportunities for future preparedness improvements.
Acknowledgments
John Wayne, PhD, Carol Gunther-Mohr, MA, and Edward Baker, MD, MPH, assisted in the development, implementation, and analysis of the Preparedness Capacities Assessment Survey.
Funding and Support
The research was carried out by the North Carolina Preparedness and Emergency Response Research Center at the University of North Carolina at Chapel Hill's Gillings School of Global Public Health and was supported by the Centers for Disease Control and Prevention (CDC) Grant 1P01TP000296.
Disclaimer
The contents are solely the responsibility of the authors and do not necessarily represent the official views of the CDC. Additional information can be found at http://cphp.sph.unc.edu/ncperrc.
This study was reviewed and approved by the institutional review boards at the University of North Carolina at Chapel Hill and the University of Arkansas for Medical Sciences.