
Improving Public Health Preparedness Capacity Measurement: Development of the Local Health Department Preparedness Capacities Assessment Survey

Published online by Cambridge University Press: 16 December 2013

Mary V. Davis*
Affiliation:
University of North Carolina at Chapel Hill Gillings School of Global Public Health, North Carolina Preparedness and Emergency Response Research Center, Chapel Hill, North Carolina
Glen P. Mays
Affiliation:
University of Arkansas for Medical Sciences College of Public Health, Little Rock, Arkansas
James Bellamy
Affiliation:
University of Arkansas for Medical Sciences College of Public Health, Little Rock, Arkansas
Christine A. Bevc
Affiliation:
University of North Carolina at Chapel Hill Gillings School of Global Public Health, North Carolina Preparedness and Emergency Response Research Center, Chapel Hill, North Carolina
Cammie Marti
Affiliation:
University of Arkansas for Medical Sciences College of Public Health, Little Rock, Arkansas
*
Address correspondence and reprint requests to Mary V. Davis, DrPH, MSPH, University of North Carolina at Chapel Hill Gillings School of Global Public Health, North Carolina Preparedness and Emergency Response Research Center, Campus Box 8165, Chapel Hill, NC 27599 (e-mail: Mary_Davis@unc.edu).

Abstract

Objective

To address limitations in measuring the preparedness capacities of health departments, we developed and tested the Local Health Department Preparedness Capacities Assessment Survey (PCAS).

Methods

Preexisting instruments and a modified 4-cycle Delphi panel process were used to select instrument items. Pilot test data were analyzed using exploratory factor analysis. Kappa statistics were calculated to examine rater agreement within items. The final instrument was fielded with 85 North Carolina health departments and a national matched comparison group of 248 health departments.

Results

Factor analysis identified 8 initial domains: communications, surveillance and investigation, plans and protocols, workforce and volunteers, legal infrastructure, incident command, exercises and events, and corrective action. Kappa statistics and z scores indicated substantial to moderate agreement among respondents in 7 domains. Cronbach α coefficients ranged from 0.605 for legal infrastructure to 0.929 for corrective action. Mean scores and standard deviations were also calculated for each domain; mean scores ranged from 0.41 to 0.72, indicating sufficient variation in the sample to detect changes over time.

Conclusion

The PCAS is a useful tool to determine how well health departments are performing on preparedness measures and identify opportunities for future preparedness improvements. Future survey implementation will incorporate recent Centers for Disease Control and Prevention's Public Health Preparedness Capabilities: National Standards for State and Local Planning. (Disaster Med Public Health Preparedness. 2013;7:578–584)

Type
Original Research
Copyright
Copyright © Society for Disaster Medicine and Public Health, Inc. 2013 

Local health departments (LHDs) are essential to emergency preparedness and response activities. They have statutory authority to perform key functions including community health assessments and epidemiologic investigations, enforcement of health laws and regulations, and coordination of the actions of the agencies in their jurisdictions that comprise the local public health system.[1] However, preparedness also involves specialized functions in incident command, countermeasures and mitigation, mass health care delivery, and management of essential health care supply chains.[2] Moreover, effective emergency preparedness and response may require the performance of routine public health activities under unusual time pressure and resource constraints. Consequently, the ability of an LHD to perform routine public health activities under usual conditions may not predict its capacity for performing emergency preparedness and response activities.[3]

Federal, state, and local public health agencies have made substantial investments to improve the preparedness capacities and capabilities of state and local public health agencies.[4] A lack of valid and reliable data collection instruments, however, has made it difficult to determine whether these investments have improved state and local public health capacities to effectively prevent, detect, or respond to public health emergencies. Although a number of instruments collect self-reported data on the preparedness of state and local public health agencies, few of these instruments have been subjected to formal validity and reliability testing.[3]

Existing conceptual models of public health system performance, which are grounded in organizational sociology and industrial organization economics, stress the importance of the structural characteristics of public health agencies and their relationships with other organizations in the public health system.[5-7] These structural characteristics, including statutory authority, financial and human resources, governance structures, and interorganizational relationships with governmental and private organizations that have relevant resources and expertise, determine the capacity of the system to respond to public health threats.

Valid, reliable measures of LHD preparedness can be used to determine performance levels and identify performance gaps and potential sources of performance variation. Addressing these gaps and reducing unwarranted variation are critical to assuring an appropriate public health response to a variety of public health threats, such as pandemic influenza.[8] To date, one critical roadblock to developing valid, reliable instruments to measure preparedness capacities has been the lack of consensus on a definition of LHD preparedness.[2] Further, using a structure–process–outcome framework, measurement has been limited to structure and process, with little measurement of outcomes.[2,9] Finally, although some preparedness measures have examined nationwide performance, most measurements and assessments have been made at the state level only.[4]

When viewed through the classic structure–process–outcome framework, conceptual challenges for measuring public health emergency preparedness among LHDs include a lack of widely accepted standards for preparedness and a weak evidence base linking structures and processes to outcomes. To address these limitations in measuring LHD preparedness capacities, the North Carolina Preparedness and Emergency Response Research Center developed and tested the Local Health Department Preparedness Capacities Assessment Survey (PCAS) from 2008 to 2010.

Methods

Instrument Development

The PCAS drew on elements from several existing instruments that offered reasonable clarity in measurement, a balance between structural and process measures, and, for several of them, support from prior (although limited) validity and/or reliability testing. The instruments used in survey development are those cited in references 10 through 19.

We used a modified 4-cycle Delphi panel process to select items from each instrument to create an instrument with content and face validity.[20] This strategy is used to gain consensus among a panel of experts through multiple rounds of questionnaires and feedback. The 4 panel members had expertise in public health preparedness research and practice. The Delphi panel process was conducted through email and a series of conference calls. From this process, we developed a public health emergency preparedness instrument that included 116 items.

The items were organized into a web-based, self-administered instrument for pilot testing with diverse LHDs. For pilot testing, each LHD was asked to complete 1 survey with input from key administrative and preparedness staff. Following survey completion, cognitive interviews were conducted with staff who completed the survey to (1) explore the process that individuals and LHDs used to complete the instrument, (2) obtain feedback on instrument content and structure, and (3) identify items on which individuals within an agency disagreed.

The research team revised the instrument based on the pilot testing and cognitive interviews, as well as a study of health department responses to the H1N1 epidemic.[21] The final instrument contains 58 questions with 211 subquestions. The questions ask LHDs to report whether they have a specific capacity, with related subquestions to determine whether they have specific elements associated with that capacity. See Table 1 for a summary of instrument domains and descriptions of the capacities measured.

Table 1 Summary of Preparedness Capacities Assessment Survey Domains and Capacities

In 2010, the research team fielded the final instrument (referred to here as survey wave 1) with the 85 North Carolina (NC) LHDs and 247 comparison health departments identified using a propensity score matching methodology, as part of a study examining differences between a state whose LHDs were exposed to an accreditation program (NC) and states without an accreditation program. Approximately 1900 local public health agencies were eligible for possible matching with NC agencies. Matching was based on data from the 2008 National Association of County and City Health Officials National Profile of Local Health Departments survey, which contains measures of the organizational, financial, and operational characteristics of LHDs and their service areas. Propensity scores were estimated from a logistic regression equation that modeled the likelihood of exposure as a function of 14 public health agency characteristics and community or system characteristics. This model's empirical specification reflected the approach used in previous studies of public health system performance, including controls for public health agency staffing levels, scope of services delivered, annual agency expenditures per capita, population size served, socioeconomic characteristics of the community, and other health resources within the community.[14,22] The nearest neighbor method was used to pair each NC LHD with a comparison agency from another state (1:1, best match), with random selection used to choose among comparison LHDs having the same propensity score.[23,24] To ensure an adequate response rate, additional comparison LHDs were included in the sample.
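As a rough illustration of this matching step, the Python sketch below estimates propensity scores with a logistic model and pairs each exposed agency with its nearest-neighbor comparison, breaking ties at random. It assumes pandas-style data and statsmodels; all names (df, covariates, exposure_col) are hypothetical and this is not the authors' actual code.

import numpy as np
import statsmodels.api as sm

def nearest_neighbor_match(df, covariates, exposure_col, seed=0):
    # Estimate each agency's propensity (modeled likelihood of exposure)
    X = sm.add_constant(df[covariates])
    pscores = sm.Logit(df[exposure_col], X).fit(disp=0).predict(X)
    df = df.assign(pscore=pscores)

    rng = np.random.default_rng(seed)
    exposed = df[df[exposure_col] == 1]
    pool = df[df[exposure_col] == 0].copy()
    pairs = []
    for idx, row in exposed.iterrows():
        dist = (pool["pscore"] - row["pscore"]).abs()
        ties = dist.index[dist == dist.min()]  # comparisons with equal scores
        match = rng.choice(ties)               # random selection among ties
        pairs.append((idx, match))
        pool = pool.drop(match)                # 1:1 match without replacement
    return pairs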

The instrument was programmed for web-based administration with an option for paper completion. Letters of invitation containing information about the study's purpose and instructions for completing the survey were sent to the director and emergency preparedness coordinator of each selected LHD. Postcard and telephone reminders were sent to nonresponding LHDs to achieve a targeted response rate of 80%.

Analysis

Pilot test data were analyzed using exploratory factor analysis to examine the underlying dimensions of preparedness reflected in the specific preparedness activities measured. A varimax (orthogonal) rotation method was used to obtain the most parsimonious and robust solution; however, we tested the sensitivity of the results to this assumption by recomputing factor loadings using a promax (oblique) rotation. Guided by this analysis, we then constructed a principal factor composite variable for each underlying preparedness domain (factor) by computing the unweighted mean and standard deviation of the subset of activity measures that were correlated in each dimension. Reliability measures included kappa statistics, with associated z scores and probabilities, calculated to examine rater agreement within items, domains, and agencies.[25,26] With the larger sample of survey wave 1 data, we assessed the internal consistency of each domain through intraclass correlation coefficients and inter-rater reliability.
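A minimal sketch of this factor analysis step, assuming the third-party factor_analyzer Python package (the authors' software is not reported) and an illustrative loading cutoff of 0.40:

import numpy as np
from factor_analyzer import FactorAnalyzer

def extract_domains(X, n_factors=8, cutoff=0.40):
    # X: agencies x items array of 0/1 preparedness activity measures
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(X)
    # Sensitivity check: recompute the loadings under an oblique rotation
    FactorAnalyzer(n_factors=n_factors, rotation="promax").fit(X)

    domain_scores = {}
    for f in range(n_factors):
        items = np.where(np.abs(fa.loadings_[:, f]) >= cutoff)[0]
        # Composite: unweighted mean of the items loading on this factor
        domain_scores[f] = X[:, items].mean(axis=1)
    return fa.loadings_, domain_scores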

Results

Eleven public health agencies in 3 states (Missouri, Kentucky, and Tennessee) completed the pilot test survey and participated in cognitive interviews. Survey respondents and cognitive interview participants included 12 emergency preparedness personnel, 6 health directors, 4 epidemiologists, and 6 individuals in other job classifications. Themes from the cognitive interviews emphasized that emergency preparedness is a team effort in which different health department personnel have working knowledge of different facets of preparedness: health directors have a broad knowledge of emergency preparedness, epidemiologists have in-depth knowledge in a certain area, and preparedness coordinators have the most all-around knowledge. Thus, the research team concluded that the final instrument should be completed by multiple health department staff and should be provided to health departments in a way that facilitates completion by multiple staff.

The larger survey wave 1 sample includes 264 respondents (response rate, 79.3%), a majority of whom (61.6%) are governed by a local board of health. The sample is evenly distributed between LHDs within metropolitan statistical areas (MSAs) (51.7%) and those in non-MSAs (48.3%). LHDs responding to the survey reported an average of 96 full-time employees (FTEs) (median = 54), with some LHDs providing services with as few as 2 FTEs and others with upward of 1025 FTEs. These LHDs serve between 4000 and 1,484,645 residents in their respective counties, with a median population of 54,261 (mean = 109,803). The percentage of residents within these counties living at or below the poverty line ranges from 2.9% to 26.5%, with an average of 12.7%. On average, responding LHDs spend $68.86 per capita (adjusted expenditures; range, $0.68-$358.97; median = $53.12). Based on a Welch 2-sample t test, the characteristics of responding LHDs did not differ significantly from those of the total sample.
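This kind of response-bias check can be reproduced in scipy, where setting equal_var=False yields a Welch 2-sample t test; the FTE values below are hypothetical:

import numpy as np
from scipy import stats

# Hypothetical FTE counts for responding LHDs vs the full sample
respondent_ftes = np.array([12, 54, 96, 230, 48, 75])
total_sample_ftes = np.array([10, 60, 90, 250, 44, 80, 35, 120])
t_stat, p_value = stats.ttest_ind(respondent_ftes, total_sample_ftes,
                                  equal_var=False)  # Welch's t test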

Psychometric Testing

Based on the pilot test data, factor analysis identified 8 initial domains measuring capacities and capacity-related elements; 7 of these domains performed well on the psychometric measures (Cronbach α > 0.6), with sufficient variation in mean scores to differentiate health departments. We identified these domains as communications, surveillance and investigation, plans and protocols, workforce and volunteers, legal infrastructure, incident command, and exercises and events. The eighth domain, partnerships, had insufficient variation to merit measurement. We therefore revised and increased the number of items assessing corrective actions and quality improvement activities, adding corrective action as an eighth domain. Table 1 outlines and describes the domains and the capacities measured within them.

For 3 of the domains (communications, plans and protocols, and events and exercises), kappa statistics ranged from 0.67 to 0.79, indicating substantial agreement among raters, with statistically significant z scores (Table 2). For the surveillance and investigation and legal infrastructure domains, kappa statistics of 0.48 and 0.39, respectively, indicated moderate to fair agreement, with statistically significant z scores. For the workforce and volunteers domain, the kappa statistic of 0.14 indicated slight agreement, and the z score was not statistically significant. Kappa statistics were also calculated to examine the agreement of raters within agencies. Within agencies, kappa values ranged from 0.51 to 0.85, indicating moderate to strong agreement among raters, with statistically significant z scores.
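For reference, a two-rater kappa can be computed as below; the study's within-agency statistics involved multiple raters, so this is a simplified illustration with hypothetical responses:

from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 0, 1, 0, 1, 1, 0]  # hypothetical yes/no item responses
rater_b = [1, 1, 0, 0, 0, 1, 1, 1]
kappa = cohen_kappa_score(rater_a, rater_b)
# Interpretation bands (Viera and Garrett): 0.81-1.00 almost perfect,
# 0.61-0.80 substantial, 0.41-0.60 moderate, 0.21-0.40 fair, 0-0.20 slight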

Table 2 Pilot Test Inter-Rater Reliability of Preparedness Capacities Assessment Survey Preparedness Measures

Table 3 presents the results of internal consistency and general reliability tests of the PCAS preparedness measures from the survey wave 1 sample. The number of items in each domain ranged from 5 to 33. The Cronbach α coefficients ranged from 0.605 for legal infrastructure to 0.929 for corrective action. Overall, the resulting α values indicated a high level of internal consistency among items. In addition, Table 3 presents the average inter-item covariance among the items, which showed very low variance among the measures.
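Cronbach's α for a domain can be computed directly from its definition, α = k/(k−1) × (1 − Σ item variances / total-score variance); a minimal sketch:

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items array for one domain's measures
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)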

Table 3 Internal Consistency Reliability of Preparedness Capacities Assessment Survey Preparedness Measures, Survey Wave 1

Table 4 presents the survey wave 1 means and standard deviations for each of the domains, calculated from a simple additive index measure of the corresponding preparedness capacities. The resulting means ranged from 0.41 for workforce and volunteers to 0.94 for events and exercises. The mean score for incident command was also quite high (0.72). Mean scores were in the mid-range (0.41-0.56) for 3 domains and moderately high (0.61-0.72) for 4 domains, indicating that variation in the sample on these measures is sufficient to allow detection of variation on key domains and improvement over time.
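Because each domain score is a simple additive index, a mean of 0.72 for incident command implies that responding LHDs reported, on average, 72% of that domain's capacities. A hypothetical worked example:

import numpy as np

# 3 hypothetical LHDs x 5 incident command items (1 = capacity reported)
items = np.array([[1, 1, 0, 1, 1],
                  [1, 0, 1, 1, 1],
                  [0, 1, 1, 1, 0]])
lhd_scores = items.mean(axis=1)  # per-LHD additive index in [0, 1]
wave_mean, wave_sd = lhd_scores.mean(), lhd_scores.std(ddof=1)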

Table 4 Preparedness Capacities Assessment Survey Wave 1 Validation Study

Discussion

We used multiple processes to construct a valid and reliable survey instrument to measure preparedness capacities of LHDs. Processes included identifying appropriate instruments from which to draw items, identifying critical practices through engaging experts in a Delphi process, and conducting a thorough pilot test of the draft instrument with cognitive interviewing. The resulting instrument demonstrated face and content validity, along with strong internal consistency and general reliability.

The final 8 survey domains reflected standard LHD preparedness practice.[2,27] Survey wave 1 LHDs had high mean scores on events and exercises and on incident command. This finding may reflect funding and standard practice during the years preceding the survey. Mean scores on the remaining domains were moderate, indicating that there is variation in practice. Measurement with report feedback to LHDs could provide them with improvement opportunities. We provided customized reports to each responding LHD designed to facilitate LHD benchmarking and improvement processes. Several LHDs have reported that they found these customized reports very useful for these purposes, as well as for strategic planning and workforce development.

Limitations

Several limitations to these results are worth noting. First, as with most survey research, the data are self-reported and may contain response bias. Some authors have advocated verifying self-reports through site visits or having external observers measure performance,[28] yet the resources needed to verify self-reports on a national scale could be prohibitive. Second, given the simple additive nature of the domains and the diversity of capacities measured within them, findings can only be meaningfully presented at the domain level. Given the strong interest in index measures, future discussions should more clearly communicate the extent to which a single measure or index of indicators can truly capture the entire construct of preparedness. Third, as designed, the capacities are implicitly weighted equally; however, given the lack of evidence connecting capacities, capabilities, and performance, debate persists about measurement and assessment relationships in public health preparedness.[27] The present discussion focuses on the development process for the PCAS instrument; in the ongoing effort to introduce valid and reliable measurement instruments, this study applied the rigorous analytic methods necessary to support the measurement of public health preparedness.

Conclusion

Results from the survey wave 1 participants in this study add to the few studies that have assessed public health preparedness and response, and they support the assertion that wide variations exist in capacity and practice across communities.[28] The causes of variation and deficiency in public health preparedness are likely to parallel those of undesirable variation in routine public health system performance, although with some key differences. The evidence about what constitutes effective public health preparedness is extremely thin, which means that professional uncertainty is greater in this arena than in routine practice.[3] Similarly, a number of different performance standards for preparedness have been developed by various agencies and organizations, resulting in overlapping and sometimes inconsistent recommendations and program requirements. At the same time, heterogeneity in the composition and structure of public health systems is an important source of variation in preparedness, as in other aspects of public health practice.[29]

In several areas of public health, including preparedness, evidence-based or consensus-based guidelines do not yet exist, suggesting a need for more research to identify effective practices.[30,31] Studies suggest that professionals may not be aware of existing guidelines or may lack the financial resources, staff, or legal authority needed to adhere to them.[32-35] Since 2011, state and local health departments have become increasingly aware of and educated about the CDC's Public Health Preparedness Capabilities: National Standards for State and Local Planning, which consists of 15 preparedness capabilities and associated functions intended to assist in strategic planning (http://www.cdc.gov/phpr/capabilities/; accessed January 28, 2013). Our comparison of the present PCAS measures and the CDC capabilities yielded an overlap slightly greater than 60%.[36] Although the field is shifting from capacities to capabilities, the elements underpinning the public health emergency preparedness capabilities reflect the essential capacities that local and state health departments need to effectively build and maintain their preparedness capabilities. In a dynamic policy environment, it is important to maintain a fundamental understanding of the capacities and elements that contribute to levels of local preparedness. The resulting instrument, and the process used to generate it, will be a useful tool to help federal, state, and local health departments determine how well public health agencies are performing on preparedness measures and identify opportunities for future preparedness improvements.

Acknowledgments

John Wayne, PhD, Carol Gunther-Mohr, MA, and Edward Baker, MD, MPH, assisted in the development, implementation, and analysis of the Preparedness Capacities Assessment Survey.

Funding and Support

The research was carried out by the North Carolina Preparedness and Emergency Response Research Center at the University of North Carolina at Chapel Hill's Gillings School of Global Public Health and was supported by the Centers for Disease Control and Prevention (CDC) Grant 1P01TP000296.

Disclaimer

The contents are solely the responsibility of the authors and do not necessarily represent the official views of the CDC. Additional information can be found at http://cphp.sph.unc.edu/ncperrc.

This study was reviewed and approved by the institutional review boards at the University of North Carolina at Chapel Hill and the University of Arkansas for Medical Sciences.

References

1. Scutchfield FD, Knight EA, Kelly AV, Bhandari MW, Vasilescu IP. Local public health agency capacity and its relationship to public health system performance. J Public Health Manag Pract. 2004;10(3):204-215.
2. Nelson C, Lurie N, Wasserman J, Zakowski S. Conceptualizing and defining public health emergency preparedness. Am J Public Health. 2007;97(suppl 1):S9-S11.
3. Asch SM, Stoto M, Mendes M, Valdez R, et al. A review of instruments assessing public health preparedness. Public Health Rep. 2005;120(5):532-542.
4. Levi J, Vitner S, Segal LM. Ready or Not? Protecting the Public's Health from Diseases, Disasters, and Bioterrorism, 2009. Princeton, NJ: Robert Wood Johnson Foundation; December 9, 2009.
5. Handler A, Issel M, Turnock B. A conceptual framework to measure performance of the public health system. Am J Public Health. 2001;91(8):1235-1239.
6. Mays GP, Halverson PK. Conceptual and methodological issues in public health performance measurement: results from a computer-assisted expert panel process. J Public Health Manag Pract. 2000;6(5):59-65.
7. Roper WL, Mays GP. Performance measurement in public health: conceptual and methodological issues in building the science base. J Public Health Manag Pract. 2000;6(5):66-77.
8. Schuh RG, Eichelberger RT, Stebbins S, et al. Developing a measure of local agency adaptation to emergencies: a metric. Eval Program Plan. 2012;35(4):473-480.
9. Donabedian A. Evaluating the quality of medical care, 1966. Milbank Q. 2005;83(4):691-729.
10. Costich JF, Scutchfield FD. Public health preparedness and response capacity inventory validity study. J Public Health Manag Pract. 2004;10(3):225-233.
11. Lovelace K, Bibeau D, Gansneder B, Hernandez E, Cline JS. All-hazards preparedness in an era of bioterrorism funding. J Public Health Manag Pract. 2007;13(5):465-468.
12. Beaulieu J, Scutchfield FD. Assessment of validity of the national public health performance standards: the local public health performance assessment instrument. Public Health Rep. 2002;117:28-36.
13. Beaulieu J, Scutchfield FD, Kelly A. Content and criterion validity evaluation of National Public Health Performance Standards measurement instruments. Public Health Rep. 2003;118(6):508-517.
14. Mays GP, Halverson PK, Baker EL, Stevens R, Vann JJ. Availability and perceived effectiveness of public health activities in the nation's most populous communities. Am J Public Health. 2004;94(6):1019-1026.
15. Bhandari MW, Scutchfield FD, Charnigo R, Riddell MC, Mays GP. New data, same story? Revisiting studies on the relationship of local public health systems characteristics to public health performance. J Public Health Manag Pract. 2010;16(2):110-117.
16. Savoia E, Testa MA, Biddinger PD, et al. Assessing public health capabilities during emergency preparedness tabletop exercises: reliability and validity of a measurement tool. Public Health Rep. 2009;124:138-148.
17. Dorn BC, Savoia E, Testa MA, Stoto MA, Marcus LJ. Development of a survey instrument to measure connectivity to evaluate national public health preparedness and response performance. Public Health Rep. 2007;122(3):329-338.
18. Hall JN, Moore S, Shiell A. Assessing the congruence between perceived connectivity and network centrality measures specific to pandemic influenza preparedness in Alberta. BMC Public Health. 2010;10:124.
19. Hall JN, Moore S, Shiell A. The influence of organizational jurisdiction, organizational attributes, and training measures on perceptions of public health preparedness in Alberta. Int J Public Health. 2012;57:159-166.
20. Keeney S, Hasson F, McKenna HP. A critical review of the Delphi technique as a research methodology for nursing. Int J Nurs Stud. 2001;38(2):195-200.
21. Mays GP, Wayne JB, Davis MV, et al. Variation in local public health response to the 2009 H1N1 outbreak: comparative analyses from North Carolina. Paper presented at the American Public Health Association 2009 Annual Meeting; November 7-11, 2009; Philadelphia, PA.
22. Mays GP, McHugh MC, Shim K, et al. Institutional and economic determinants of public health system performance. Am J Public Health. 2006;96(3):523-531.
23. Austin PC, Grootendorst P, Anderson GM. A comparison of the ability of different propensity score models to balance measured variables between treated and untreated subjects: a Monte Carlo study. Stat Med. 2006;26(4):734-753.
24. McShane LM, Midthune DN, Dorgan JF, Freedman LS, Carroll RJ. Covariate measurement error adjustment for matched case-control studies. Biometrics. 2001;57(1):62-73.
25. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37(5):360-363.
26. Harris JK, Beatty K, Barbero C, et al. Methods in public health services and systems research: a systematic review. Am J Prev Med. 2012;42(5 suppl 1):S42-S57.
27. Stoto M. Measuring and assessing public health emergency preparedness. J Public Health Manag Pract. 2013;19(suppl 2):S16-S21.
28. Lurie N, Wasserman J, Stoto M, et al. Local variation in public health preparedness: lessons from California. Health Aff (Millwood). 2004;23(4):291-291.
29. Duncan W, Ginter PM, Rucks AC, et al. Organizing emergency preparedness within United States public health departments. Public Health. 2007;121(4):241-250.
30. Green LW. Public health asks of systems science: to advance our evidence-based practice, can you help us get more practice-based evidence? Am J Public Health. 2006;96(3):406-409.
31. Scutchfield FD, Marks JS, Perez DJ, Mays GP. Public health services and systems research. Am J Prev Med. 2007;33(2):169-171.
32. Brownson RC, Boehmer TK, Haire-Joshu D, Dreisinger ML. Patterns of childhood obesity prevention legislation in the United States. Prev Chronic Dis. 2007;4(3):1-11.
33. Brownson RC, Ballew P, Dieffenderfer B, et al. Evidence-based interventions to promote physical activity: what contributes to dissemination by state health departments. Am J Prev Med. 2007;33(1):66-78.
34. Brownson RC, Ballew P, Brown KL, et al. The effect of disseminating evidence-based interventions that promote physical activity to health departments. Am J Public Health. 2007;97(10):1900-1907.
35. Nanney MS, Haire-Joshu D, Brownson RC, Kostelc J, Stephen M, Elliott M. Awareness and adoption of a nationally disseminated dietary curriculum. Am J Health Behav. 2007;31(1):64-73.
36. Davis MV, Bevc CA, Mays GP. P-CAS: a preliminary metric and baseline for preparedness capacity and capabilities. Presentation at the Public Health Preparedness Summit; February 21-24, 2012; Anaheim, CA.