
Developing National Standards for Public Health Emergency Preparedness With a Limited Evidence Base

Published online by Cambridge University Press:  08 April 2013


Abstract

Objective: The paucity of evidence and wide variation among communities create challenges for developing congressionally mandated national performance standards for public health preparedness. Using countermeasure dispensing as an example, we present an approach for developing standards that balances national uniformity and local flexibility, consistent with the quality of evidence available.

Methods: We used multiple methods, including a survey of community practices, mathematical modeling, and expert panel discussion.

Results: The article presents recommended dispensing standards, along with a general framework that can be used to analyze tradeoffs involved in developing other preparedness standards.

Conclusions: Standards can be developed using existing evidence, but would be helped immensely by a stronger evidence base.

(Disaster Med Public Health Preparedness. 2010;4:285-290)

Type: Original Article

Copyright: © Society for Disaster Medicine and Public Health, Inc. 2010

Efforts to develop measures (ie, the observable “yardsticks” used to judge performance) and standards (thresholds that define how good is good enough on the measures) for public health preparedness have been under way since 2002. However, the lack of frequent real-world opportunities to study preparedness for large-scale public health emergencies has limited the degree to which they can be based on strong empirical evidence. Furthermore, the variation in risk profiles, community characteristics, and governance structures across the nation's 2600 health departments means that standards must strike a balance between the simplicity associated with national uniformity and the need for flexibility to ensure that the standards are not counterproductive in some communities. These challenges notwithstanding, the Pandemic and All-Hazards Preparedness Act (PAHPA)1 makes federal funding to states contingent upon their ability to meet evidence-based performance standards, as assessed by performance measures.

In an effort to respond to the congressional mandate and begin to address the aforementioned challenges, the Department of Health and Human Services Office of the Assistant Secretary for Preparedness and Response asked RAND Corporation to work with the Centers for Disease Control and Prevention's (CDC) Division of Strategic National Stockpile (DSNS) to develop recommended standards on “countermeasure delivery,” the ability to quickly deliver antibiotics, antivirals, or antidotes to the public in the event of an outbreak or some other public health emergency.

The nation's key asset for countermeasure delivery is the Strategic National Stockpile (SNS), a cache of medical countermeasures and supplies managed by CDC and stored in several locations around the country. In a public health emergency, SNS materiel can arrive at affected states within 12 hours of the federal decision to deploy. States are then responsible for the distribution of this materiel to local areas, which are then responsible for dispensing it to the public. The national goal, as specified by the CDC's Cities Readiness Initiative (CRI) program (a federal program developed to help metropolitan areas respond to an anthrax attack), is for metropolitan areas to be able to deliver antibiotics or other medical countermeasures to all individuals within 48 hours of the decision to do so.2
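The timeline above implies a simple time budget: of the 48-hour CRI goal, up to 12 hours may be consumed before SNS materiel even reaches the state. A minimal sketch of that budget (the 48- and 12-hour figures are from the text; how the remainder is split between state distribution and local dispensing is an assumption for illustration):

```python
# Time budget implied by the CRI goal. TOTAL_HOURS and SNS_ARRIVAL_HOURS come
# from the text; how the remainder divides between state-to-local distribution
# and POD dispensing is a planning choice, assumed here for illustration.
TOTAL_HOURS = 48        # CRI goal: countermeasures to everyone within 48 hours
SNS_ARRIVAL_HOURS = 12  # SNS materiel can arrive within 12 hours of deployment

def dispensing_hours(state_distribution_hours: int) -> int:
    """Hours left for POD dispensing after SNS arrival and state distribution."""
    return TOTAL_HOURS - SNS_ARRIVAL_HOURS - state_distribution_hours

# If state-to-local distribution takes 12 hours, PODs have 24 hours to dispense.
print(dispensing_hours(12))
```

The point of the sketch is that every hour spent upstream of the PODs shrinks the dispensing window that local standards must be sized against.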

PAHPA called for evidence-based standards, but the randomized clinical trials and comparison-group study designs ordinarily considered optimal evidence for most clinical or public health interventions3 would be unrealistic in this context. Still, the difficulty of using these study designs in the context of preparedness should not preclude the use of other, albeit less rigorous, sources of evidence and analysis. The standards development process described in this article attempts to address the challenges of weak evidence and the need for flexibility. It does so by using a multipronged approach that supplements and goes beyond the expert consensus-based process often used in the field,4 which often fails to provide enough statistical and practical grounding in the tradeoffs involved in selecting standards.

This article describes the countermeasure dispensing standards, the process used to develop them in the absence of a strong empirical evidence base, and some key lessons this effort provides for individuals who are developing standards in other areas of public health preparedness and homeland security. We also provide a taxonomy of approaches to structuring standards that strikes an appropriate balance between the desire for simple, uniform national standards and the desire to accommodate reasonable local variation in approaches to dispensing, consistent with the quality of available evidence.

METHODS

The Department of Health and Human Services requested that the first group of countermeasure standards focus only on dispensing medications to individuals within points of dispensing (POD); standards for other aspects such as distribution from state warehouses to local areas were left for a future effort. The Department also requested that the standards define minimum requirements for number and location of PODs, internal POD operations, POD staffing, and POD security. These infrastructure standards were viewed as a precursor to future development of standards for operational capabilities.

In developing our approach, we began by examining the standards-development methods used in other sectors, including fire protection and education.5-10 We settled on a mixed approach, in which expert panel deliberations were informed by use of empirical and modeling evidence, as available.

Review of Existing Dispensing Practices

Before convening the panel, we conducted a review of mass dispensing practices in CRI sites. RAND and CDC DSNS staff collected information on current POD infrastructure, plans, and operations from 19 of the 21 original CRI sites (2 sites declined to provide data) to learn about current POD planning and the considerations and tradeoffs that inform POD planners in addressing POD location, staffing, operational, and security issues. The review was originally envisioned as an effort to collect best practices, but in the absence of clear evidence about the outcomes these practices produce, it was difficult to declare any practice "best."

Mathematical Models

We also used mathematical models of POD operations and of POD locations to frame discussion of "what if" questions about the levels of population coverage that could be achieved if cities were to conform to various proposed standards, under various assumptions about communities' geographical characteristics.
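In their simplest form, such "what if" questions reduce to comparing aggregate POD capacity against the population to be served. The following is a deliberately stylized sketch of that comparison, not the project's actual models (which are documented elsewhere); all figures and function names are illustrative:

```python
def predicted_coverage(n_pods: int, throughput_per_hour: int,
                       hours_open: int, population: int) -> float:
    """Fraction of the population served if every POD runs at full throughput.

    A deliberately simple capacity model: real POD models add queueing,
    staffing constraints, travel behavior, and demographic detail.
    """
    capacity = n_pods * throughput_per_hour * hours_open
    return min(1.0, capacity / population)

# "What if" a city of 1.2 million opened 25 PODs at 1000 patients per hour,
# with 36 hours available for dispensing?
print(predicted_coverage(25, 1000, 36, 1_200_000))
```

Even this crude arithmetic lets planners see immediately whether a proposed configuration is in the right ballpark before more detailed modeling is applied.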

Expert Panel

The expert panel included 13 representatives from federal, state, and local health departments, emergency management agencies, and security agencies, and included a blend of subject-matter expertise on countermeasure dispensing and practical experience with health departments (a list of panelists appears at the end of this article). The panel commented on the 4 POD standards areas and was provided with data from the surveys and mathematical modeling to inform the discussion. Ideally, panel deliberations would be backed with evidence linking practices of mass countermeasure delivery to outcomes of reduced morbidity and mortality, the type of research synthesis typically provided to panels on clinical or public health interventions. Although such evidence is unavailable, we were able to present information that helped panelists weigh tradeoffs in stringency and uniformity against flexibility and practicality.

For instance, deliberations about internal POD operations were informed by model-based predictions about the number and composition of staff required by various levels of care. This helped panelists weigh tradeoffs between level of care provided (benefits) and staffing requirements (costs). Use of the models also helped ensure that the resulting standards were aligned with the rather aggressive 48-hour goal of the CRI. Technical detail on the models and the results of the modeling can be found elsewhere.Reference Nelson, Chan and Chandra11

After the expert panel meeting, a first draft of the standards was critiqued by CDC DSNS staff and then by the expert panel. CDC DSNS then distributed a revised version of the draft standards to all 72 CRI sites for review and comment. We received 38 sets of written comments from state and local health departments in 26 states, and oral feedback from some 4 dozen individuals during a pair of 2-hour teleconference sessions. The standards were finalized after consultation with key staff from the Health and Human Services Office of the Assistant Secretary for Preparedness and Response and CDC DSNS.

RESULTS

The process described above yielded 13 recommended standards covering POD locations, internal operations, staffing, and security. The standards ranged from those imposing uniform requirements across all communities to those allowing considerable local flexibility, depending on the strength of the available evidence on that particular area of POD activity. In the following sections, we describe briefly the categories of standards, explain how they mapped to the type of evidence available, and offer examples of standards that fit into these categories. A full list of the standards can be found in the Appendix, and a detailed technical exposition of the standards and methods used to generate them can be found elsewhere.11 Because the standards define minimal levels of performance and do not cover all critical aspects of POD infrastructure, we emphasize that jurisdictions could be fully compliant with the proposed standards and still not be able to mount a fully successful response.

TABLE 2 Appendix. Recommended Infrastructure Standards for PODs

The categories of standards (based loosely on a typology of regulatory tools)12 are shown in Table 1 in increasing order of flexibility (from left to right), paired with decreasing strength of evidence in the rows (top to bottom). As the quality of evidence decreases, the stringency of the standard also decreases, a relation that may be thought of as the diagonal entries of the matrix. The remainder of this section describes each category of standards and the type of evidence used to support it.

TABLE 1 Decreases in Strength of Evidence Necessitate Increases in Degree of Flexibility

Uniform Requirements

The first category of standards imposes a single, uniform requirement on all awardees, regardless of community characteristics. Standards in this category are the least flexible and build on observable outcomes and modeling results. Given the lack of data from randomized controlled trials and strong comparison-group studies based on real incidents, we used our mathematical models to extrapolate from observed evidence and predict outcomes in situations that have not been observed directly. For instance, epidemiological models predict that delivering countermeasures to affected individuals within 48 hours would likely prevent ≥95% of anthrax cases in a metropolitan population.13,14 These models were used to drive the overall CRI target of full-community dispensing within 48 hours, which is an example of a uniform requirement.
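The logic of those epidemiological models can be illustrated with a toy incubation-period calculation: if prophylaxis protects anyone who is not yet symptomatic, the fraction of cases prevented is the fraction of incubation periods longer than the delivery time. The lognormal parameters below are illustrative assumptions for this sketch, not the fitted estimates of the cited models:

```python
import math

# Illustrative lognormal incubation distribution for inhalational anthrax;
# the median and sigma are assumed values for this sketch, not fitted estimates.
MEDIAN_DAYS = 10.95
SIGMA = 0.71

def incubation_cdf(t_days: float) -> float:
    """P(incubation period <= t_days) under the assumed lognormal."""
    z = math.log(t_days / MEDIAN_DAYS) / (SIGMA * math.sqrt(2))
    return 0.5 * (1 + math.erf(z))

def fraction_prevented(delivery_days: float) -> float:
    """Cases averted if prophylaxis protects everyone still asymptomatic."""
    return 1 - incubation_cdf(delivery_days)

# Delivery within 48 hours (2 days) protects the large majority of the exposed:
print(fraction_prevented(2.0))
```

Under these assumed parameters the 48-hour target prevents well over 95% of cases, consistent with the published modeling results; the real models are considerably richer, incorporating dose-dependent incubation and partial antibiotic efficacy.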

Consistency Standards

In some instances, the evidence base is not sufficient to support a single uniform requirement, but it is strong enough to mandate internal consistency among planning elements. These standards prescribe a set of mathematically defined relations among infrastructure elements but leave it to grantees to select which combination of elements is best for their jurisdictions. For instance, simple arithmetic indicates that 10 PODs processing 500 people per hour would, other things being equal, produce the same level of operational output as 20 PODs that process 250 people per hour. Therefore, instead of prescribing a set number of PODs or a required minimum throughput at all PODs, standard 1.2 (Appendix) ensures internal consistency between the number of PODs and other critical planning elements by requiring that jurisdictions' plans adhere to the following mathematical relation:
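The relation can be sketched from the arithmetic example above. The following is a reconstruction consistent with the 10 × 500 versus 20 × 250 equivalence, offered for illustration rather than as the verbatim formula of standard 1.2:

```python
def plan_is_consistent(n_pods: int, throughput_per_pod_per_hour: int,
                       dispensing_hours: int, population: int) -> bool:
    """Consistency check: aggregate planned POD capacity must cover the
    jurisdiction's population within the available dispensing window.

    A reconstruction of the kind of relation described in the text; the
    exact published formula may include additional terms.
    """
    return n_pods * throughput_per_pod_per_hour * dispensing_hours >= population

# 10 PODs at 500/hour and 20 PODs at 250/hour are interchangeable, other
# things being equal, for a jurisdiction of 180,000 with 36 dispensing hours:
print(plan_is_consistent(10, 500, 36, 180_000))
print(plan_is_consistent(20, 250, 36, 180_000))
```

The check constrains only the product of the planning elements, leaving jurisdictions free to trade POD count against per-POD throughput as local circumstances dictate.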

Analytical Standards

In other instances, some knowledge exists but is incomplete. Consequently, expert judgment is required but can be informed by modeling results and other forms of analysis. Given the paucity of real-world experience with mass countermeasure delivery, much of the information considered by the panel fell into this category. These analyses helped panelists weigh the tradeoffs between the benefits of requiring high levels of care at PODs and low travel distances to PODs (as assessed by their judgment) vs the costs of setting up and staffing the PODs (as assessed by our models), but they could not point directly to specific standards. In most such instances, this led panelists to select standards that require jurisdictions to undertake an auditable analytical process, but that do not prescribe specific plans or actions.

For example, when considering standards for the number and location of PODs, the panel was hampered by a lack of evidence regarding the benefit of reducing patient travel distance to PODs. But it was able to calculate the cost (in terms of additional POD sites) of imposing minimal travel distance requirements. Applying mathematical location models to 3 “case” metropolitan areas showed that a standard on maximum travel distance to PODs could be easily met in a dense urban area. However, the same standard, applied to a larger suburban area, would force a jurisdiction to open large numbers of sparsely attended PODs, potentially wasting resources.
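The cost side of that calculation can be approximated with an idealized coverage bound: the number of sites needed grows with service area and shrinks with the square of the allowed travel distance. This sketch makes strong simplifying assumptions (uniform circular coverage, straight-line travel); the location models actually used account for road networks and population distributions:

```python
import math

def min_pods_for_travel_limit(area_sq_miles: float,
                              max_travel_miles: float) -> int:
    """Lower bound on POD sites so no resident is farther than
    max_travel_miles, assuming each POD covers an ideal circle of that
    radius. A geometric floor, not a siting plan."""
    return math.ceil(area_sq_miles / (math.pi * max_travel_miles ** 2))

# The same 2-mile limit is cheap in a compact city, costly in sprawling suburbs:
print(min_pods_for_travel_limit(50, 2))   # dense urban core
print(min_pods_for_travel_limit(500, 2))  # large suburban county
```

The inverse-square dependence on travel distance is what makes a single national distance requirement so expensive for low-density jurisdictions.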

Process Standards

In some cases, the available information could at best be described as requiring a judgment call, because there was little basis for modeling and relatively few data. These panel discussions were informed by the aforementioned survey data on current practice at CRI sites. On the basis of this review, panelists concluded that these standards should simply focus on (nonanalytical) planning processes. For instance, rather than enumerating all of the security requirements at PODs, standard 4.1 (Appendix) requires health department planners to have consulted with security officials in drafting their POD security plans.

In some instances, a combination of weak evidence and failure to achieve consensus produced standards that did not fit neatly into 1 of the 4 groupings described above. For instance, standard 4.3, alternative 1 (Appendix) requires the presence of at least 1 law enforcement officer at each POD location. This recommended standard, which was strongly endorsed by the security experts on the panel, would be categorized as a uniform requirement, even though the outcome evidence was less than clear. Because of the lack of evidence and consensus, the process also produced an alternative version (standard 4.3, alternative 2), which did not require the physical presence of law enforcement at each POD.

COMMENT

The recommended standards were, with minor modifications by CDC DSNS, published as part of the fiscal year 2009 Office of Public Health and Emergency Preparedness Cooperative Agreement guidance.15 Although it is too early to assess the impact of the standards on preparedness, the standards development process yielded several important lessons for other attempts to develop standards for public health preparedness or homeland security.

Consistency, Analytical, and Process Standards Can Help Strike a Balance Between Uniformity and Flexibility

An important function of performance standards is to reduce unwarranted variability. With POD infrastructure, however, it appears that there is a considerable amount of warranted variability, which argues for a considerable degree of flexibility. Standards might address the issue of local variation by focusing on outcomes or outputs, holding jurisdictions or other service providers accountable for demonstrating (through exercises or small-scale incidents) a certain level of operational capability but allowing them to use whatever infrastructure configurations can achieve those goals effectively, efficiently, and reliably.16 Fair enforcement of operational capability standards, however, would almost necessarily rely heavily on the ability to measure operational capabilities. Although considerable progress has been made, the science of public health preparedness measurement is still in its infancy. Thus, in the near term, standards likely must focus to a significant extent on infrastructure configurations. We believe that the types of standards presented in this article (eg, consistency standards, analytical standards, process standards) provide a reasonable approach to finding the right degree of flexibility in national standards.

Nature of Evidence Behind Standards

Given the state of the science in public health preparedness, the congressional mandate for evidence-based standards will be difficult to achieve if "evidence-based" implies the standards of proof that are normally required for clinical and other public health interventions. Thus, it is necessary to take immediate action to improve the evidence base for public health preparedness, including the development of a more systematic approach to collecting exercise-based performance data and information on response processes. This action will facilitate more systematic identification of best practices around which to craft standards. Such data could also support more vigorous attempts to further develop and validate the kind of computer models used in the development of the POD standards.17 Research that may inform the development of these data systems could be undertaken by, among others, the PAHPA-mandated preparedness and emergency response research centers, which conduct public health systems research on preparedness and response capabilities at the national, state, local, and tribal levels.

Given the congressional mandate, standards development cannot be put off until the evidence base has matured. Expert panels and consensus-based methods will remain important standards-development methods for the foreseeable future. However, as this article demonstrates, consensus-based approaches can be guided and supplemented by systematic analysis, if not always by direct empirical evidence from responses. Countermeasure delivery is unique in the extent to which key processes can be represented and modeled mathematically, but it is far from the only such capability; other standards may be usefully informed by models of disease progression and behavioral responses to public health interventions.18-21 Additional lessons for standards may be gleaned from smaller-scale proxy events, such as routine outbreaks of food- and waterborne disease.17,22,23

Need for Additional Policy Guidance

Even with better data, however, it is unlikely that standards development will ever be fully evidence driven. Policymakers must be prepared to make decisions about how much preparedness (and therefore stringency in standards) is worth paying for and how to weigh the benefits of various preparedness investments. They must also be prepared to make tough decisions about how much to hold public health agencies accountable for the actions of other actors. For instance, debate over alternate versions of standard 4.3 raised the vexing issue of whether health departments should be held accountable for the ability and willingness of law enforcement agencies to assign at least 1 officer to each POD.

Importance of Incorporating Stakeholders Into Standards Development

Finally, the POD standards development process demonstrated the importance of involving stakeholders. Gaining some level of buy-in is likely to promote more workable standards and higher degrees of compliance. Standards development can often be contentious, especially when the stakes are high. Although the POD standards process often led to heated debates, anecdotal evidence suggests a broad degree of acceptance among grantees. However, policymakers and others should bear in mind that extensive stakeholder engagement can add considerably to project timelines.

CONCLUSIONS

Performance standards can help define preparedness at an operational level, provide clear targets for improvement, and provide guidance on how much to invest in specific capabilities. The POD infrastructure standards described in this article represent an early attempt to develop and apply a feasible standards-development method for public health preparedness and other homeland security programs. However, efforts to develop standards for specific and detailed aspects of preparedness such as POD infrastructure remain limited by the absence of a national consensus on what is included in preparedness and how much the nation is willing to invest in it. The development of additional standards need not await such a consensus, but would be helped immensely by it.

MEMBERS OF THE EXPERT PANEL ON POD STANDARDS

  • Erik Auf der Heide, CDC

  • Douglas Ball, New York City Department of Health and Mental Hygiene

  • Jeff Blystone, Pennsylvania Department of Health

  • Jody Chattin, Chicago Office of Emergency Management and Communications

  • Ken Kunchick, US Marshals Service

  • Gene Matthews, University of North Carolina

  • Matthew Minson, Maryland Department of Health

  • Matthew Sharpe, Tulsa (Oklahoma) Department of Health

  • Glen Tao, Los Angeles County Department of Health

  • Ruth Thornburg, CDC

  • John H. H. Turner III, Business Executives for National Security

  • George Whitney, Multnomah County, Oregon, Emergency Management

  • Kathy Wood, Montgomery County, Maryland, Department of Health and Human Services

  • Stephanie Dulin and Patricia Pettis, CDC, served as members ex officio

Author Disclosures: The authors report no conflicts of interest. The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.

REFERENCES

1. Pandemic and All-Hazards Preparedness Act, Pub L No. 109-417 (2006).
2. Program Announcement AA154 FY 2008. Atlanta, GA: Centers for Disease Control and Prevention; 2009.
3. Campbell M, Fitzpatrick R, Kinmouth AL, Sandercock P, Spiegelhalter D, Tyrer P. Education and debate: framework for design and evaluation of complex interventions to improve public health. BMJ. 2000;321:694-696.
4. Lenaway D, Halverson P, Sotnikov S, Tilson H, Corso L, Millington W. Public health systems research: setting a national agenda. Am J Public Health. 2006;96(3):410-413.
5. National Fire Protection Association. Performance-based primer: codes & standards preparation. Quincy, MA: NFPA; January 21, 2000. http://www.nfpa.org/assets/files/PDF/PBPrimerCombined.pdf. Accessed October 22, 2010.
6. Canada B. Homeland Security: Standards for State and Local Preparedness. Washington, DC: Congressional Research Service; 2003.
7. American National Standards Institute. Overview of the US standardization system: voluntary consensus standards and conformity assessment activities. http://www.standardsportal.org/usa_en/standards_system.aspx. Published 2007. Accessed September 29, 2010.
8. American National Standards Institute. Essential requirements: due process requirements for American national standards. http://publicaa.ansi.org/sites/apdl/Documents/Standards%20Activities/American%20National%20Standards/Procedures,%20Guides,%20and%20Forms/2010%20ANSI%20Essential%20Requirements%20and%20Related/2010%20ANSI%20Essential%20Requirements.pdf. Published 2008. Accessed September 29, 2010.
9. Lehr W. Understanding the process. J Am Soc Inf Sci. 1992;43:550-555.
10. Cizek G, Bunch M, Koons H. Setting performance standards: contemporary methods. Educ Meas Issues Pract. 2004;Winter:31-50.
11. Nelson C, Chan E, Chandra A, et al. Recommended infrastructure standards for mass antibiotic dispensing. http://www.rand.org/pubs/technical_reports/2008/RAND_TR553.pdf. Published 2008. Accessed September 29, 2010.
12. US Congress Office of Technology Assessment. Preventing illness and injury in the workplace. http://govinfo.library.unt.edu/ota/Ota_4/DATA/1985/8519.PDF. Published 1985. Accessed September 29, 2010.
13. Wilkening DA. Sverdlovsk revisited: modeling human inhalation anthrax. Proc Natl Acad Sci U S A. 2006;103(20):7589-7594.
14. Wein LM, Craft DL, Kaplan EH. Emergency response to an anthrax attack. Proc Natl Acad Sci U S A. 2003;100(7):4346-4351.
15. Centers for Disease Control and Prevention. Point of dispensing (POD) standards. http://www.bt.cdc.gov/cotper/coopagreement/08/pdf/POD.pdf. Published 2008. Accessed October 22, 2010.
16. Coglianese C, Nash J, Olmsted T. Performance-based regulation: prospects and limitations in health, safety and environmental protection. Adm Law Rev. 2003;55:705-730.
17. Nelson C, Beckjord E, Dausey D, et al. How can we strengthen the evidence base for public health emergency preparedness? Disaster Med Public Health Prep. 2008;2:1-4.
18. Moore M, Chan E, Lurie N, et al. Improving Global Influenza Surveillance: Strategies for the U.S. Government. Santa Monica, CA: RAND Corporation; 2007.
19. Fowler RA, Sanders GD, Bravata DM, et al. Cost-effectiveness of defending against bioterrorism: a comparison of vaccination and antibiotic prophylaxis against anthrax. Ann Intern Med. 2005;142(8):601-610.
20. Kaplan EH, Craft DL, Wein LM. Emergency response to a smallpox attack: the case for mass vaccination. Proc Natl Acad Sci U S A. 2002;99(16):10935-10940.
21. Bozzette SA, Boer R, Bhatnagar V, et al. A model for a smallpox-vaccination policy. N Engl J Med. 2003;348(5):416-425.
22. Rendin RW, Welch NM, Kaplowitz LG. Leveraging bioterrorism preparedness for non-bioterrorism events: a public health example. Biosecur Bioterror. 2005;3(4):309-315.
23. Stoto M, Dausey D, Davis L, et al. Learning From Experience: The Public Health Response to West Nile Virus, SARS, Monkeypox, and Hepatitis A Outbreaks in the United States. Santa Monica, CA: RAND Corporation; 2005.