Efforts to develop measures (ie, the observable “yardsticks” used to judge performance) and standards (thresholds that define how good is good enough on the measures) for public health preparedness have been under way since 2002. However, the lack of frequent real-world opportunities to study preparedness for large-scale public health emergencies has limited the degree to which they can be based on strong empirical evidence. Furthermore, the variation in risk profiles, community characteristics, and governance structures across the nation's 2600 health departments means that standards must strike a balance between the simplicity associated with national uniformity and the need for flexibility to ensure that the standards are not counterproductive in some communities. These challenges notwithstanding, the Pandemic and All-Hazards Preparedness Act (PAHPA)1 makes federal funding to states contingent upon their ability to meet evidence-based performance standards, as assessed by performance measures.
In an effort to respond to the congressional mandate and begin to address the aforementioned challenges, the Department of Health and Human Services Office of the Assistant Secretary for Preparedness and Response asked RAND Corporation to work with the Centers for Disease Control and Prevention's (CDC) Division of Strategic National Stockpile (DSNS) to develop recommended standards on “countermeasure delivery,” the ability to quickly deliver antibiotics, antivirals, or antidotes to the public in the event of an outbreak or some other public health emergency.
The nation's key asset for countermeasure delivery is the Strategic National Stockpile (SNS), a cache of medical countermeasures and supplies managed by CDC and stored in several locations around the country. In a public health emergency, SNS materiel can arrive at affected states within 12 hours of the federal decision to deploy. States are then responsible for the distribution of this materiel to local areas, which are then responsible for dispensing it to the public. The national goal, as specified by the CDC's Cities Readiness Initiative (CRI) program (a federal program developed to help metropolitan areas respond to an anthrax attack), is for metropolitan areas to be able to deliver antibiotics or other medical countermeasures to all individuals within 48 hours of the decision to do so.2
PAHPA called for evidence-based standards, but the randomized clinical trials and comparison-group study designs that would ordinarily be considered optimal evidence for most clinical or public health interventions3 would be unrealistic in this context. Still, the difficulty of applying these study designs to preparedness should not preclude the use of other, albeit less rigorous, sources of evidence and analysis. The standards development process described in this article attempts to address the challenges of weak evidence and the need for flexibility. It does so by using a multipronged approach that supplements and goes beyond the expert consensus-based process often used in the field,4 which often fails to provide enough statistical and practical grounding in the tradeoffs involved in selecting standards.
This article describes the countermeasure dispensing standards, the process used to develop them in the absence of a strong empirical evidence base, and some key lessons this effort provides for individuals who are developing standards in other areas of public health preparedness and homeland security. We also provide a taxonomy of approaches to structuring standards that strikes an appropriate balance between the desire for simple, uniform national standards and the desire to accommodate reasonable local variation in approaches to dispensing, consistent with the quality of available evidence.
METHODS
The Department of Health and Human Services requested that the first group of countermeasure standards focus only on dispensing medications to individuals within points of dispensing (PODs); standards for other aspects, such as distribution from state warehouses to local areas, were left for a future effort. The Department also requested that the standards define minimum requirements for the number and location of PODs, internal POD operations, POD staffing, and POD security. These infrastructure standards were viewed as a precursor to future development of standards for operational capabilities.
In developing our approach, we began by examining the standards-development methods used in other sectors, including fire protection and education.5-10 We settled on a mixed approach, in which expert panel deliberations were informed by use of empirical and modeling evidence, as available.
Review of Existing Dispensing Practices
Before convening the panel, we conducted a review of mass dispensing practices in CRI sites. RAND and CDC DSNS staff collected information on current POD infrastructure, plans, and operations from 19 of the 21 original CRI sites (2 sites declined to provide data) to learn about current POD planning and the considerations and tradeoffs that POD planners weigh in addressing POD location, staffing, operational, and security issues. The review was originally envisioned as an effort to collect best practices, but in the absence of clear evidence linking these practices to outcomes, it was difficult to declare any practice to be “best.”
Mathematical Models
We also used mathematical models of POD operations and POD locations to frame discussion of “what if” questions: what levels of population coverage could be achieved if cities were to conform to various proposed standards, under various assumptions about communities' geographical characteristics?
Expert Panel
The expert panel comprised 13 representatives from federal, state, and local health departments, emergency management agencies, and security agencies, blending subject-matter expertise on countermeasure dispensing with practical health department experience (a list of panelists appears at the end of this article). The panel commented on the 4 POD standards areas and was provided with data from the surveys and mathematical modeling to inform the discussion. Ideally, panel deliberations would be backed by evidence linking practices of mass countermeasure delivery to outcomes of reduced morbidity and mortality, the type of research synthesis typically provided to panels on clinical or public health interventions. Although such evidence is unavailable, we were able to present information that helped panelists weigh tradeoffs in stringency and uniformity against flexibility and practicality.
For instance, deliberations about internal POD operations were informed by model-based predictions about the number and composition of staff required by various levels of care. This helped panelists weigh tradeoffs between the level of care provided (benefits) and staffing requirements (costs). Use of the models also helped ensure that the resulting standards were aligned with the rather aggressive 48-hour goal of the CRI. Technical detail on the models and the results of the modeling can be found elsewhere.11
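The flavor of those model-based staffing predictions can be conveyed with a simple deterministic throughput calculation. The sketch below is illustrative only and is not the queueing model used in the study; the function name, service times, and throughput target are all hypothetical assumptions.

```python
import math

def stations_needed(target_throughput_per_hour, service_minutes_per_person):
    """Minimum number of parallel staffed stations required to sustain a
    target POD throughput, given the average service time at one station.
    A deterministic sketch; real staffing models must also account for
    arrival variability, queue buildup, and surge."""
    people_per_station_per_hour = 60.0 / service_minutes_per_person
    return math.ceil(target_throughput_per_hour / people_per_station_per_hour)

# Hypothetical comparison of two levels of care at a 500 person/hour POD:
# express dispensing with a short interaction vs individual medical
# screening with a longer one. Raising the level of care multiplies the
# staffing requirement roughly in proportion to the service time.
express = stations_needed(500, 2)   # assumed ~2 min/person
screened = stations_needed(500, 5)  # assumed ~5 min/person
```

Even this back-of-envelope version makes the panel's tradeoff concrete: at a fixed throughput target, the staffing cost grows almost linearly with the time spent per person, which is the price of a higher level of care.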
After the expert panel meeting, a first draft of the standards was critiqued by CDC DSNS staff and then by the expert panel. CDC DSNS then distributed a revised version of the draft standards to all 72 CRI sites for review and comment. We received 38 sets of written comments from state and local health departments in 26 states and oral feedback from some 4 dozen individuals during a pair of 2-hour teleconference sessions. The standards were finalized after consulting with key staff from the Health and Human Services Office of the Assistant Secretary for Preparedness and Response and CDC DSNS.
RESULTS
The process described above yielded 13 recommended standards covering POD locations, internal operations, staffing, and security. The standards ranged from those imposing uniform requirements across all communities to those allowing considerable local flexibility, depending on the strength of the available evidence on that particular area of POD activity. In the following sections, we describe briefly the categories of standards, explain how they mapped to the type of evidence available, and offer examples of standards that fit into these categories. A full list of the standards can be found in the Appendix, and a detailed technical exposition of the standards and methods used to generate them can be found elsewhere.11 Because the standards define minimal levels of performance and do not cover all critical aspects of POD infrastructure, we emphasize that jurisdictions could be fully compliant with the proposed standards and still not be able to mount a fully successful response.
TABLE 2 (Appendix). Recommended Infrastructure Standards for PODs

The categories of standards (based loosely on a typology of regulatory tools)12 are shown in the columns of Table 1 in increasing order of flexibility (from left to right); the rows order the available evidence from strongest to weakest (top to bottom). As evidence quality decreases, the stringency of the standard decreases correspondingly, a relation captured by the diagonal entries of the matrix. The remainder of this section describes each category of standards and the type of evidence used to support it.
TABLE 1 Decreases in Strength of Evidence Necessitate Increases in Degree of Flexibility

Uniform Requirements
The first category of standards imposes a single, uniform requirement on all awardees, regardless of community characteristics. Standards in this category are the least flexible and build on observable outcomes and modeling results. Given the lack of data from randomized controlled trials and strong comparison-group studies based on real incidents, we used our mathematical models to extrapolate from observed evidence and predict outcomes in situations that have not been observed directly. For instance, epidemiological models predict that delivering countermeasures to affected individuals within 48 hours would likely prevent ≥95% of anthrax cases in a metropolitan population.13,14 These models drove the overall CRI target of full-community dispensing within 48 hours, which is an example of a uniform requirement.
Consistency Standards
In some instances, the evidence base is not sufficient to support a single uniform requirement, but it is strong enough to mandate internal consistency among planning elements. These standards prescribe a set of mathematically defined relations among infrastructure elements but leave it to grantees to select which combination of elements is best for their jurisdictions. For instance, simple arithmetic indicates that 10 PODs processing 500 people per hour would, other things being equal, produce the same level of operational output as 20 PODs that process 250 people per hour. Therefore, instead of prescribing a set number of PODs or a required minimum throughput at all PODs, standard 1.2 (Appendix) ensures internal consistency between the number of PODs and other critical planning elements by requiring that jurisdictions' plans adhere to the following mathematical relation:

(number of PODs) × (average throughput per POD, in persons per hour) × (hours of dispensing operations) ≥ (population to be served)
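A consistency relation of this kind can be checked mechanically. The sketch below is illustrative only; the function name and inputs are assumptions for exposition, not part of standard 1.2 or any official planning tool.

```python
def plan_is_consistent(num_pods, throughput_per_pod, hours_of_operation, population):
    """Return True if the planned aggregate POD capacity covers the population.

    num_pods            -- number of PODs the jurisdiction plans to open
    throughput_per_pod  -- people processed per POD per hour
    hours_of_operation  -- planned hours of dispensing operations
    population          -- population the jurisdiction must serve

    A hypothetical sketch of an internal-consistency check; it says nothing
    about whether any single element of the plan is feasible on its own.
    """
    capacity = num_pods * throughput_per_pod * hours_of_operation
    return capacity >= population

# The two equivalent configurations from the text: 10 PODs at 500 people/h
# produce the same hourly output as 20 PODs at 250 people/h, so either
# satisfies or fails the relation identically for a given population.
assert 10 * 500 == 20 * 250
```

The point of a consistency standard is visible in the code: the check constrains only the product of the planning elements, leaving the jurisdiction free to trade POD count against per-POD throughput.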
Analytical Standards
In other instances, some knowledge exists but is incomplete. Consequently, expert judgment is required, although it can be informed by modeling results and other forms of analysis. Given the paucity of real-world experience with mass countermeasure delivery, much of the information considered by the panel fell into this category. These analyses helped panelists weigh the tradeoffs between the benefits of requiring high levels of care at PODs and short travel distances to PODs (as assessed by their judgment) vs the costs of setting up and staffing the PODs (as assessed by our models), but they could not point directly to specific standards. In most such instances, this led panelists to select standards that require jurisdictions to undertake an auditable analytical process but that do not prescribe specific plans or actions.
For example, when considering standards for the number and location of PODs, the panel was hampered by a lack of evidence regarding the benefit of reducing patient travel distance to PODs. But it was able to calculate the cost (in terms of additional POD sites) of imposing minimal travel distance requirements. Applying mathematical location models to 3 “case” metropolitan areas showed that a standard on maximum travel distance to PODs could be easily met in a dense urban area. However, the same standard, applied to a larger suburban area, would force a jurisdiction to open large numbers of sparsely attended PODs, potentially wasting resources.
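The cost side of that tradeoff can be approximated with a simple geometric calculation. The sketch below idealizes each POD's service area as a disk; it is not the location model used in the study, and the areas and distance cap are hypothetical. Real siting analyses must respect road networks and population distribution.

```python
import math

def min_pods_for_max_distance(area_sq_miles, max_travel_miles):
    """Lower bound on the number of PODs needed so that no resident is
    farther than max_travel_miles from a POD, approximating each POD's
    coverage as a disk of radius max_travel_miles. A geometric sketch
    only, useful for seeing how the requirement scales with area."""
    coverage_per_pod = math.pi * max_travel_miles ** 2
    return math.ceil(area_sq_miles / coverage_per_pod)

# Hypothetical illustration: a fixed 5-mile travel-distance cap is cheap
# in a compact urban core but costly across a sprawling suburban county,
# because the number of required PODs grows linearly with land area.
dense_city = min_pods_for_max_distance(60, 5)    # assumed ~60 sq mi core
suburban = min_pods_for_max_distance(2000, 5)    # assumed ~2000 sq mi county
```

Even this crude bound shows why the panel resisted a uniform travel-distance requirement: the same cap that one POD can satisfy downtown can force a large county to open dozens of sites, many of which would be sparsely attended.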
Process Standards
In some cases, the available information could at best be described as requiring a judgment call, because there were relatively few data and no basis for modeling. These panel discussions were informed by the aforementioned survey data on current practice at CRI sites. On the basis of this review, panelists concluded that these standards should simply focus on (nonanalytical) planning processes. For instance, rather than enumerating all of the security requirements at PODs, standard 4.1 (Appendix) requires health department planners to have consulted with security officials in drafting their POD security plans.
In some instances, a combination of weak evidence and failure to achieve consensus produced standards that did not fit neatly into 1 of the 4 groupings described above. For instance, standard 4.3, alternative 1 (Appendix) requires the presence of at least 1 law enforcement officer at each POD location. This recommended standard, which was strongly endorsed by the security experts on the panel, would be categorized as a uniform requirement, even though the outcome evidence was less than clear. Because of the lack of evidence and consensus, the process also produced an alternative version (standard 4.3, alternative 2), which does not require the physical presence of law enforcement at each POD.
COMMENT
The recommended standards were, with minor modifications by CDC DSNS, published as part of the fiscal year 2009 Office of Public Health and Emergency Preparedness Cooperative Agreement guidance.15 Although it is too early to assess the impact of the standards on preparedness, the standards development process yielded several important lessons for other attempts to develop standards for public health preparedness or homeland security.
Consistency, Analytical, and Process Standards Can Help Strike a Balance Between Uniformity and Flexibility
An important function of performance standards is to reduce unwarranted variability. With POD infrastructure, however, it appears that there is a considerable amount of warranted variability, which argues for a considerable degree of flexibility. Standards might address the issue of local variation by focusing on outcomes or outputs, holding jurisdictions or other service providers accountable for demonstrating (through exercises or small-scale incidents) a certain level of operational capability but allowing them to use whatever infrastructure configurations can achieve those goals effectively, efficiently, and reliably.16 Fair enforcement of operational capability standards, however, would almost necessarily rely heavily on the ability to measure operational capabilities. Although considerable progress has been made, the science of public health preparedness measurement is still in its infancy. Thus, in the near term, standards likely must focus to a significant extent on infrastructure configurations. We believe that the types of standards presented in this article (eg, consistency standards, analytical standards, process standards) provide a reasonable approach to finding the right degree of flexibility in national standards.
Nature of Evidence Behind Standards
Given the state of the science in public health preparedness, the congressional mandate for evidence-based standards will be difficult to achieve if “evidence-based” implies the standards of proof that are normally required for clinical and other public health interventions. Thus, it is necessary to take immediate action to improve the evidence base for public health preparedness, including the development of a more systematic approach to collecting exercise-based performance data and information on response processes. This action will facilitate more systematic identification of best practices around which to craft standards. Such data could also support more vigorous attempts to further develop and validate the kind of computer models used in the development of the POD standards.17 Research that may inform the development of these data systems could be undertaken by, among others, the PAHPA-mandated preparedness and emergency response research centers, which conduct public health systems research on preparedness and response capabilities at the national, state, local, and tribal levels.
Given the congressional mandate, standards development cannot be put off until the evidence base has matured. Expert panels and consensus-based methods will remain important standards-development methods for the foreseeable future. However, as this article demonstrates, consensus-based approaches can be guided and supplemented by systematic analysis, if not always by direct empirical evidence from responses. Countermeasure delivery is unique in the extent to which key processes can be represented and modeled mathematically, but it is far from the only such capability; other standards may be usefully informed by disease progression models and behavioral responses to public health interventions.18-21 Additional lessons for standards may be gleaned from smaller-scale proxy events, such as routine outbreaks of food- and waterborne disease.17,22,23
Need for Additional Policy Guidance
Even with better data, however, it is unlikely that standards development will ever be fully evidence driven. Policymakers must be prepared to make decisions about how much preparedness (and therefore stringency in standards) is worth paying for and how to weigh the benefits of various preparedness investments. They must also be prepared to make tough decisions about how much to hold public health agencies accountable for the actions of other actors. For instance, debate over alternate versions of standard 4.3 raised the vexing issue of whether health departments should be held accountable for the ability and willingness of law enforcement agencies to assign at least 1 officer to each POD.
Importance of Incorporating Stakeholders Into Standards Development
Finally, the POD standards development process demonstrated the importance of involving stakeholders throughout. Gaining some level of buy-in is likely to promote more workable standards and higher degrees of compliance. Development of standards can often be contentious, especially when the stakes are high. Although the POD standards process often led to heated debates, anecdotal evidence suggests a broad degree of acceptance among grantees. However, policymakers and others should bear in mind that extensive stakeholder engagement can add considerably to project timelines.
CONCLUSIONS
Performance standards can help define preparedness at an operational level, provide clear targets for improvement, and provide guidance on how much to invest in specific capabilities. The POD infrastructure standards described in this article represent an early attempt to develop and apply a feasible standards-development method for public health preparedness and other homeland security programs. However, efforts to develop standards for specific and detailed aspects of preparedness such as POD infrastructure remain limited by the absence of a national consensus on what is included in preparedness and how much the nation is willing to invest in it. The development of additional standards need not await such a consensus, but would be helped immensely by it.
MEMBERS OF THE EXPERT PANEL ON POD STANDARDS
Erik Auf der Heide, CDC
Douglas Ball, New York City Department of Health and Mental Hygiene
Jeff Blystone, Pennsylvania Department of Health
Jody Chattin, Chicago Office of Emergency Management and Communications
Ken Kunchick, US Marshals Service
Gene Matthews, University of North Carolina
Matthew Minson, Maryland Department of Health
Matthew Sharpe, Tulsa (Oklahoma) Department of Health
Glen Tao, Los Angeles County Department of Health
Ruth Thornburg, CDC
John H. H. Turner III, Business Executives for National Security
George Whitney, Multnomah County, Oregon, Emergency Management
Kathy Wood, Montgomery County, Maryland, Department of Health and Human Services
Stephanie Dulin and Patricia Pettis, CDC, served as members ex officio
Author Disclosures: The authors report no conflicts of interest. The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.