Syndromic surveillance uses prediagnostic health-related data to identify outbreaks of disease that may warrant further public health response. Methods of syndromic surveillance include using data sources such as nurse hotline calls, over-the-counter medication purchases, and chief complaints from emergency departments to monitor clusters of similar illness based on shared clinical presentation and, in some cases, to track a single case of reportable disease. In theory, syndromic surveillance allows for earlier detection of epidemics and more timely public health response because it does not rely on clinicians to recognize disease clusters and actively report them to public health departments. Following September 11, 2001, public health departments across the United States invested widely in and implemented syndromic surveillance systems, principally for the early detection of bioterrorism events.[1,2] Despite the significant financial and resource investment to date,[3,4] research on syndromic surveillance has focused primarily on the performance characteristics of systems and comparisons of detection algorithms,[5] with few studies specifically addressing downstream response.[6-11]
Syndromic surveillance has been compared with a smoke detector that cannot serve its intended purpose without a timely public health response launched after aberration detection.[12] Nevertheless, the literature provides only limited and mainly generic descriptions of responses to syndromic surveillance system alerts and related guidance for public health practitioners.[8,12,13] Notably, detailed literature reviews found no uniform approach to developing and evaluating response protocols,[14] and no study has described response protocols across multiple jurisdictions. Furthermore, we are aware of only 1 peer-reviewed study that aimed to inform the development of written protocols in US public health departments, and it was limited to a description of a single system.[6] That study concluded that careful development of an evaluation and response framework was needed.[6]
We conducted semistructured interviews, textual analysis, and Delphi surveys to achieve 2 specific objectives: to thoroughly describe response protocols for syndromic surveillance systems in place in 8 diverse states and their surrounding local public health departments, and to develop a framework for public health departments to use as a guide in initial design and/or enhancement of response protocols. This in-depth study followed a preliminary study that provided a basic snapshot of syndromic surveillance systems in place across 50 US states and the District of Columbia.[2]
METHODS
To address our first study objective, a thorough description of existing response protocols, we selected case studies. Thirty-five states across the country with existing syndromic surveillance systems were eligible for inclusion. (Information regarding the status and types of syndromic surveillance systems in the United States was drawn from a previously conducted national survey and used to inform this sampling strategy.)[2] To secure a diverse sample, eligible states were categorized by population size (<5 million or >5 million) and locus of outbreak response (local level, state level, or both). We elected not to categorize on additional parameters because further classification yielded insufficient sample sizes. Six geographically diverse states were then selected, 1 from each of the 6 mutually exclusive categories (eg, small population, local-level response). An additional 2 states/jurisdictions (New Jersey and the District of Columbia) beyond the initial 6 were selected based on vulnerability to terrorist attacks as defined by Urban Areas Security Initiative criteria.[15] The 6 states at high risk for terrorist attacks according to these criteria were oversampled because we assumed that greater vulnerability would lead to greater investment in syndromic surveillance planning (and thus development of promising practices), and we wanted a robust set of response practices for review. A total of 3 high-risk states, half of the designated high-risk states in the United States, were included as case studies; thus, high-risk states made up slightly less than half of our 8 case studies.
Because no registry of syndromic surveillance users at the state and local levels existed when this study was initiated, interview participants were selected using 3 approaches: State epidemiologists were asked to provide contact information for individuals from local health departments that were responsible for monitoring syndromic surveillance systems, interview participants were asked to identify any additional contacts (ie, snowball sampling), and the study team reviewed the academic and gray literature for examples of health departments engaged in syndromic surveillance activities. From May to September 2008, eligible individuals were contacted by telephone and semistructured interviews were conducted. A survey of a given state was considered complete when no new participants could be identified or we reached data “saturation” (ie, additional interviews with users of a regional system with common response protocols did not yield new information regarding response).
An interview guide was developed (and subsequently piloted and refined) by the research team. The interview guide focused on development, implementation, and actual experience with response protocols, as well as perceived areas for improvement. The interview guide included a broad range of questions that could be modified depending on the health department’s level of involvement in response and the structure of the public health system, but a common set of quantitative attributes was gathered at each interview (eg, the total number of systems that the health department monitored; Appendix 1). After initial piloting with 1 local health department, the guide was revised to inquire about real-world examples of responses and to clarify definitions.
To achieve the second objective—the development of a framework for response for use by public health departments—we assembled an expert panel. Two data sources were used to inform surveys of the expert panel: transcripts of participant (health department) interviews and text of response protocols. From these data sources, we derived a comprehensive list of possible response protocol elements for public health surveillance systems. The list of elements was provided to a panel of 12 national experts through e-mail communications, as part of a modified Delphi process.[16,17] Experts were selected based on the recommendations of the public health practice committee of the International Society for Disease Surveillance and by reviewing authors of recent, related publications. The group included system monitors at the health department level, engineers, public health physicians, medical epidemiologists, and policy experts at various levels of government. Common titles of participants included medical epidemiologist, surveillance epidemiologist, and director of epidemiology/surveillance. The Delphi process, which occurred in November 2008, consisted of 2 rounds of electronic surveys. In the first round, participants received the initial written protocol elements list generated by the study team, and they were asked to comment on the completeness of the list and to add any additional elements. All of the new elements suggested by the participants were included with the exception of obvious duplicates. In the second round, participants received a questionnaire containing all of the elements identified in round 1.
They were asked to consider the question, “How important is it for a health department to include a given element in its written response protocol, given the resource constraints health departments typically face?” With this question in mind, expert respondents were prompted to assign each element a rating of “not necessary,” “desirable,” or “essential.” Elements rated as essential by more than 50% of the experts were incorporated into a framework that could be used by public health departments as a guide in initial design and/or enhancement of response protocols for syndromic surveillance systems. The study was deemed exempt by the Johns Hopkins University institutional review board.
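The >50% consensus rule used in round 2 is simple to operationalize. The sketch below tallies hypothetical second-round ratings against that threshold; the function name and the ratings themselves are illustrative assumptions, not study data (element names are drawn from Appendix 2):

```python
from collections import Counter

def essential_elements(ratings, threshold=0.5):
    """Return elements rated 'essential' by more than `threshold`
    of expert respondents (the >50% consensus rule)."""
    selected = []
    for element, votes in ratings.items():
        counts = Counter(votes)  # tally "essential"/"desirable"/"not necessary"
        if counts["essential"] / len(votes) > threshold:
            selected.append(element)
    return selected

# Hypothetical round-2 ratings from 4 experts (the study used 12).
ratings = {
    "Description of data sources": ["essential", "essential", "essential", "desirable"],
    "Confidentiality statement": ["essential", "essential", "essential", "not necessary"],
    "Limitations of system": ["desirable", "desirable", "not necessary", "essential"],
}

print(essential_elements(ratings))
# ['Description of data sources', 'Confidentiality statement']
```

Under these invented votes, "Limitations of system" falls below the cutoff (1 of 4 experts), mirroring how elements rated essential by 50% or fewer were left out of the framework.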
We generated descriptive statistics on response structures and procedures and coded qualitative data (ie, interview comments) by theme. In this article, we report both quantitative data in the form of simple frequencies and percentages and qualitative data in the form of participant quotations.
RESULTS
Study Objective 1: Description of Existing Protocols for Response
Description of Systems
Every sampled state and 91% (30/33) of identified health departments agreed to participate. In all, 37 individuals at 30 health departments (7 state and 23 local departments) were surveyed (Fig. 1). The professional titles of interviewees included senior epidemiologist, state epidemiologist, regional epidemiologist, surveillance epidemiologist, epidemiologist, epidemiology coordinator/manager/supervisor, and medical director.
FIGURE 1 Participating states
Twenty-three (77%) health departments were found to be active users of syndromic surveillance (ie, staff monitored 1 or more systems at least 1 time per week). Health departments classified as active users monitored an average of 1.6 systems (range 1-3). Systems included Real-Time Outbreak and Disease Surveillance (11), BioSense (5), Electronic Surveillance System for the Early Notification of Community-Based Epidemics (5), homegrown/unique systems (4), National Retail Data Monitor (3), FirstWatch (3), Early Aberration Reporting System (1), BioDefend (1), Syndrome Reporting Information System (1), RedBat (1), and Harvard-Pilgrim (1) (Tables 1 and 2).
TABLE 1 Health Department (HD) Practices Among Active* Users of Syndromic Surveillance (N = 23)
TABLE 2 Description of Syndromic Surveillance System Monitoring Among Active Users (N = 23)
Written Protocols
Among active users of syndromic surveillance, 11 (48%) health departments had a written response protocol. Of these, 60% (6) said protocols had not been updated within the past 12 months, even though minor to major systems changes had occurred. A consistent theme that emerged in interviews was the lack of a systematic process in designing protocols and few available informational resources or templates for response. One health department noted frustration with the lack of guidance, “I was discouraged that when we created our protocol, there were no templates or examples.” Among active users without protocols, the lack of a written document was not generally perceived as a problem that affected public health practice. Typical quotations included the following: “It isn’t an issue that we don’t have a written protocol because our practices are well understood by staff,” and “We would follow the same actions that we would take for many other diseases and events.”
Uses of Syndromic Surveillance
The most common uses of syndromic surveillance included the following: to achieve situational awareness; to confirm, corroborate, or rule out an event of significance; to support traditional epidemiological investigations (eg, an investigation launched per the report of a clinician in the community who contacts the health department); to do targeted case finding; and to analyze trends. Interviewees also noted novel uses such as to identify single cases of reportable disease that were not reported, to guide decision making regarding the issuing of heat advisories, to rule out BioWatch alerts or inform other surveillance activities, and to “work backward” (ie, after seeing a disease in the community, looking at the chief complaint to guide physicians regarding the presentation). Although many health departments noted that the original purpose of syndromic surveillance was early warning/detection, no health department reported using systems for this purpose. Examples of typical statements included the following: “I was a big supporter of syndromic surveillance for early warning early on, but now I am more realistic about the system’s limitations.” “Syndromic surveillance is just 1 of the many tools we use.” “Syndromic surveillance is not the answer for everything; we still mostly rely on informed docs to tell us what is going on.” Two interviewees commented that health departments in the United States tend to be overly “alert dependent.” Comments included the following: “Health departments should not be at the mercy of alerts; they need to develop their own uses for syndromic surveillance.” “Alerts based on statistics are not very helpful. We detect events, but we don’t rely on statistics.”
Regularity of Monitoring
Although the majority of health departments reported logging into systems at least once per day or receiving alert e-mails at any time, 5 (22%) local health departments reported that they could not monitor systems after-hours or on weekends. One local health department that monitored a statewide system explained, “We try to look every day, but if we miss a day, the state backs us up.”
Investigation and Notification
Written protocols and interviews revealed many options for a health department investigating an alert, as follows:
Within-Systems Investigation
• Drill down into system data (ie, describe alert by age, sex, ZIP code, etc)
• Review line list
• Check for obvious false positives (eg, special event, “data dump,” data feed disruption, incorrect assignment of chief complaint into syndrome category, data entry errors)
Beyond-Systems Investigation
• Call facility and inquire about alert
• Request current emergency department logs and latest laboratory results
• Call hospital infection control practitioner to obtain medical record numbers of patients who make up the alert; do chart review
• Contact patients (involved in the alert) directly
• Ask physicians to lower their threshold for diagnostic testing; obtain results
We classified alert response into 2 categories: within-systems and beyond-systems investigation. In a within-systems investigation, the data monitor uses only the tools available through syndromic surveillance or supportive biosurveillance systems. In a beyond-systems investigation, the monitor brings in additional resources such as hospital staff. Interviews revealed that although most alerts are subject to a within-systems investigation, health departments infrequently conduct beyond-systems investigations. Respondents indicated that on average 15% of alerts are tracked beyond the surveillance system. Five health departments responsible for investigation and response in their jurisdiction reported that less than 2% of their alerts are tracked beyond the system. Thus, health departments have thousands of hours of experience with data drill down, but much less experience with downstream steps in the investigation process. Typical quotations included the following: “There is no occasion to date where we called a facility about a particular case,” and “We never had to launch a full investigation. We rule things out by looking at the data.” “Ninety-nine percent of alerts are not investigated further than the line list.”
The leading reasons that health department respondents provided for having conducted so few in-depth investigations were ability to rule out anything suspicious by checking for obvious causes of false positives within the system, limited resources to respond to every signal, and concerns about inconveniencing hospital staff. “Our protocol focuses on receiving/acknowledging an alert and logging into the system. We have less focus on the investigation because we don’t want to agitate hospitals unnecessarily.” “Our original protocol is not followed closely because the infection control practitioners (ICPs) are uncooperative. It is too much work for them to track down patient records.” “The ICPs get frustrated if we call them too often.”
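The 2-tier triage pattern described above—rule out obvious false positives within the system, drill down, and only rarely escalate beyond it—can be expressed as a simple decision routine. The sketch below is a hypothetical illustration; the function name, field names, and escalation criterion are ours, not drawn from any surveyed protocol:

```python
def triage_alert(alert):
    """Classify a syndromic surveillance alert using the 2-tier scheme
    described in the text: most alerts resolve within the system, and
    only a small fraction escalate to beyond-systems investigation."""
    # Step 1: within-systems check for obvious causes of false positives.
    false_positive_causes = (
        "special_event", "data_dump", "feed_disruption",
        "syndrome_misassignment", "data_entry_error",
    )
    if alert.get("explained_by") in false_positive_causes:
        return "ruled out (within-systems)"

    # Step 2: drill down -- describe the signal by age, sex, ZIP code,
    # and review the line list before deciding whether to escalate.
    if not alert.get("clustered_in_person_place_time", False):
        return "monitor (within-systems)"

    # Step 3: beyond-systems investigation -- call the facility, request
    # ED logs and laboratory results, chart review, contact patients.
    return "escalate (beyond-systems)"

print(triage_alert({"explained_by": "data_dump"}))
# ruled out (within-systems)
```

A routine like this makes the reported pattern concrete: unless an alert survives both within-systems checks, it never generates a call to hospital staff, which is consistent with respondents' estimate that roughly 15% of alerts (or far fewer) are tracked beyond the system.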
Although several written protocols outlined detailed notification policies, many interview respondents spoke of notification in a hypothetical manner because they either lacked actual experience in dealing with a truly suspicious alert or felt that because they had regular communication with community partners and internal staff, specific plans for notification were not necessary: “We don’t need a notification plan because the county is in daily communication with its hospitals even in the absence of an alert.” In general, health departments that described notification policies and identified decision makers were unable to provide concrete details regarding the timeline for response.
Several respondents indicated that their local health departments were reluctant to notify the state due to lack of state resources to support an investigation, lack of enthusiasm regarding syndromic surveillance at the state level, and belief that most events could be handled without state involvement. Furthermore, 2 health departments criticized the lack of state coordination across regions/jurisdictions: “The state should coordinate across regions, but it doesn’t.” “Alerts are at the geographic level, but if an alert crosses boundaries there is not state oversight (as there should be) over multijurisdictional issues.”
Study Objective 2: Framework for Response
In the course of the interview process, all 11 health departments with written protocols (100%) submitted them for review by the research team. These protocols were used in conjunction with transcripts from interviews to derive a complete list of possible response protocol elements. Round 1 of the Delphi process identified a total of 68 protocol elements that were categorized into 6 distinct categories: description of system(s), monitoring policies, response procedures, policies on protocol revision, role of syndromic surveillance response plan within additional health department plans, and other. Round 2 highlighted 32 protocol elements within 5 distinct categories as essential elements according to expert consensus (essential elements are those that more than 50% of experts rated as essential, rather than desirable or not necessary):
Description of System
• Description of data sources
• List of participating facilities
• Detection algorithms
• Frequency of data updates/refresh
• Syndrome definitions
• Description of system uses/purposes
• Explanation of how different system uses/purposes impact response
Monitoring Policies
• Responsibilities of public health jurisdiction within the context of regional/multijurisdictional system
• Responsibilities of the data monitor
• Tasks for data monitor during business hours and after-hours
• Special instructions for data monitor in the absence of alerts (ie, reporting “business as usual”)
• Response deadlines/timeline from receipt of alert to action and notification
• Instructions for whether each syndrome is handled in the same manner (ie, special instructions for syndromes of interest)
• Description of data drill down (ie, how to describe the signal)
• Instructions for how to run free text queries if capable
• Confidentiality statement
Response Procedures
• Instructions for within-systems management of alerts:
• Criteria for prioritizing alerts (ie, distinguishing statistical significance from public health significance)
• Instructions regarding use of supportive biosurveillance systems
• Description of supportive biosurveillance systems
• Notification procedures for identification of a single case of reportable disease/important free text element within the data
• Options for beyond-systems investigation:
• Criteria for launching each stage of beyond-systems investigation (often captured in flowchart form)
• Contact information for partners in investigation
• Instructions (and relevant contact information) on internal communication procedures
• Instructions (and relevant contact information) on interfacing with other public health jurisdictions
• Instructions (and relevant contact information) on interfacing with law enforcement
• Templates for communicating essential information to other public health and safety jurisdictions
• Instructions for documentation of investigation steps and outcome
Role of Syndromic Surveillance Response Protocol Within Additional Health Department Plans/Protocols
• Criteria for activation of public health preparedness protocols in response to syndromic surveillance system alerts
• Criteria for activation of public safety/law enforcement protocols in response to syndromic surveillance system alerts
Other
• Instructions for when syndromic surveillance system is down/not functioning
DISCUSSION
Although public health entities have devoted significant human and financial resources to syndromic surveillance system creation and implementation, the usefulness of these systems will be limited without the necessary infrastructure and methods to conduct an effective response and the active participation and cooperation of partners at the hospital level. To our knowledge, this is the first study to describe existing response protocols across multiple jurisdictions or to provide systematically generated guidance on protocol development for health departments monitoring syndromic surveillance systems. Our study revealed that although many health departments lacked written protocols, consensus could be reached regarding essential protocol elements and components.
In-depth interviews with health department staff revealed that less than half of health departments had written response protocols, and the majority of those with protocols had not updated them within 12 months. Interestingly, the health departments without protocols did not seem to regard their absence as a serious problem because response procedures were believed to be implicitly understood and practiced in the context of daily communications with community and public health partners. Although it is widely accepted that plans are an essential first step toward preparing for emergencies, there may be certain situations in which a written plan is more critical and others in which it is less critical. One possible interpretation of our findings is that health department staff may question the need for written protocols because the focus of syndromic surveillance may be shifting gradually away from early warning and toward situational awareness[4,13] (and most protocols are focused on alerts), and because specific instances in which syndromic surveillance first detected previously unknown events of public health significance have rarely been identified. Rather than deny the importance of a plan, which is especially needed in the case of a high-impact, rare event, it may be advantageous for health departments to work toward developing protocols geared to their individual vision and goals for syndromic surveillance.
We argue that there are certain circumstances in which the importance of clear, written protocols cannot be overlooked. Examples include settings in which a health department is not in daily communication with local hospitals in the course of routine public health business, when a health department has multiple system users with different monitoring responsibilities (eg, data monitoring responsibilities rotate among staff), when there is frequent turnover in staff, and when there are complicated jurisdictional issues regarding response. In states with shared systems, in which local departments and the state view the same data (50% of the case study states surveyed) and response can be initiated at different levels, we believe protocols are highly necessary. The fact that more than one-fifth of the health departments surveyed do not monitor systems after-hours may not be so concerning if their data are processed only during working hours (ie, no system/data updates occur after-hours) or they are part of statewide systems in which monitors in other jurisdictions provide after-hours coverage. It may be dangerous, however, if established, agreed-upon protocols do not exist and health departments assume that other jurisdictions are continuously examining their local data and are prepared to respond. Without shared protocols, significant gaps in potentially critical data acquisition, interpretation, and response would exist.
In interviewing health department staff, we also found that less than 15% of alerts were tracked in a beyond-systems investigation; thus, although health departments have logged thousands of hours working with syndromic surveillance data within the system or supportive biosurveillance systems, they have much less experience gathering new information on alerts. Some respondents reported that they perceived resistance from hospital ICPs when they called them to investigate alerts, suggesting the need for educating relevant hospital personnel regarding the role of syndromic surveillance in public health practice, and the benefits of their participation. Even the most detailed written protocol is of little worth if it focuses exclusively on issues internal to the health department and makes assumptions about the motivations and barriers facing hospital staff who participate in investigation. Furthermore, infrequent follow-up in daily practice highlights the need for simulated exercises.
The in-depth description of existing protocols and the Delphi exercise that we undertook in this study informed the development of a generalizable framework for use by public health departments. In the Delphi exercise, experts emphasized elements related to interfacing with systems and communicating findings, and deemphasized elements regarding evaluation of systems and plans and elements that may be described in other public health plans (eg, communications protocols). Although experts believed that the framework could be used by health departments to structure response and outline roles and responsibilities, several commented that a written protocol cannot take the place of specialized judgment and that health departments need to maintain flexibility in response within the context of a documented protocol. With the framework highlighted here, health departments can begin to apply vetted response policies or enhance existing ones; furthermore, having descriptions of the system(s), appropriate analyses, and investigation steps consolidated into a prototype comprehensive plan will provide material for training of new staff. The long-term development of detailed, field-tested protocols that include specific policies regarding monitoring, investigation, and communication will help to achieve system “portability,” a concept described in the Centers for Disease Control and Prevention’s syndromic surveillance evaluation framework.[18] Furthermore, implementation of standardized protocol elements will assist local jurisdictions in integrating outbreak response at regional, state, and national levels. As examples such as Hurricane Katrina have revealed, coordination across numerous levels of government is critical to our national preparedness.[19]
Our study has several limitations, including our decision to use the state epidemiologist as the gatekeeper to identify active users of syndromic surveillance at the local level, and the issue of social desirability bias. First, it is possible that the state epidemiologist would not be aware of all of the syndromic surveillance users within the state, especially in states where public health is particularly decentralized. We addressed this limitation by also relying on snowball sampling and literature reviews. Second, because participants represent their health departments, they may have an interest in casting their activities and capabilities in the best possible light and may therefore exaggerate their use of (and planning regarding) syndromic surveillance. The strengths of our study include the use of multiple data sources, a large case study sample size, and the high response rate.
Further areas for research include evaluation of existing response protocols and how evolving uses for syndromic surveillance (eg, away from early warning) should affect established protocols. Because our study found that most written response protocols were seldom updated, we hypothesize that existing protocols do not reflect these trends.
The framework proposed in this study was based on the experiences of 30 health departments, review of written protocols, and the expert opinion of thought leaders in the field of syndromic surveillance response. Because technical syndromic surveillance capabilities must be tied to human resources for response to achieve actual preparedness, health departments can begin to use resources such as the guidance provided here to improve their abilities to act upon—not merely to detect—cases, events, and trends of public health significance. Next steps should include the creation of a written protocol clearinghouse in which peer-reviewed protocols could be posted and discussed, providing examples and thoughtful commentary for public health practitioners who have limited resources to devote to protocol development. By consulting guidance materials that are publicly available, health departments can begin to adopt common critical elements and revise protocols to support national preparedness goals.
APPENDIX 1
Quantitative Interview Data: Variables Included in Descriptive Statistics
• Total number of syndromic surveillance systems monitored
• Number and types of supportive systems monitored (used to verify signals from syndromic surveillance systems)
• Number of years each syndromic surveillance system has been online
• Existence of written response protocol (yes/no)
• Degree to which protocol is followed (Likert scale)
• Number of syndromic surveillance systems that are configured to alert
• Number of staff who receive alerts
• Method of receiving alerts (categorical)
• Order in which systems are monitored to verify alerts
• Average number of alerts generated per week
• Proportion of alerts that are immediately ruled to be of public health significance (eg, not false positives)
• Proportion of alerts that receive cursory within-systems investigation
• Proportion of alerts that receive full beyond-systems investigation
• Number of cases in the last year in which syndromic surveillance system notified the health department of an event of public health significance that was not previously known
• Number of times per day syndromic surveillance is monitored
• Number of staff who monitor syndromic surveillance data and log in to system
• Number of total hours spent by all staff in monitoring systems and responding to alerts (before full-blown epidemiological investigation)
• Total nonpersonnel costs to maintain system (eg, costs of data provider)
APPENDIX 2
Full Elements List—Essential and Nonessential Elements
Description of System
• Description of data sources
• Description of types of users
• Restrictions in access
• List of participating facilities
• Geographic and population coverage statistics
• Detection algorithms
• Frequency of data updates/refresh
• Syndrome definitions
• Approaches to mapping (options for levels)
• Sources of baseline data
• Description of system’s purpose(s)
• Explanation of how different system uses/purposes impact response
• Limitations of system
Monitoring Policies
• Responsibilities of public health jurisdiction within the context of regional/multijurisdictional system
• Responsibilities of the data monitor
• Tasks for data monitor during business hours and after-hours
• Special instructions for data monitor in the absence of alerts (ie, reporting “business as usual”)
• Response deadlines/timeline from receipt of alert to action and notification
• Instructions on whether each syndrome is handled in the same manner (ie, special instructions for syndromes of interest)
• Description of data drill down (ie, how to describe the signal)
• Examples of types of graphs/tables/maps to generate
• Instructions for how to run free text queries, if the system supports them
• Explanation of authority of data monitor to take investigation steps and make decisions
• Confidentiality statement
Response Procedures
• Instructions for within-systems management of alerts
• Criteria for prioritizing alerts (ie, distinguishing statistical significance from public health significance)
• Instructions regarding use of supportive biosurveillance systems
• Description of supportive biosurveillance systems
• Strengths and weaknesses of supportive systems in verifying syndromic surveillance alerts
• Order in which supportive systems should be reviewed
• Recommendations for handling disagreement among data points/sources
• Description of common causes of false positives
• Any special instructions for events/times of interest (eg, political convention, Olympics)
• Any special instructions for syndromes of interest/syndrome-specific policies
• Instructions for changing the syndrome definition
• Notification procedures for identification of a single case of reportable disease/important free text element within the data
• Options for beyond-systems investigation/description of each option for investigating suspicious anomalies
• Criteria for launching each stage of beyond-systems investigation (often captured in flowchart form)
• Contact information for partners in investigation
• Special instructions and advice for interfacing with hospital staff and likely challenges
• Instructions (and relevant contact information) on internal communication procedures
• Instructions (and relevant contact information) on interfacing with other public health jurisdictions
• Instructions (and relevant contact information) on interfacing with law enforcement
• Templates for communicating essential information to other public health and safety jurisdictions
• Risk communication templates for conveying essential information to the public
• Reverse notification instructions: the process by which a hospital that actively monitors its own data notifies the local monitoring agency
• Instructions for documentation of investigation steps and outcome
• Evaluation procedures:
  • Criteria for acceptable performance
  • Procedures for assessment of logs and other records
  • Procedures for assessment of contributions of the system
  • Description of who is responsible for evaluation and how often evaluation occurs
Policies on Protocol Updating/Revision
• Criteria for updating written protocol
• Frequency of updating
• Approval process for modifying written protocol
Role of Syndromic Surveillance Response Protocol Within Additional Health Department Plans/Protocols
• Criteria for activation of public health preparedness protocols in response to syndromic surveillance system alerts
• Criteria for activation of public safety/law enforcement protocols in response to syndromic surveillance system alerts
Other
• Instructions for when syndromic surveillance system is down/not functioning
Authors' Disclosures: The authors report no conflicts of interest.