Framework for Evaluating Public Health Surveillance Systems for Early Detection of Outbreaks
Recommendations from the CDC Working Group
The material in this report originated in the Epidemiology Program Office, Stephen B. Thacker, M.D., Director, and the Division of Public Health Surveillance and Informatics, Daniel M. Sosin, M.D., Director.
The threat of terrorism and high-profile disease outbreaks has drawn attention to public health surveillance systems for
early detection of outbreaks. State and local health departments are enhancing existing surveillance systems and developing
new systems to better detect outbreaks through public health surveillance. However, information is limited about the usefulness
of surveillance systems for outbreak detection or the best ways to support this function. This report supplements previous guidelines for evaluating public health surveillance systems. Use of this framework is intended to improve decision-making regarding the implementation of surveillance for outbreak detection. Use of a standardized evaluation methodology, including description
of system design and operation, also will enhance the exchange of information regarding methods to improve early detection
of outbreaks. The framework directs particular attention to the measurement of timeliness and validity for outbreak detection. The evaluation framework is designed to support assessment and description of all surveillance approaches to early
detection, whether through traditional disease reporting, specialized analytic routines for aberration detection, or surveillance using early indicators of disease outbreaks, such as syndromic surveillance.
Public health surveillance is the ongoing, systematic collection, analysis, interpretation, and dissemination of data about a health-related event for use in public health action to reduce morbidity and mortality and to improve health (1). Surveillance serves at least eight public health functions. These include supporting case detection and public health interventions, estimating the impact of a disease or injury, portraying the natural history of a health condition, determining the distribution and spread of illness, generating hypotheses and stimulating research, evaluating prevention and control measures, and facilitating planning (2). Another important public health function of surveillance is outbreak detection (i.e., identifying an increase in frequency of disease above the background occurrence of the disease).
Outbreaks typically have been recognized either based on accumulated case reports of reportable diseases or by clinicians and laboratorians who alert public health officials about clusters of diseases. Because of the threat of terrorism and the increasing availability of electronic health data, enhancements are being made to existing surveillance systems, and new surveillance systems have been developed and implemented in public health jurisdictions with the goal of early and complete detection of outbreaks (3). The usefulness of surveillance systems for early detection and response to outbreaks has not been established, and substantial costs can be incurred in developing or enhancing and managing these surveillance systems and investigating false alarms (4). The measurement of the performance of public health surveillance systems for outbreak detection is needed to establish the relative value of different approaches and to provide information needed to improve their efficacy for detection of outbreaks at the earliest stages.
This report supplements existing CDC guidelines for evaluating public health surveillance systems (1). Specifically, the report provides a framework to evaluate timeliness for outbreak detection and the balance among sensitivity, predictive value positive (PVP), and predictive value negative (PVN) for detecting outbreaks. This framework also encourages detailed description of system design and operations and of system experience with outbreak detection.
The framework is best applied to systems that have data to demonstrate the attributes of the system under
consideration. Nonetheless, this framework also can be
applied to systems that are in early stages of development or in the planning phase
by using citations from the published literature to support conclusions. Ideally, the evaluation should compare the
performance of the surveillance system under scrutiny to alternative surveillance systems and produce an assessment of the relative usefulness for early detection of outbreaks.
Early detection of outbreaks can be achieved in three ways: 1) by timely and complete receipt, review, and investigation of disease case reports, including the prompt recognition and reporting to or consultation with health departments by physicians, health-care facilities, and laboratories consistent with disease reporting laws or regulations; 2) by improving the ability to recognize patterns indicative of a possible outbreak early in its course, such as through analytic tools that improve the predictive value of data at an early stage of an outbreak or by lowering the threshold for investigating possible outbreaks; and 3) through receipt of new types of data that can signify an outbreak earlier in its course. These new types of data might include health-care product purchases, absences from work or school, presenting symptoms to a health-care provider, or laboratory test orders (5).
Disease Case Reports
The foundation of communicable disease surveillance in the United States is the state and local application of the reportable disease surveillance system known as the National Notifiable Disease Surveillance System (NNDSS), which includes the listing of diseases and laboratory findings of public health interest, the publication of case definitions for their surveillance, and a system for passing case reports from local to state to CDC. This process works best when two-way communication occurs between public health agencies and the clinical community: clinicians and laboratories report cases and clusters of reportable and unusual diseases, and health departments consult on case diagnosis and management, alerts, surveillance summaries, and clinical and public health recommendations and policies. Faster, more specific, and more affordable diagnostic methods and decision-support tools for diseases with substantial outbreak potential could improve the timely recognition of reportable diseases. Ongoing health-care provider and laboratory outreach, education, and 24-hour access to public health professionals are needed to enhance reporting of urgent health threats. Electronic laboratory reporting (i.e., the automated transfer of designated data from a laboratory database to a public health data repository using a defined message structure) also will improve the timeliness and completeness of reporting notifiable conditions (6--8) and can serve as a model for electronic reporting of a wider range of clinical information. A comprehensive surveillance effort supports timely investigation (i.e., tracking of cases once an outbreak has been recognized) and data needs for managing the public health response to an outbreak or terrorist event.
Statistical tools for pattern recognition and aberration detection can be applied to screen data for patterns warranting further public health investigation and to enhance recognition of subtle or obscure outbreak patterns (9). Automated analysis and visualization tools can lessen the need for frequent and intensive manual analysis of surveillance data.
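As a minimal sketch of the kind of automated screening described above, the following applies an exponentially weighted moving average (EWMA) baseline to daily syndrome counts and flags days that exceed it. The smoothing weight, threshold, initialization window, and counts are illustrative assumptions, not values prescribed by this framework.

```python
import statistics

def ewma_signals(counts, lam=0.4, threshold=3.0, baseline_days=7):
    """Flag days whose count exceeds the EWMA baseline by `threshold`
    standard deviations, using the first `baseline_days` to initialize.
    All tuning values here are hypothetical illustrations."""
    baseline = counts[:baseline_days]
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline) or 1.0  # guard against a flat baseline
    ewma = mean
    flagged = []
    for day, c in enumerate(counts[baseline_days:], start=baseline_days):
        if c > ewma + threshold * sd:
            flagged.append(day)
        ewma = lam * c + (1 - lam) * ewma  # update after testing the day
    return flagged

# A stable week followed by a sharp rise should produce signals on the rise.
daily = [10, 11, 9, 10, 12, 10, 11, 10, 11, 25, 30]
print(ewma_signals(daily))  # flags days 9 and 10
```

In a production system, such a routine would be one of several candidate algorithms whose thresholds are tuned to the locally acceptable false-alarm rate.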
New Data Types
Many new surveillance systems, loosely termed syndromic surveillance systems, use data that are not diagnostic of a disease but that might indicate the early stages of an outbreak. The scope of this framework is broader than these novel systems, yet the wide-ranging definitions and expectations of syndromic surveillance require clarification. Syndromic surveillance for early outbreak detection is an investigational approach where health department staff, assisted by automated data acquisition and generation of statistical signals, monitor disease indicators continually (real-time) or at least daily (near real-time) to detect outbreaks of diseases earlier and more completely than might otherwise be possible with traditional public health methods (e.g., by reportable disease surveillance and telephone consultation). The distinguishing characteristic of syndromic surveillance is the use of indicator data types. For example, a laboratory is a data source that can support traditional disease case reporting by submitting reports of confirmatory laboratory results for notifiable conditions; however, test requests are a type of laboratory data that might be used as an outbreak indicator by tracking excess volume of test requests for diseases that typically cause outbreaks. New data types have been used by public health to enhance surveillance, reflecting events that might precede a clinical diagnosis (e.g., patient's chief complaints in emergency departments, clinical impressions on ambulance log sheets, prescriptions filled, retail drug and product purchases, school or work absenteeism, and constellations of medical signs and symptoms in persons seen in various clinical settings).
Outbreak detection is the overriding purpose of syndromic surveillance for terrorism preparedness. Enhanced case-finding and monitoring the course and population characteristics of a recognized outbreak also are potential benefits
of syndromic surveillance (4). A manual syndromic surveillance system was used to detect additional anthrax cases in the fall
of 2001 once the outbreak was recognized
(10). Complicating the understanding of syndromic surveillance is that
syndromes have been used for case detection and management of diseases when the condition is infrequent and the syndrome is
relatively specific for the condition of interest. Acute flaccid paralysis is a syndromic marker for poliomyelitis and is used to detect single cases of suspected polio in a timely way to initiate investigation and control measures. In this case, the syndrome is
relatively uncommon and serious and serves as a proxy for polio
(11). Syndromes also have been used effectively for surveillance
in resource-poor settings for sexually transmitted disease detection and control where laboratory confirmation is not possible or practical (12). However, syndromic surveillance for terrorism is not intended for early detection of single cases or limited outbreaks because the early clinical manifestations of diseases that might be caused by
terrorism are common and nonspecific (13).
This framework is intended to support the evaluation of all public health surveillance systems for the timely detection
of outbreaks. The framework is organized into four categories: system description, outbreak detection, experience,
and conclusions and recommendations. A comprehensive evaluation will address all four categories.
1. Purpose. The purpose(s) of the system should be explicitly and clearly described and should include the intended uses of the system. The evaluation methods might be prioritized differently for different purposes. For example, if terrorism is expected to be rare, reassurance might be the primary purpose of the terrorism surveillance system. However, for reassurance to be credible, negative results must be accurate and the system should have a demonstrated ability to detect outbreaks of the kind and size being dismissed.
The description of purpose should include the indications for implementing the system; whether the system is designed for short-term, high-risk situations or long-term, continuous use; the context in which the system operates (whether it stands alone or augments data from other surveillance systems); what type of outbreaks the system is intended to detect; and what secondary functional value is desired. Designers of the system should specify the desired sensitivity and specificity of the system and whether it is intended to capture small or large events.
2. Stakeholders. The stakeholders of the system should be listed. Stakeholders include those who provide data for the system and those who use the information generated by the system (e.g., public health practitioners; health-care providers; other health-related data providers; public safety officials; government officials at local, state, and federal levels; community residents; nongovernmental organizations; and commercial systems developers). The stakeholders might vary among different systems and might change as conditions change. Listing stakeholders helps define who the system is intended to serve and provides context for the evaluation results.
3. Operation. All aspects of the operation of the syndromic surveillance system should be described in detail to
allow stakeholders to validate the description of the system and for other interested parties to understand the complexity
and resources needed to operate such a system. Detailed system description also will facilitate evaluation by highlighting
variations in system operation that are relevant to variations in system performance. A conceptual model of the surveillance process (Figure 1) can facilitate the description of the system. The description of the surveillance process should address
1) systemwide characteristics (data flow [Figure 2]), including data and transmission standards to facilitate
interoperability and data sharing between
information systems, security, privacy, and confidentiality; 2) data sources (used broadly in this framework to include the data-producing facility [i.e., the entity sharing data with the public health surveillance system], the data type [e.g., chief complaint, discharge diagnosis, laboratory test order], and the data format [e.g., electronic or paper, text descriptions of events or illnesses, or structured data recorded or stored in standardized format]); 3) data processing before analysis (the data collation, filtering, transformation, and routing functions required for public health to use the data, including the classification and assigning of syndromes); 4) statistical analysis (tools for automated screening of data for potential outbreaks); and
5) epidemiologic analysis, interpretation, and investigation (the rules, procedures, and tools that support decision-making in response to a system signal, including adequate staffing with trained epidemiologists who can review, explore, and interpret the data in
a timely manner).
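The classification and assignment of syndromes named in item 3 of the description above can be illustrated with a simple keyword match against free-text chief complaints. The syndrome categories and keyword lists below are hypothetical examples; operational systems rely on standardized code sets and validated syndrome definitions rather than ad hoc keywords.

```python
# Hypothetical syndrome groupings for illustration only; real systems map
# standardized codes (e.g., ICD) to validated syndrome definitions.
SYNDROME_KEYWORDS = {
    "respiratory": ["cough", "shortness of breath", "pneumonia"],
    "gastrointestinal": ["vomiting", "diarrhea", "nausea"],
    "febrile": ["fever", "chills"],
}

def assign_syndromes(chief_complaint):
    """Return the syndrome categories whose keywords appear in the complaint."""
    text = chief_complaint.lower()
    return sorted(
        syndrome
        for syndrome, keywords in SYNDROME_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    )

# A single complaint can map to more than one syndrome category.
print(assign_syndromes("Fever and productive cough x 3 days"))
```

Documenting the assignment rules in this explicit, reviewable form supports the validation by stakeholders that the framework calls for.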
The ability of a system to reliably detect an outbreak at the earliest possible stage depends on the timely capture and processing of the data produced by transactions of health behaviors (e.g., over-the-counter pharmaceutical sales, emergency department visits, and nurse call-line volume) or health-care activities (e.g., laboratory test volume and triage categorization of chief complaint) that might indicate an outbreak; the validity of the data for measuring the conditions of interest at the earliest stage of illness and the quality of those data; and the detection methods applied to these processed surveillance data to distinguish expected events from those indicative of an outbreak.
1. Timeliness. The timeliness of surveillance approaches for outbreak detection is measured by the lapse of time from exposure to the disease agent to the initiation of a public health intervention. A timeline with interim milestones is proposed to improve the specificity of timeliness measures (Figure 3). Although measuring all of the time points that define the intervals might be impractical or inexact in an applied outbreak setting, measuring intervals in a consistent way can be used to compare alternative outbreak-detection approaches and specific surveillance systems.
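Interval measurement along such a timeline can be sketched as follows. The milestone names and dates are hypothetical stand-ins for the time points in Figure 3; in practice, some points (e.g., exposure) can only be estimated retrospectively.

```python
from datetime import datetime

# Hypothetical milestone timestamps for a single detected outbreak.
milestones = {
    "exposure": datetime(2004, 3, 1, 8, 0),
    "symptom_onset": datetime(2004, 3, 2, 20, 0),
    "care_sought": datetime(2004, 3, 3, 9, 0),
    "data_received": datetime(2004, 3, 3, 23, 0),
    "signal_generated": datetime(2004, 3, 4, 6, 0),
    "intervention_initiated": datetime(2004, 3, 5, 12, 0),
}

def interval_hours(timeline):
    """Hours elapsed between consecutive milestones, in insertion order."""
    names = list(timeline)
    return {
        f"{a}->{b}": (timeline[b] - timeline[a]).total_seconds() / 3600
        for a, b in zip(names, names[1:])
    }

for step, hours in interval_hours(milestones).items():
    print(f"{step}: {hours:.0f} h")
```

Recording intervals rather than single elapsed times makes it possible to attribute delay to specific stages (e.g., data transmission versus epidemiologic follow-up) when comparing systems.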
2. Validity. Measuring the validity of a system for outbreak detection requires an operational definition of an outbreak. Although a statistical deviation from a baseline rate can be useful for triggering further investigation, it is not sufficient for defining an outbreak. In practice, the confirmation of an outbreak is a judgment that depends on past experience with the condition, the severity of the condition, the communicability of the condition, confidence in the diagnosis of the condition, public health concern about outbreaks at the time, having options for effective prevention or control, and the resources required and available to respond. Operationally, an outbreak is defined by the affected public health jurisdiction when the occurrence of a condition has changed sufficiently to warrant public health attention.
The validity of a surveillance system for outbreak detection varies according to the outbreak scenario and surveillance system factors. These factors can confound the comparison of systems and must be carefully described in the evaluation. For example, the minimum size of an outbreak that can be detected by a system cannot be objectively compared among systems unless the systems are identical or the differences between them are accounted for in the analysis.
Different approaches to outbreak detection need to be evaluated under the same conditions to isolate the unique features of the system (e.g., data type) from the outbreak characteristics and the health department capacity. The data needed to evaluate and compare the performance of surveillance systems for early outbreak detection can be obtained from naturally occurring outbreaks or through simulation.
Controlled comparisons of surveillance systems for detection of deliberately induced outbreaks will be difficult because of the infrequency of such outbreaks and the diversity of systems and outbreak settings. However, understanding the value of different surveillance approaches to early detection will increase as descriptions of their experience with detecting and missing naturally occurring outbreaks accumulate. Accumulation of experience descriptions is made more difficult by not having standard methods for measuring outbreak detection successes and failures across systems and by the diversity of surveillance system and outbreak factors that influence performance. Standardized classification of system and outbreak factors will enable comparison of experiences across systems. Pending the development of classification standards, descriptive evaluation should include as much detail as possible. Proxy outbreak scenarios reflect the types of naturally occurring outbreaks that should not be missed to instill confidence in the ability of these systems to detect outbreaks caused by terrorism. Examples of proxy events or outbreaks include seasonal events (e.g., increases in influenza, norovirus gastroenteritis, and other infectious respiratory agents) and community outbreaks (e.g., foodborne, waterborne, hepatitis A, child-care--associated shigellosis, legionellosis, and coccidioidomycosis and histoplasmosis in areas where the diseases are endemic).
The measurement of outbreaks detected, false alarms, and outbreaks missed or detected late should be designed as a routine part of any system workflow and conducted with minimal effort or complexity. Routine reporting should be automated where possible. Relevant information needs include: the number of statistical aberrations detected at a set threshold in a defined period of time (e.g., frequency per month at a given p-value); the action taken as a result of the signals (e.g., review for data errors, in-depth follow-up analysis of the specific conditions within the syndrome category, manual epidemiologic analysis to characterize a signal, examining data from other systems, and increasing the frequency of reporting from affected sites); resources directed to the follow-up of the alert; public health response that resulted (e.g., an alert to clinicians, timely dissemination of information to other health entities, a vaccination campaign, or no further response); documentation of how every recognized outbreak in the jurisdiction was detected; an assessment of the value of the follow-up effort (e.g., the effort was an appropriate application of public health resources); a detailed description of the agent, host, and environmental conditions of the outbreak; and the number of outbreaks detected only late in their course or in retrospect.
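Routine capture of these metrics can be as simple as tallying a signal log. The log entries and disposition labels below are hypothetical; the point is that monthly alert volume and follow-up outcomes fall out of a structured record with no extra analysis effort.

```python
from collections import Counter

# Hypothetical signal log: (month, disposition) pairs recorded by the
# epidemiologist who followed up each statistical aberration.
signal_log = [
    ("2004-06", "data_error"),
    ("2004-06", "no_event"),
    ("2004-06", "outbreak_confirmed"),
    ("2004-07", "no_event"),
    ("2004-07", "no_event"),
]

signals_per_month = Counter(month for month, _ in signal_log)
dispositions = Counter(d for _, d in signal_log)

print(dict(signals_per_month))   # alert volume at the current threshold
print(dict(dispositions))        # what follow-up revealed
```

Automating this accounting supports the framework's later calculations of sensitivity and predictive value without imposing new workflow burden.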
To evaluate the relative value of different methods for outbreak detection, a direct comparison approach is needed. For example, if a health department detects a substantial number of its outbreaks through telephone consultations, then a phone call tracking system might produce the data needed to compare telephone consults with other approaches for early detection of outbreaks.
As an alternative to naturally occurring outbreaks, simulations can allow for the control and modification of agent, host, and environmental factors to study system performance across a range of common scenarios. However, simulations are limited in their ability to mimic the diversity and unpredictability of real-life events. Whenever possible, simulated outbreaks should be superimposed on historical trend data. To evaluate detection algorithms comparatively, a shared challenge problem and data set would be helpful. Simulation is limited by the availability of well-documented outbreak scenarios (e.g., organism or agent characteristics, transmission characteristics, and population characteristics). Simulations should incorporate data for each of the factors described previously. Multiple simulation runs should be used to test algorithm performance in different outbreak scenarios, allowing for generation of operating characteristic curves that reflect performance in a range of conditions.
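A minimal simulation sketch along these lines superimposes an outbreak signal on a baseline series and sweeps the detection threshold to see how detection delay varies. The flat alternating baseline and the linearly growing outbreak shape below are illustrative stand-ins for real historical data and a modeled outbreak scenario.

```python
import statistics

def inject_outbreak(series, start, added_cases):
    """Add simulated outbreak cases to a copy of the baseline series."""
    out = list(series)
    for i, extra in enumerate(added_cases):
        out[start + i] += extra
    return out

def first_detection_day(series, threshold, train=30):
    """First day after the training window whose count exceeds
    mean + threshold * sd of the training window; None if never."""
    mu = statistics.mean(series[:train])
    sd = statistics.stdev(series[:train])
    for day in range(train, len(series)):
        if series[day] > mu + threshold * sd:
            return day
    return None

baseline = [19, 21] * 30  # 60 days of stable counts (hypothetical)
obs = inject_outbreak(baseline, start=40, added_cases=[1, 2, 4, 7, 12])
for threshold in (2.0, 3.0, 4.0):
    print(threshold, first_detection_day(obs, threshold))
```

Repeating such runs across many simulated scenarios, as the framework recommends, yields the operating characteristic curves that summarize the timeliness-versus-false-alarm trade-off.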
Focused studies to validate the performance of limited aspects of systems (e.g., data sources, case definitions, statistical methods, and timeliness of reporting) can provide indirect evidence of system performance. Component studies also can test assumptions about outbreak scenarios and support better data simulation. Syndrome case definitions for certain specific data sources need to be validated. Component validation studies should emphasize outbreak detection over case detection. These studies contain explicit hypotheses and research questions and should be shared in a manner to advance the development of outbreak detection systems without unnecessary duplication.
Statistical Assessment of Validity
Surveillance systems must balance the risk for an outbreak, the value of early intervention, and the finite resources for investigation. Perceived high risk and high value of timely detection support high sensitivity and low thresholds for investigation. A low threshold can prompt resource-intensive investigations and occupy vital staff, and a high threshold might delay detection and intervention. The perceived threat of an outbreak, the community value attached to early detection, and the investigation resources available might vary over time. As a result, specifying a fixed relation between optimal sensitivity and predictive value for purposes of evaluation might be difficult.
Sensitivity, PVP, and PVN are closely linked and are considered together in this framework. Sensitivity is the percentage of outbreaks occurring in the jurisdiction that are detected by the system. PVP reflects the probability of a system signal being an outbreak. PVN reflects the probability that no outbreak is occurring when the system does not yield a signal. The calculation of sensitivity and predictive value is described in detail in the updated guidelines for evaluating public health surveillance systems (1). Measurement of sensitivity requires an alternative data source of high quality (e.g., "gold" standard) to confirm outbreaks in the population that were missed by the surveillance system. Sensitivity for outbreak detection could be assessed through capture-recapture techniques with two independent data sources (14). The high costs associated with responding to false alarms and with delayed response to outbreaks demand efforts to quantify and limit the impact of both. As long as the likelihood of terrorism is extremely low, PVP will remain near zero and a certain level of nonterrorism signals will be a necessary part of conducting surveillance for the detection of terrorism. Better performance can be achieved in one attribute (e.g., sensitivity) without a performance decrement in another (e.g., PVP) by changing the system (e.g., adding a data type or applying a better detection algorithm). Improving sensitivity by lowering the cut-off for signaling an outbreak will reduce PVP. Sensitivity and PVP for these surveillance systems will ultimately be calibrated in each system to balance the secondary benefits (e.g., detection of naturally occurring outbreaks, disease case finding and management, reassurance of no outbreak during periods of heightened risk, and a stronger reporting and consultation relation between public health and clinical medicine) with the locally acceptable level of false alarms.
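The arithmetic behind these attributes can be sketched from a cross-tabulation of signals against confirmed outbreaks, together with a two-source capture-recapture estimate of the total outbreak count. All counts below are hypothetical, and the Chapman estimator is one common choice for the capture-recapture calculation cited above, not a method mandated by this framework.

```python
def detection_metrics(true_pos, false_pos, true_neg, false_neg):
    """Sensitivity, PVP, and PVN from a 2x2 cross-tabulation where the
    units of analysis are surveillance periods or candidate events."""
    sensitivity = true_pos / (true_pos + false_neg)
    pvp = true_pos / (true_pos + false_pos)
    pvn = true_neg / (true_neg + false_neg)
    return sensitivity, pvp, pvn

def chapman_estimate(n1, n2, both):
    """Chapman's nearly unbiased two-source capture-recapture estimator
    of the total number of outbreaks, including those neither source found."""
    return (n1 + 1) * (n2 + 1) / (both + 1) - 1

sens, pvp, pvn = detection_metrics(true_pos=8, false_pos=32,
                                   true_neg=320, false_neg=2)
print(f"sensitivity={sens:.2f} PVP={pvp:.2f} PVN={pvn:.3f}")

# Two sources: system signals detected 12 outbreaks, clinician reports
# detected 9, and 6 were detected by both (hypothetical figures).
print(round(chapman_estimate(12, 9, 6)))
```

Note how the example reflects the text: with few true events, PVP stays low (0.20 here) even while PVN remains high.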
The validity of syndromic surveillance system data is dependent on data quality. Error-prone systems and data prone to inaccurate measurement can negatively affect detection of unusual trends. Although data quality might be a less critical problem for screening common, nonspecific indicators for statistical aberrations, quality should be evaluated and improved to the extent possible. Measuring data quality is dependent on a standard (e.g., medical record review or fabricated test data with values known to the evaluator). The updated guidelines for evaluating public health surveillance systems (1) describe data quality in additional detail.
C. System Experience
The performance attributes described in this section convey the experience that has accrued in using the system.
1. System usefulness. The usefulness of a surveillance system for outbreak detection depends on its contribution to the early detection of outbreaks of public health significance and on whether that detection leads to an effective intervention. An assessment of usefulness goes beyond detection to address the impact or value added by its application. Measurement of usefulness is inexact. As with validity, measurement will benefit from common terminology and standard data elements. In the interim, detailed efforts to describe and illustrate the consequences of early detection efforts will improve understanding of their usefulness.
Evaluation should begin with a review of the objectives of the system and should consider the priorities. To the extent possible, usefulness should be described by the disease prevention and control actions taken as a result of the analysis and interpretation of the data from the system.
The impact of the surveillance system should be contrasted with other mechanisms available for outbreak detection. An assessment of usefulness should list the outbreaks detected and the role that different methods played in the identification of each one. Examples of how the system has been used to detect or track health problems other than outbreaks in the community should be included. The public health response to the outbreaks and health problems detected should be described as well as how data from new or modified surveillance systems support inferences about disease patterns that would not be possible without them.
Surveillance systems for early outbreak detection are sometimes justified for the reassurance they provide when aberrant patterns are not apparent during a heightened risk period or when the incidence of cases declines during an outbreak. When community reassurance is claimed as a benefit of the surveillance system, reassurance should be defined and the measurement quantified (e.g., number of phone calls from the public on a health department hotline, successful press conferences, satisfaction of public health decision-makers, or resources to institutionalize the new surveillance system). A description should include who is reassured and of what they are reassured, and reassurance should be evaluated for validity by estimating the PVN.
2. Flexibility. The flexibility of a surveillance system refers to the system's ability to change as needs change. The adaptation to changing detection needs or operating conditions should occur with minimal additional time, personnel, or other resources. Flexibility generally improves the more data processing is handled centrally rather than distributed to individual data-providing facilities because fewer system and operator behavior changes are needed. Flexibility should address the ability of the system to apply evolving data standards and code sets as reflected in Public Health Information Network (PHIN) standards (http://www.cdc.gov/phin). Flexibility includes the adaptability of the system to shift from outbreak detection to outbreak management. The flexibility of the system to meet changing detection needs can include the ability to add unique data to refine signal detection, to capture exposure and other data relevant to managing an outbreak, to add data providers to increase population coverage and detect or track low frequency events, to modify case definitions (the aggregation of codes into syndrome groupings), to improve the detection algorithm to filter random variations in trends more efficiently, and to adjust the detection threshold. Flexibility also can be reflected by the ability of the system to detect and monitor naturally occurring outbreaks in the absence of terrorism. System flexibility is needed to balance the risk for an outbreak, the value of early intervention, and the resources for investigation as understanding of these factors changes.
3. System acceptability. As with the routine evaluation of public health surveillance systems (1), the acceptability of a surveillance system for early outbreak detection is reflected by the willingness of participants and stakeholders to contribute to the data collection and analysis. This concept includes the authority and willingness to share electronic health data and should include an assessment of the legal basis for the collection of prediagnosis data and the implications of privacy laws (e.g., Health Insurance Portability and Accountability Act Privacy Rule) (15). All states have broad disease-reporting laws that require reporting of diseases of public health importance, and many of these laws appear compatible with the authority to receive syndromic surveillance data (16). The authority to require reporting of indicator data for persons who lack evidence of a reportable condition and in the absence of an emergency is less clear and needs to be verified by jurisdictions. Acceptability can vary over time as the threat level, perceived value of early detection, support for the methods of surveillance, and resources fluctuate.
Acceptability of a system can be inferred from the extent of its adoption. Acceptability is reflected by the participation rate of potential reporting sources, by the completeness of data reporting, and by the timeliness of person-dependent steps in the system (e.g., manual data entry from emergency department logs as distinguished from electronic data from the normal clinical workflow).
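These acceptability indicators reduce to simple proportions and delay summaries. The following sketch, with hypothetical figures, shows one way to compute participation among eligible reporting sources, completeness of expected reports, and the lag of a person-dependent step such as manual data entry.

```python
def acceptability_indicators(eligible_sites, participating_sites,
                             expected_reports, received_reports,
                             entry_delays_hours):
    """Participation rate, reporting completeness, and median manual-entry
    delay (odd-length delay list assumed for the simple median)."""
    participation = participating_sites / eligible_sites
    completeness = received_reports / expected_reports
    median_delay = sorted(entry_delays_hours)[len(entry_delays_hours) // 2]
    return participation, completeness, median_delay

# All inputs are hypothetical illustrations.
p, c, d = acceptability_indicators(
    eligible_sites=20, participating_sites=15,
    expected_reports=420, received_reports=378,
    entry_delays_hours=[2, 4, 3, 26, 5],  # manual-entry lag per batch
)
print(f"participation={p:.0%} completeness={c:.0%} median_delay={d}h")
```

Tracking these values over time would reveal the fluctuations in acceptability that the framework anticipates as threat level and resources change.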
4. Portability. The portability of a surveillance system addresses how well the system could be duplicated in another setting. Adherence to the PHIN standards can enhance portability by reducing variability in the application of information technology between sites. Reliance on person-dependent steps, including judgment and action criteria (e.g., for analysis and interpretation), should be fully documented to improve system portability. Portability also is influenced by the simplicity of the system. Examples should be provided of the deployment of similar systems in other settings, and the experience of those efforts should be described. In the absence of examples, features of the system that might support or detract from portability should be described.
5. System stability. The stability of a surveillance system refers to its resilience to system changes (e.g., a change in coding from the International Classification of Diseases, Ninth Revision [ICD-9] to ICD-10). Stability can be demonstrated by the duration and consistent operation of the system. System stability is distinguished from the reliability of data elements within the system; the consistent representation of the condition under surveillance (reliability) is an aspect of data quality. Stability can be measured by the frequency of system outages or downtime for servicing during periods of need (including downtime of data providers) and by the frequency of personnel gaps caused by staff turnover or budget constraints. Ongoing support by system designers and evolving software updates might improve system stability. Stability also can be reflected in the extent of control over costs and system changes that the sponsoring agency maintains.
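For illustration, downtime during an evaluation period might be summarized as a fraction of scheduled operating time. This sketch assumes a simple outage log of (start, end) timestamps; the log format and the example outages are invented, not taken from any described system.

```python
# Illustrative sketch: measuring one aspect of system stability as the
# fraction of the evaluation period lost to outages (including outages
# of data providers). Outages are clipped to the evaluation window.

from datetime import datetime

def downtime_fraction(outages, period_start, period_end):
    """Fraction of the evaluation period during which the system was down."""
    period = (period_end - period_start).total_seconds()
    down = sum((min(end, period_end) - max(start, period_start)).total_seconds()
               for start, end in outages
               if end > period_start and start < period_end)
    return down / period

outages = [
    (datetime(2004, 5, 1, 2, 0), datetime(2004, 5, 1, 6, 0)),      # 4 h maintenance
    (datetime(2004, 5, 10, 12, 0), datetime(2004, 5, 10, 13, 0)),  # 1 h data-feed outage
]
frac = downtime_fraction(outages, datetime(2004, 5, 1), datetime(2004, 5, 31))
print(round(frac, 4))  # 5 hours down out of a 720-hour period
```

A complementary measure would weight downtime by need, counting an outage during a high-alert period more heavily than routine overnight servicing.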
6. System costs. Cost is a vital factor in assessing the relative value of surveillance for terrorism preparedness. Cost-effectiveness analyses and data modeling are needed under a range of scenarios to estimate the value of innovations in surveillance for outbreak detection and terrorism preparedness (17). Improved methods of measuring cost and impact are needed. Costs borne by data providers should be noted; however, the cost perspective should be that of the community (i.e., the societal perspective), to account for the costs of prevention and treatment borne by the community.
Direct costs include the fees paid for software and data, personnel salary and support expenses (e.g., training, equipment support, and travel), and other resources needed to operate the system and produce information for public health decisions (e.g., office supplies, Internet and telephone lines, and other communication equipment). Fixed costs for running the system should be differentiated from the variable costs of responding to system alarms. Variable costs include the cost of follow-up activities (e.g., diagnosis, case management, or community interventions). The cost of responding to false alarms represents a variable but inherent inefficiency of an early detection system that should be accounted for in the evaluation. Similarly, variable costs include the financial and public health costs of missing outbreaks entirely or recognizing them late. Costs vary because the sensitivity and timeliness of the detection methods can be modified according to changes in tolerance for missing outbreaks and for responding to false alarms. Similarly, the threshold and methods for investigating system alarms can vary with the perceived risk and need to respond. Costs from public health response to false alarms with traditional surveillance systems need to be measured in a comparable way when assessing the relative value of new surveillance methods. Cost savings should be estimated by assessing the impact of prevention and control efforts (e.g., health-care costs and productivity losses averted).
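The fixed-versus-variable cost distinction above can be sketched as a simple expected-cost calculation. All dollar figures, alarm rates, and the false-alarm fraction below are hypothetical; a real analysis would take the societal perspective and also credit averted costs.

```python
# Minimal, hypothetical cost sketch separating the fixed cost of operating
# a surveillance system from the variable cost of responding to alarms,
# including the inherent inefficiency of investigating false alarms.

def annual_cost(fixed_cost, alarms_per_year, false_alarm_fraction,
                cost_per_investigation, cost_per_true_response):
    """Expected annual cost = fixed operation + alarm follow-up."""
    true_alarms = alarms_per_year * (1 - false_alarm_fraction)
    # Every alarm triggers an investigation; only true alarms incur the
    # added cost of a full public health response.
    investigation = alarms_per_year * cost_per_investigation
    response = true_alarms * cost_per_true_response
    return fixed_cost + investigation + response

total = annual_cost(fixed_cost=250_000, alarms_per_year=52,
                    false_alarm_fraction=0.9, cost_per_investigation=1_500,
                    cost_per_true_response=40_000)
print(round(total))
```

Because alarm thresholds are tunable, such a model makes explicit the trade-off noted above: lowering the threshold raises the false-alarm (investigation) cost, while raising it increases the expected cost of missed or late-recognized outbreaks.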
D. Conclusions and Recommendations for Use and Improvement of Systems for Early Outbreak Detection
The evaluation should be summarized to convey the strengths and weaknesses of the system under scrutiny. Summarizing and reporting evaluation findings should facilitate the comparison of systems for those making decisions about new or existing surveillance methods. These conclusions should be validated among stakeholders of the system and modified accordingly. Recommendations should address adoption, continuation, or modification of the surveillance system so that it can better achieve its intended purposes. Recommendations should be disseminated widely and actively interpreted for all appropriate audiences.
An Institute of Medicine study concluded that although innovative surveillance methods might be increasingly helpful in the detection and monitoring of outbreaks, a balance is needed between strengthening proven approaches (e.g., diagnosis of infectious illness and strengthening the liaison between clinical-care providers and health departments) and the exploration and evaluation of new approaches (17). Guidance for the evaluation of surveillance systems for outbreak detection is evolving, and advances in the understanding of system and outbreak characteristics are needed to improve performance metrics. For example, research is needed to understand the personal health and clinical health-care behaviors that might serve as early indicators of priority diseases; analytic methods are needed to improve pattern recognition and to integrate multiple streams of data; a shared vocabulary is needed for describing outbreak conditions, managing text-based information, and supporting case definitions; and evaluation research is needed, including the cost-effectiveness of different surveillance models for early detection, both in real-life comparisons and in simulated data environments, to characterize the size and nature of epidemics that can be detected through innovative surveillance approaches. Pending more robust measures of system performance, the goal of this framework is to improve public health surveillance systems for early outbreak detection by providing practical guidance for their evaluation.
This report includes contributions by Daniel S. Budnitz, M.D., National Center for Injury Prevention and Control; Richard L. Ehrenberg, M.D., National Institute for Occupational Safety and Health; Timothy Doyle, Robert R. German, Dr.P.H., Timothy A. Green, Ph.D., Samuel L. Groseclose, D.V.M., Division of Public Health Surveillance and Informatics; Denise Koo, M.D., Division of Applied Public Health Training; Carol A. Pertowski, M.D., Stephen B. Thacker, M.D., Epidemiology Program Office; José G. Rigau-Pérez, M.D., National Center for Infectious Diseases, CDC, Atlanta, Georgia; Melvin A. Kohn, M.D., State Epidemiologist, Oregon Department of Human Services, Portland; Steven C. Macdonald, Ph.D., Office of Epidemiology, Washington State Department of Health, Olympia; Nkuchia M. M'ikanatha, Dr.P.H., Pennsylvania Department of Health, Harrisburg; Kelly Henning, M.D., New York City Department of Health and Mental Hygiene; Dan E. Peterson, M.D., Cereplex, Inc., Gaithersburg, Maryland; Michael L. Popovich, Scientific Technologies Corporation, Tucson, Arizona; Scott F. Wetterhall, M.D., DeKalb County Board of Health, Decatur, Georgia; and Christopher W. Woods, M.D., Division of Infectious Diseases, Duke University Medical Center, Durham, North Carolina.
CDC Evaluation Working Group on Public Health Surveillance Systems For Early Detection of Outbreaks
Chair: Dan Sosin, M.D., Division of Public Health Surveillance and Informatics, Epidemiology Program Office, CDC.