Centers for Disease Control and Prevention
MMWR


Assessing the National Electronic Injury Surveillance System -- Cooperative Adverse Drug Event Surveillance Project --- Six Sites, United States, January 1--June 15, 2004

Adverse drug events (ADEs) occur when therapeutic drugs have injurious effects; existing systems for national ADE surveillance are limited, and national estimates of ADE incidence are problematic (1). In 2003, CDC, in collaboration with the Consumer Product Safety Commission (CPSC) and the Food and Drug Administration (FDA), created the National Electronic Injury Surveillance System -- Cooperative Adverse Drug Event Surveillance (NEISS-CADES) project by adding active surveillance of ADEs to the National Electronic Injury Surveillance System -- All Injury Program (NEISS-AIP). Because ADEs can be more difficult to identify than other injuries, an independent chart review in a sample of six NEISS-CADES hospitals was conducted to evaluate the sensitivity and predictive value positive (PVP) of ADE identification. This report describes the results of that evaluation, which indicated that although PVP for ADEs was high, the sensitivity was low, particularly for certain types of ADEs. As a result of these findings, additional training on identifying and reporting ADEs was initiated for all NEISS-CADES hospital coders. As more persons in the United States use drug therapies, active, postmarketing surveillance of ADEs can help identify safety problems and guide prevention efforts.

NEISS-CADES is a nationally representative subsample of 64 of 98 NEISS hospitals selected as a stratified probability sample of U.S. hospitals with a minimum of six beds and a 24-hour emergency department (ED) (2). At each of the 64 hospitals, coders trained by CPSC and CDC staff review all ED charts for ADEs. Coders identify cases by looking for keywords and diagnoses, such as "medication reaction," "overdose," and "adverse effect," and record information into a standardized, computer-based data-entry system. Cases are defined as those occurring in persons who sought ED care for injuries linked by the treating physician to the outpatient use of a drug or drug-specific adverse effects. This case definition excludes drug withdrawal, drug abuse, self-harm attempts, lack of therapeutic effect, and effects of medications administered in the ED. Drugs include prescription medications, over-the-counter medications, vaccines, vitamins, and nutritional supplements.

For this evaluation, a convenience sample of six NEISS-CADES hospitals was selected from 14 hospitals with scheduled site visits in the summer of 2004 and the capability to provide a sufficient number of randomly selected medical charts for review. Hospitals were selected to represent a range of ADE reporting (0.2%--1.7% of ED visits) and a range of hospital sizes* (three very large, one large, one medium, and one small hospital). Large metropolitan (one hospital), smaller metropolitan (three hospitals), and rural areas (two hospitals), and five of nine U.S. census geographic divisions were represented. The sample did not include any pediatric specialty hospitals. At each hospital, ED charts were retrieved for review from a list of randomly selected dates during the period January 1--June 15, 2004. Up to 1,200 charts or up to 20 days of charts were retrieved on the basis of the ED volume of each hospital. Because of limitations in medical record archiving systems, charts were not retrievable for six (10%) of 61 dates initially selected, and alternate dates were selected as substitutes. Of 4,719 ED visits identified for the dates selected, charts for 4,561 (97%) visits were available for review.

Chart reviewers used the same standardized methodology as coders. Each available chart was reviewed by two reviewers experienced in medical record abstraction and ADE surveillance (i.e., an epidemiologist with training in medical terminology and a physician board-certified in internal medicine) independent of each other and of the NEISS hospital coder. For ADE cases, each reviewer recorded event descriptions and associated drugs. Conflicting reviews were resolved by a third person (a physician board-certified in internal and emergency medicine). A sample kappa statistic was calculated by using statistical software to assess agreement of case identification between the two primary reviewers (3). Using the review process described in this report as the "gold standard," sensitivity (i.e., the proportion of cases detected by the surveillance system) and PVP (i.e., the proportion of coder-reported cases that actually had a drug-related event) for the six-hospital composite were calculated by using ratio estimation (4). These statistics were calculated as ratio estimates, assuming a stratified cluster sampling design, with hospitals forming strata and dates forming clusters. The charts reviewed from each ED were assigned weights according to the fraction of dates reviewed out of the January 1--June 15 sampling frame and the fraction of cases for which charts were available for each date reviewed.
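The weighted estimates described above are ratios of weighted sums taken over the sampled dates (clusters) within each hospital (stratum). The point estimate can be sketched as follows; all values and field names here are hypothetical, and the variance estimation needed for the published confidence intervals is not shown.

```python
# A minimal sketch of the weighted ratio estimate of sensitivity under
# a stratified cluster design: hospitals are strata, dates are clusters.
# Real weights reflect the fraction of dates reviewed out of the
# sampling frame and the fraction of charts retrievable per date.
def weighted_sensitivity(clusters):
    """clusters: one dict per sampled date, giving that date's chart
    weight, the ADE cases the coder detected, and the ADE cases found
    by expert review."""
    numerator = sum(c["weight"] * c["detected"] for c in clusters)
    denominator = sum(c["weight"] * c["cases"] for c in clusters)
    return numerator / denominator

# Hypothetical two-date example:
example = [
    {"weight": 8.3, "detected": 1, "cases": 3},
    {"weight": 5.5, "detected": 2, "cases": 4},
]
print(round(weighted_sensitivity(example), 2))  # → 0.41
```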

A total of 68 ADE cases were identified by expert review of the 4,561 ED charts (weighted estimate: 1.4%) (Table). Ten cases were initially identified by only one of two reviewers (seven identified by one reviewer and three identified by the other), with a sample kappa statistic of 0.92 (95% confidence interval [CI] = 0.87--0.97), indicating a high level of nonchance agreement between reviewers. The median age of patients with ADEs was 57 years (range: 15 months--100 years), and 53% were female.
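The reported kappa statistic can be reproduced from the counts above. With 68 adjudicated cases and 10 one-reviewer-only identifications, the two-by-two agreement table is 58 charts flagged by both reviewers, 7 by the first only, 3 by the second only, and the remaining 4,493 of 4,561 charts flagged by neither:

```python
# Cohen's kappa from a 2x2 agreement table of two reviewers'
# independent ADE case identification.
def cohens_kappa(both_yes, a_only, b_only, both_no):
    n = both_yes + a_only + b_only + both_no
    p_observed = (both_yes + both_no) / n
    # Expected chance agreement from the marginal totals.
    a_yes, b_yes = both_yes + a_only, both_yes + b_only
    p_expected = (a_yes * b_yes + (n - a_yes) * (n - b_yes)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

print(round(cohens_kappa(58, 7, 3, 4493), 2))  # → 0.92
```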

A total of 29 ADE cases had been reported to NEISS-CADES before the charts were reviewed. Of these, 25 were among the 68 ADE cases detected by the reviewers, whereas the remaining four were false-positive cases in which an injury was attributed to a drug in the chief complaint section of the chart but was not confirmed elsewhere in the chart. The weighted estimate of coder sensitivity for ascertaining ADE cases was 0.33 (CI = 0.23--0.44). The weighted estimate of PVP for coder-reported ADEs was 0.92 (CI = 0.85--1.00). The relatively low overall coder sensitivity was attributed in part to low sensitivity for detecting cases of hypoglycemia associated with diabetes agents (three of 16 detected) and bleeding associated with anticoagulants (e.g., warfarin and heparin) (one of nine detected). When a narrower case definition excluding these two types of cases was considered, weighted sensitivity increased to 0.45 (CI = 0.31--0.59), and weighted PVP was 0.94 (CI = 0.85--1.00). As a result of these findings, NEISS-CADES coders are now provided a streamlined flow sheet to identify ADEs and training specifically focused on identifying unintentional overdoses of diabetes agents and anticoagulants.
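The crude (unweighted) versions of these measures follow directly from the counts above; the published estimates of 0.33 and 0.92 are weighted to account for the sampling design, so they differ from these simple ratios:

```python
# Crude sensitivity and PVP from the raw counts in this report
# (the published figures are weighted ratio estimates).
reviewer_cases = 68   # ADE cases found by expert chart review
coder_reported = 29   # cases reported to NEISS-CADES by coders
true_positives = 25   # coder-reported cases confirmed by review

sensitivity = true_positives / reviewer_cases  # 25/68 ≈ 0.37
pvp = true_positives / coder_reported          # 25/29 ≈ 0.86
print(f"sensitivity={sensitivity:.2f}, PVP={pvp:.2f}")
```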

Reported by: T Nelson, Div of Hazard and Injury Data Systems, Consumer Product Safety Commission. DS Budnitz, MD, KN Weidenbach, MPH, Div of Injury and Disability Outcomes and Programs; SR Kegler, PhD, Office of Statistics and Programming, National Center for Injury Prevention and Control; DA Pollock, MD, Div of Healthcare Quality Promotion, National Center for Infectious Diseases; AB Mendelsohn, PhD, EIS Officer, CDC.

Editorial Note:

The goal of this evaluation was to assess and improve the usefulness of NEISS-CADES as an ongoing system to provide national estimates of ADEs. Evaluation of new surveillance systems such as NEISS-CADES is a challenging but important task for appropriately interpreting and applying public health surveillance data. If the hospitals in this investigation are representative of other NEISS-CADES hospitals, the PVP of 0.92 indicates that the ADE cases reported in NEISS-CADES generally represent actual cases. The low sensitivity for cases involving overdoses of insulin and anticoagulants suggests that national estimates of these events are likely to be lower than the actual number and highlights areas on which to focus interventions. After these interventions are implemented, sensitivity and PVP should be reevaluated to determine whether sensitivity has improved.

The sensitivity of coder case identification reported in this investigation might appear low (0.33 overall; 0.45 if two specific types of ADEs are excluded); however, this result should be considered in the context of other available surveillance data. The most commonly used national surveillance system for ADEs, the FDA Adverse Event Reporting System (AERS), is a passive surveillance system estimated to capture 1%--38% of serious adverse drug reactions and influenced by such factors as length of time the drug has been on the market and media attention (5). In addition, AERS was designed to capture newly recognized, unlabeled, adverse events and not designed to capture common ADEs from errors or overprescribing of older drugs, which likely contribute to the greatest public health burden (6). The National Hospital Ambulatory Medical Care Survey (NHAMCS) (7) has been used to describe outpatient adverse reactions, and the Drug Abuse Warning Network (DAWN) recently modified data-collection procedures to include adverse reactions (8); similar assessment of these systems might be appropriate.

The findings in this report are subject to at least three limitations. First, this evaluation was limited to review of available ED patient charts from a sample of days in six of the 64 NEISS-CADES hospitals. These hospitals were chosen as a convenience sample stratified by ADE reporting and size; therefore, although the characteristics of the ADE cases reported are similar to those from other hospitals (9), the estimates of sensitivity and PVP might not apply to other hospitals. Second, identification of ADEs by chart review has lower sensitivity for some types of ADEs when compared with other methods, such as screening computer-generated laboratory signals (10); however, chart review remains the most feasible method of national surveillance. Finally, surveillance of outpatient ADEs based on ED data does not capture ADEs that were not diagnosed and documented by the treating physician, ADEs diagnosed during subsequent hospitalizations, or ADEs treated elsewhere.

Since publication of the Institute of Medicine report, To Err Is Human: Building a Safer Health System, in 1999, considerable attention has been focused on the public health problem of medical injuries and ADEs, especially ADEs that occur in hospitalized patients. However, at least in part because of limited data, the potentially more common problem of ADEs in nonhospitalized persons has not been as fully explored. Nationally representative surveillance data that are both timely and detailed are needed to characterize the public health burden of outpatient ADEs and to help target prevention strategies. NEISS-CADES will continue as a resource for providing ongoing ADE surveillance, and this evaluation will assist in interpretation and use of these public health data.


This report is based, in part, on data contributed by six National Electronic Injury Surveillance System hospitals. T Schroeder, MS, C Irish, R Colucci, Div of Hazard and Injury Data Systems, Consumer Product Safety Commission. R Wagner, R Sattin, Div of Injury and Disability Outcomes and Programs, National Center for Injury Prevention and Control, CDC.


  1. General Accounting Office. Adverse drug events: the magnitude of health risk is uncertain because of limited incidence data. Washington, DC: General Accounting Office; 2000.
  2. US Consumer Product Safety Commission. National Electronic Injury Surveillance System -- All Injury Program sample design and implementation. Washington, DC: US Consumer Product Safety Commission; 2002.
  3. Cohen JA. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 1960;20:37--46.
  4. CDC. Updated guidelines for evaluating public health surveillance systems: recommendations from the guidelines working group. MMWR 2001;50(No. RR-13):17--20.
  5. Trontell AE. How the US Food and Drug Administration defines and detects adverse drug events. Curr Ther Res Clin Exp 2001;62:641--9.
  6. Centers for Education and Research on Therapeutics (CERTs) Risk Assessment Workshop participants. Risk assessment of drugs, biologicals and therapeutic devices: present and future issues. Pharmacoepidemiol Drug Saf 2003;12:653--62.
  7. Burt CW. Emergency health care encounters for adverse effects of medical care. Managed Care Interface 2001;14:39--42.
  8. Substance Abuse and Mental Health Services Administration, Office of Applied Studies. Drug Abuse Warning Network, 2003: interim national estimates of drug-related emergency department visits. Rockville, MD: Substance Abuse and Mental Health Services Administration; 2004. DAWN Series D-26. DHHS publication no. (SMA) 04-3972.
  9. Budnitz DS, Pollock D, Mendelsohn AB, et al. Emergency department visits for outpatient adverse drug events: demonstration for a national surveillance system. Ann Emerg Med 2005;45:197--206.
  10. Field TS, Gurwitz JH, Harrold LR, et al. Strategies for detecting adverse drug events among older persons in the ambulatory setting. J Am Med Inform Assoc 2004;11:492--8.

* Hospital size was defined by number of ED visits per year. Very large hospitals had ≥41,131 visits per year; large hospitals had 28,151--41,130 visits per year; medium hospitals had 16,831--28,150 visits per year; and small hospitals had ≤16,830 visits per year.
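The size categories in this footnote partition the range of annual ED visits; a hypothetical helper encoding the cutoffs makes the boundaries explicit:

```python
# Hypothetical helper encoding the footnote's hospital-size cutoffs
# by annual ED visits; boundary values follow the stated ranges.
def hospital_size(annual_ed_visits):
    if annual_ed_visits >= 41131:
        return "very large"
    if annual_ed_visits >= 28151:
        return "large"
    if annual_ed_visits >= 16831:
        return "medium"
    return "small"

print(hospital_size(30000))  # → large
```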

A large metropolitan area was defined as a metropolitan statistical area (MSA) with a population ≥250,000 in 2003; a small metropolitan area was defined as an MSA with a population <250,000 in 2003; and a rural area was defined as an area outside of any MSA.


Table 1

Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.

References to non-CDC sites on the Internet are provided as a service to MMWR readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. Department of Health and Human Services. CDC is not responsible for the content of pages found at these sites. URL addresses listed in MMWR were current as of the date of publication.

Disclaimer: All MMWR HTML versions of articles are electronic conversions from ASCII text into HTML. This conversion may have resulted in character translation or format errors in the HTML version. Users should not rely on this HTML document, but are referred to the electronic PDF version and/or the original MMWR paper copy for the official text, figures, and tables. An original paper copy of this issue can be obtained from the Superintendent of Documents, U.S. Government Printing Office (GPO), Washington, DC 20402-9371; telephone: (202) 512-1800. Contact GPO for current prices.


Date last reviewed: 4/21/2005



Morbidity and Mortality Weekly Report
Centers for Disease Control and Prevention
1600 Clifton Rd, MailStop E-90, Atlanta, GA 30333, U.S.A


Department of Health and Human Services