
# Appendix A: Technical Notes

### How to Interpret a Confidence Interval

**What is a confidence interval?**
Simply put, a confidence interval is a way of expressing margin of error, a statistic often used in voter polls to indicate the range within which a value is likely to fall (e.g., 30% of voters favor a particular candidate, with a margin of error of plus or minus 3.5%). Similarly, in this report, confidence intervals are used to provide a range that we can be quite confident contains the success rate for a particular clinic during a particular period.
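
As a rough illustration (not taken from the report, which does not state its calculation method), a margin of error for a proportion can be sketched with the standard normal approximation, z·√(p(1−p)/n). The poll sample size of 660 below is hypothetical, chosen only to show how a ±3.5% margin can arise:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p observed in a
    sample of n (normal/Wald approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 30% support measured in a sample of 660 voters.
moe = margin_of_error(0.30, 660)
print(f"margin of error: +/-{moe:.1%}")  # -> +/-3.5%
```

The same formula, applied to a clinic's success rate instead of a poll result, produces the confidence intervals discussed in this appendix.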

**Why do we need to consider confidence intervals if we already know the exact success rates for each clinic in 2010?**
No success rate or statistic is absolute. Suppose a clinic performed 100 cycles among women younger than 35 in 2010 and had a success rate of 20% with a confidence interval of 12%–28%. The 20% success rate tells us that the average chance of success for women younger than 35 treated at this clinic in 2010 was 20%. How likely is it that the clinic could repeat this performance? For example, if the same clinic performed another 100 cycles under similar clinical conditions on women with similar characteristics, would the success rate again be 20%? The confidence interval tells us that the success rate would likely fall between 12% and 28%.
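
The 12%–28% interval in the example above is consistent with the standard normal (Wald) approximation for a proportion. A minimal sketch, assuming that method (the report does not specify which one it uses):

```python
import math

def wald_ci(successes, trials, z=1.96):
    """Approximate 95% confidence interval for a success rate
    (normal/Wald approximation; the report's exact method is unspecified)."""
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p - half, p + half

# 20 live births out of 100 cycles, as in the example above.
lo, hi = wald_ci(20, 100)
print(f"{lo:.0%} to {hi:.0%}")  # -> 12% to 28%
```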

**Why does the size of the confidence interval vary for different clinics?**
The size of the confidence interval gives us a realistic sense of how much confidence we can place in the success rate. If the clinic had performed only 20 cycles instead of 100 among women younger than 35 and still had a 20% success rate (4 successes out of 20 cycles), the confidence interval would be much wider (between 3% and 37%) because the success or failure of each individual cycle carries more weight. For example, if just one more cycle had resulted in a live birth, the success rate would have been substantially higher — 25%, or 5 successes out of 20 cycles. Likewise, if just one more cycle had not been successful, the success rate would have been substantially lower — 15%, or 3 out of 20 cycles. Compare this scenario with the original example of the clinic that performed 100 cycles and had a 20% success rate. If just one more cycle had resulted in a live birth, the success rate would have changed only slightly, from 20% to 21%, and if one more cycle had not been successful, the success rate would have fallen only to 19%. Thus, our confidence in a 20% success rate depends on how many cycles were performed.
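
The effect of cycle count on interval width can be sketched numerically (again assuming the normal/Wald approximation, since the report does not name its method). The same 20% success rate yields a much wider interval at 20 cycles than at 100:

```python
import math

def ci_width(successes, trials, z=1.96):
    """Full width of an approximate 95% CI (Wald) for a success rate."""
    p = successes / trials
    return 2 * z * math.sqrt(p * (1 - p) / trials)

# Same 20% success rate, different numbers of cycles:
print(f"n=20:  width {ci_width(4, 20):.1%}")    # wider interval
print(f"n=100: width {ci_width(20, 100):.1%}")  # narrower interval
# The width shrinks roughly as 1/sqrt(n): 5x the cycles -> ~2.2x narrower.
```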

**Why should confidence intervals be considered when success rates from different clinics are being compared?**
Confidence intervals should be considered because success rates can be misleading. For example, if Clinic A performs 20 cycles in a year and 8 cycles result in a live birth, its live birth rate would be 40%. If Clinic B performs 600 cycles and 180 result in a live birth, the percentage of cycles that resulted in a live birth would be 30%. We might be tempted to say that Clinic A has a better success rate than Clinic B. However, because Clinic A performed few cycles, its success rate would have a wide 95% confidence interval of 18.5%–61.5%. On the other hand, because Clinic B performed a large number of cycles, its success rate would have a relatively narrow confidence interval of 26.2%–33.8%. Thus, Clinic A could have a rate as low as 18.5% and Clinic B could have a rate as high as 33.8% if each clinic repeated its treatment with similar patients under similar clinical conditions. Moreover, Clinic B’s rate is much more likely to be reliable because the size of its confidence interval is much smaller than Clinic A’s.
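
The two-clinic comparison can be reproduced with the same approximation (a sketch under the Wald assumption, not necessarily the report's exact computation — the published bounds for Clinic B differ slightly, likely from a different rounding or method):

```python
import math

def wald_ci(successes, trials, z=1.96):
    """Approximate 95% CI for a success rate (normal/Wald sketch)."""
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p - half, p + half

a = wald_ci(8, 20)     # Clinic A: 40% from 20 cycles
b = wald_ci(180, 600)  # Clinic B: 30% from 600 cycles
print(f"Clinic A: {a[0]:.1%} to {a[1]:.1%}")
print(f"Clinic B: {b[0]:.1%} to {b[1]:.1%}")
# The intervals overlap, so the apparent 10-point gap may not be meaningful.
print("intervals overlap:", a[0] < b[1])
```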

Even though one clinic’s success rate may appear higher than another’s based on the confidence intervals, the intervals are only one indication that the success rate may truly be better. Other factors must also be considered when comparing rates from two clinics. For example, some clinics see more than the average number of patients with difficult infertility problems, whereas others discourage patients with a low probability of success. For more information, see “Important Factors to Consider When Using the Tables to Assess a Clinic.”

### Validation Visits for 2010 ART Data

Site visits to assisted reproductive technology (ART) clinics for validation of 2010 ART data were conducted from April through June 2012. For validation of 2010 data, 35 of the 443 reporting clinics were randomly selected, taking into consideration the number of ART procedures performed at each clinic, certain cycle and clinic characteristics, and whether the clinic had been selected previously. During each validation visit, ART data reported by the clinic to the Centers for Disease Control and Prevention were compared with information documented in medical records.

For each clinic, the fully validated sample included up to 40 ART cycles resulting in pregnancy and up to 20 ART cycles not resulting in pregnancy. In total, 2,070 ART cycles performed in 2010 across the 35 clinics were randomly selected for full validation, along with 135 embryo banking cycles. The full validation included review of 1,352 cycles for which a pregnancy was reported, of which 446 were multiple-fetus pregnancies. In addition, among patients whose cycles were validated, we verified the number of ART cycles performed during 2010. For each of these patients, we compared the total number of ART cycles reported with the total number of ART cycles included in the medical record. If unreported cycles were identified in selected medical records, up to 10 of these cycles were also selected for partial validation.

Findings and discrepancy rates from the 2010 validation visits will be available later this year.