Proceedings of the 52nd Session of the International Statistical Institute, Helsinki, Finland, 1999. Helsinki, Finland: Statistics Finland, August 2001: 329-341.
The United States Office of Management and Budget's Federal Committee on Statistical Methodology (FCSM) has had a leadership role in discussions of the methodology of federal surveys for more than two decades (Gonzales, 1995). In 1996, the FCSM established a subcommittee to review the measurement and reporting of data quality in federal data collection programs. Although data quality is a multidimensional concept that includes accuracy, relevance, timeliness, and accessibility, the subcommittee has focused its discussions on accuracy and on its measurement and presentation. The sources of error that affect survey data quality - sampling error, coverage error, nonresponse error, measurement error, and processing error - and their measurement are described in a number of texts. Kasprzyk and Kalton (1999, 1997) provide a summary of methods used to measure error sources and examples of their implementation in U.S. data collection programs. This paper focuses on another area of the subcommittee's interests - the reporting and presentation of information on sources of error across several dissemination media (short-format publications, analytic publications, and the Internet). The review is based on the work of the FCSM Subcommittee on Data Quality (McMillen and Brady, 1999; Atkinson, Schwanz, and Sieber, 1999; Giesbrecht, Miller, Moriarity, and Ware-Martin, 1999).

Users of survey data need information about a survey's quality to assess its results properly, and standards adopted by many statistical agencies specify that users should be informed of survey quality. Survey quality has many dimensions, however, and measuring and presenting this information is no easy task: report formats, dissemination media, and agency policies and practices all vary. The studies reviewed in this paper illustrate the range of agency practices in reporting information on sources of error in surveys. The studies suggest that U.S.
statistical agencies should not merely define policies and standards for reporting such information but also monitor the implementation of those policies. Some observers argue that a common template for information about error sources in data collection programs would help raise awareness of this need. Others suggest that ongoing data collection programs develop quality profiles - reports that systematically gather information about survey procedures and sources of error. In any case, data users are best served when information about survey procedures and sources of error is readily available to them to aid interpretation of the analysis. Judging from the U.S. experience, greater emphasis on and interest in this topic are desirable.