Radiographic Classification: Contested Proceedings
The setting of contested proceedings presents special challenges to obtaining accurate chest radiograph classifications. The polarized interests of conflicting parties create a situation where diligence and special care are needed to ensure that classifications are accurate. As in other settings, it is important to remember that chest radiograph findings alone are insufficient for the diagnosis of pneumoconiosis. Other data, such as the medical and occupational history, the physical examination, additional types of chest imaging, various laboratory tests, and biopsy results should also be considered, as available. It should be noted that the presence of a diagnosis does not necessarily imply functional impairment. The American Medical Association’s Guides to the Evaluation of Permanent Impairment, Sixth Edition provides useful guidelines for assessing the presence and severity of impairment.
The International Labour Office (ILO) recognizes the limitations of using the ILO Classification System to make decisions for awarding compensation. The 2011 ILO Classification Guidelines state explicitly that classification "does not imply legal definitions of pneumoconiosis for compensation purposes and does not set or imply a level at which compensation is payable" (ILO 2011). Despite these cautions, ILO classifications that fit certain definitions of abnormality are frequently considered in decisions concerning compensation awards. Parenchymal abnormalities, in particular small opacity profusion classifications of 1/0 or greater, are frequently considered to be consistent with pneumoconiosis in compensation proceedings. Pleural abnormalities can also be used to document the presence of adverse outcomes of occupational dust exposure. Use of standardized ILO classifications in contested proceedings helps to assure that chest radiographs are evaluated in a way that is fair, consistent, and reproducible across geography and time.
The environment in contested proceedings is often adversarial. Unfortunately, the competing desires of the contending parties for a favorable outcome can result in pressure for classifications biased in opposite directions. Bias can occur in various ways. Classifications of chest radiographs made with knowledge of whether the classification is for a plaintiff or a defendant, or with knowledge of individual or group data on exposures or health status, can lead to results that favor reporting the presence or absence of abnormality. In addition, selection of readers with known or suspected high or low classification tendencies, payment based on outcome, and lack of quality assurance can all result in bias.
Owing to the pressures involved in contested proceedings, diligence and special care are needed to ensure that classifications are not biased. However, acquisition of reliable classifications is possible, while at the same time ensuring that the process is fair to all parties.
NIOSH has prepared ethical guidelines that should be considered when readers classify radiographs in contested settings. The American Medical Association (AMA) and the American College of Radiology (ACR) have published guidelines for physicians serving as expert witnesses (ACR 2007, AMA H-265.994, AMA E-9.07). All of these guidelines emphasize the need to be impartial, objective, and unbiased, and require that testimony be scientifically valid and able to withstand peer review.
Use of the ILO Classification System provides an accepted means of standardizing disease assessment – a necessary condition for ensuring fairness and equity.
Remuneration that is based on individual classification outcomes or on the overall level of reported abnormality has the obvious potential to cause bias.
To maintain quality and avoid bias it is necessary that readers have a high level of knowledge and skills relating to the ILO classification and pneumoconioses (e.g., B Readers). Reader selection founded on known or suspected reading tendencies will obviously lead to bias. To avoid such bias, it is best that readers be selected randomly from the largest pool of available B Readers. Precise documentation of the reader selection procedures for all classifications is necessary to permit assessment of the reader selection methodology.
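One way to implement and document random selection from the B Reader pool is sketched below. This is an illustrative example, not a prescribed procedure; the function name and reader IDs are hypothetical. Recording the seed used for the draw makes the selection reproducible, which supports the documentation requirement described above.

```python
# Hypothetical sketch of documented random reader selection from a roster
# of available B Readers, identified here only by ID. The recorded seed
# makes the draw reproducible so the selection procedure can be audited.
import random

def select_readers(b_reader_pool, n_readers, seed):
    """Randomly draw n_readers from the pool.

    The seed should be recorded as part of the documentation of the
    reader selection procedure.
    """
    rng = random.Random(seed)           # dedicated, seeded generator
    return rng.sample(sorted(b_reader_pool), n_readers)
```

Sorting the pool before sampling ensures that the same seed yields the same draw regardless of the order in which the roster was assembled.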
To avoid any implication of bias, it is necessary to specify from the outset the number of readers that will be used. The practice of undertaking serial classifications until a classification is obtained that suits a particular viewpoint is clearly inappropriate. Rather, for attainment of reliable radiograph classification, a minimum of two independent classifications by readers selected at the outset is advisable, with a third required if a certain level of disagreement is encountered, as described below. In order to derive fair and consistent summary classifications from the individual independent classifications, it is necessary to specify the summarization procedures beforehand:
- Small opacities: Summarization algorithms must recognize the need to maximize the reliability of the final determination around the legal threshold of abnormality. For instance, in situations where the legal threshold of abnormality is category 1/0, the following algorithm might be appropriate: When the first two independent classifications both indicate 1/0 or greater profusion or both indicate 0/1 or lower, take the higher of the two profusions as the final summary classification. Otherwise, if one classification is 0/1 or lower and the other is 1/0 or greater, obtain a third independent classification and take the median of the three as the final summary classification.
- Large opacities: Summarization algorithms must recognize the need to maximize the reliability of the final determination around the legal threshold of abnormality. For instance, in situations where the legal threshold of abnormality is presence of large opacities, the following algorithm might be appropriate: When the first two independent classifications of a radiograph both identify large opacities, take the higher of the two large opacity categories as the final summary classification. When only one of the first two classifications identifies a large opacity and the other identifies coalescence of small opacities (symbol “ax”), take the category of large opacity as the summary classification. Otherwise, obtain a third independent classification, if not already done, and take the median of the three large opacity categories as the summary classification.
- Pleural abnormalities: Summarization algorithms must recognize the need to maximize the reliability of the final determination around the legal threshold of abnormality. For instance, in situations where the legal threshold of abnormality is presence of pleural abnormalities, the following algorithm might be appropriate: When two or more independent classifications of a radiograph find the presence of pleural abnormalities with any agreement on side (left or right) and location (diaphragm, face on, profile, or other site), take the final summary classification to be presence of pleural abnormality on the side(s) and location(s) where there was agreement.
- Other abnormalities (“obligatory symbols”): Include each obligatory symbol recorded in two or more classifications in the final summary classification. When both large opacities and the symbol “ax” are reported by any reader, include “ax” in the summary classification.
- Film quality: When a reader classifies a radiograph as unreadable, a further classification by a reader selected from the pool of available readers is appropriate. To provide a comprehensive summary indication of the quality of the radiograph, it is necessary to take all assessments into account. The average of all quality scores from each independent classification (with “unreadable” [U/R] scored as 4) provides an overall index reflecting the extent of the reliability of the summary classification for the radiograph.
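The small-opacity algorithm above turns on the ordinal ILO profusion scale. A minimal sketch, assuming a legal threshold of category 1/0 and the standard 12-point scale, follows; the function names are illustrative, not part of the ILO guidelines:

```python
# Illustrative sketch of the small-opacity summarization algorithm and the
# film-quality index described above. Assumes a legal threshold of 1/0.
# Function names are hypothetical, not defined by the ILO guidelines.

# The 12-point ILO profusion scale, in increasing order of abnormality.
PROFUSION_SCALE = ["0/-", "0/0", "0/1", "1/0", "1/1", "1/2",
                   "2/1", "2/2", "2/3", "3/2", "3/3", "3/+"]
RANK = {cat: i for i, cat in enumerate(PROFUSION_SCALE)}
THRESHOLD = RANK["1/0"]  # assumed legal threshold of abnormality

def summarize_small_opacities(first, second, get_third_reading):
    """Summarize two independent profusion classifications.

    get_third_reading() is called only when the first two classifications
    straddle the threshold (one is 0/1 or lower, the other 1/0 or greater).
    """
    r1, r2 = RANK[first], RANK[second]
    # Both at/above or both below the threshold: take the higher profusion.
    if (r1 >= THRESHOLD) == (r2 >= THRESHOLD):
        return PROFUSION_SCALE[max(r1, r2)]
    # Otherwise obtain a third independent classification and take the
    # median of the three.
    r3 = RANK[get_third_reading()]
    return PROFUSION_SCALE[sorted([r1, r2, r3])[1]]

def quality_index(scores):
    """Average of all quality scores, with 'U/R' (unreadable) scored as 4."""
    numeric = [4 if s == "U/R" else s for s in scores]
    return sum(numeric) / len(numeric)
```

The large-opacity and pleural rules could be expressed analogously; what matters for fairness is that the algorithm, like the reader count, is fixed before any classifications are obtained.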
When classifying radiographs, readers must not consider any other information about the individuals being studied, including medical data, exposure information, the context and consequences of the classification, or other readers' interpretations. Awareness of supplementary details specific to the individuals, the group, or the situation can introduce bias into results.
The need for accurate, unbiased classification lies at the core of classification in contested proceedings. Standardized and carefully documented quality assurance procedures are advisable, especially for entities involved in obtaining many radiograph classifications per year (e.g., classifications of 100 chest radiographs or more). It is best that readers know that quality assurance procedures are being implemented, as this alone provides an incentive for accurate classification. Concurrent quality assurance, using unidentified radiographs representing known (i.e., previously classified by expert readers) positive and negative stages of the abnormality under consideration, provides the optimal approach to ensuring quality. The resulting information on possible over- or under-classification tendencies can be used by the entity for which the radiographs are being classified in several ways. These range from simply informing readers of their own classification levels to removing specific readers from the pool based on significant and documented evidence.
Whenever medical findings are pertinent to maintaining and protecting health, it is ethically necessary to inform examined individuals of findings from their individual chest radiograph, including all information from the individual and summary classifications. Documentation of efforts to notify individuals is advisable. Medical follow-up should be recommended where appropriate. To further disease identification and to promote prevention, reporting of diagnosed or suspected cases of pneumoconiosis to state public health organizations is required in some states.
American College of Radiology. Practice Guideline on the Expert Witness in Radiology. Revised 2007 (Res. 40). Effective 10/1/07
American Medical Association. H-265.994, Expert Witness Testimony. (Sub. Res. 223, A-92; Appended: Sub. Res. 211, I-97; Reaffirmation A-99)
American Medical Association. Opinion 9.07 - Medical Testimony
Sheers G, Rossiter CE, Gilson JC, et al. UK naval dockyards asbestos study: radiological methods in the surveillance of workers exposed to asbestos. Br J Ind Med 1978; 35:195-203.
Fay JWJ, Rae S. The Pneumoconiosis Field Research of the National Coal Board. Ann Occup Hyg 1959; 1:149-61.
Hurley JF, Burns J, Copland L, et al. Coalworkers’ simple pneumoconiosis and exposure to dust at 10 British coalmines. Br J Ind Med 1982; 39:120-7.