Appendix A: Statistical Considerations
This section provides general guidance regarding epidemiologic and descriptive statistical methods most commonly used to assess occurrences of cancer. Frequencies, proportions, rates, and other descriptive statistics are useful first steps in evaluating the suspected unusual pattern of cancer. These statistics can be calculated by geographical location (e.g., census tracts) and by demographic variables such as age category, race, ethnicity, and sex. Comparisons can then be made across different stratifications using statistical summaries such as ratios.
Standardized Incidence Ratio
The standardized incidence ratio (SIR) is often used to assess whether there is an excess number of cancer cases, considering what is “expected” to occur within an area over time given existing knowledge of the type of cancer and the local population at risk. The SIR is a ratio of the number of observed cancer cases in the study population compared to the number that would be expected if the study population experienced the same cancer rates as a selected reference population. Typically, the state as a whole is used as a reference population. The equation is as follows:
SIR = O / E

where O is the number of observed cancer cases and E is the expected number of cancer cases, i.e., the number of cases that would occur if the study area experienced the same cancer rates as the selected reference population.
Adjusting for Factors
The SIR can be adjusted for factors such as age, sex, race, or ethnicity, but it is most commonly adjusted for differences in age between two populations. In cancer analyses, adjusting for age is important because age is a risk factor for many cancers, and the population in an area of interest could be, on average, younger or older than the reference population (43,44). In these instances, comparing the crude counts or rates would present a biased comparison.
For more guidance, this measure is explained in many epidemiologic textbooks, sometimes under the standardized mortality ratio, which uses the same method but measures mortality rather than incidence (29,30,45–49). Two methods are generally used for standardization: an indirect method and a direct method. An example of one method is shown below; discussion of other methods is provided in several epidemiologic textbooks (29) and reference manuals (50).
An example is provided in the table below, adjusting for age groups. The second column, denoted with an “O,” is the observed number of cases in the area of interest, which in this example is a particular county within the state. The third column shows the population totals for each age group within the county of interest, designated as “A.” The state age-specific cancer rates are shown in the fourth column, denoted as “B.” To get the expected number of cases in the fifth column, A is multiplied by B for each row. The total observed cases and the total expected cases are then summed across all age groups.
Age Group | Observed Number of Cases in County (O)* | County of Interest Population (A) | State Age-Specific Cancer Rate (B)† | Expected Cancer Cases (A × B = E)
*Number of cases in a specified time frame.
† Number of cases in the state divided by the state population for the specified time frame. Rates are typically expressed per 100,000 or 1,000,000 population.
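As a sketch of the calculation above, the age-specific multiplication and summation can be carried out in a few lines of Python using only the standard library. The age groups, county counts, and state rates below are hypothetical values invented purely for illustration; they are not data from any actual investigation.

```python
# Sketch of indirect age standardization with hypothetical inputs.
county_observed = {"0-39": 120, "40-64": 280, "65+": 250}            # O (hypothetical)
county_population = {"0-39": 60000, "40-64": 40000, "65+": 15000}    # A (hypothetical)
state_rate_per_100k = {"0-39": 180.0, "40-64": 700.0, "65+": 1800.0} # B (hypothetical)

# Expected cases per age group: E = A x B (rates are per 100,000 population)
expected = {g: county_population[g] * state_rate_per_100k[g] / 100_000
            for g in county_population}

total_observed = sum(county_observed.values())
total_expected = sum(expected.values())
sir = total_observed / total_expected
print(f"O = {total_observed}, E = {total_expected:.1f}, SIR = {sir:.2f}")
# O = 650, E = 658.0, SIR = 0.99
```

Dividing by 100,000 converts the state rates back to per-person rates before multiplying by the county population, matching the rate convention in the table footnote.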
The number of observed cancer cases can then be compared to the expected. The SIR is calculated using the formula below.
SIR = O / E

where O is the number of observed cancer cases and E is the expected number of cancer cases.
For example, with 650 observed cancer cases and 656 expected cancer cases:

SIR = 650 / 656 = 0.99
A confidence interval (CI) is one of the most important statistics to calculate because it conveys both the statistical significance and the precision of the estimate. The narrower the confidence interval, the more precise the estimate (30).
A common way of calculating confidence intervals for the SIR is shown below (30):
95% CI = (√O ± 1.96 / 2)² / E

where the plus sign gives the upper 95% confidence limit and the minus sign gives the lower 95% confidence limit.
Using the example above produces this result:
Lower 95% confidence limit = (√650 − 1.96 / 2)² / 656 = 0.92
Upper 95% confidence limit = (√650 + 1.96 / 2)² / 656 = 1.07

The 95% CI for the SIR of 0.99 is therefore 0.92 to 1.07.
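The same arithmetic can be checked with a few lines of Python using only the standard library; the observed and expected counts are those from the example above.

```python
import math

# 95% CI for the SIR using the square-root approximation,
# with the example values: O = 650 observed, E = 656 expected.
O, E = 650, 656
z = 1.96  # corresponds to a 95% confidence level

sir = O / E
lower = (math.sqrt(O) - z / 2) ** 2 / E
upper = (math.sqrt(O) + z / 2) ** 2 / E
print(f"SIR = {sir:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
# SIR = 0.99, 95% CI = (0.92, 1.07)
```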
If the confidence interval for the SIR includes 1.0, the SIR is not considered statistically significant. However, there are many considerations when using the SIR. Because the statistic can be affected by small case counts, the proportion of the population within an area of interest, and other factors, the significance of the SIR should not be used as the sole metric for deciding whether further assessment is warranted in the investigation of unusual patterns of cancer. Additionally, when samples are small, exact statistical methods such as Fisher’s exact test, which compute probabilities directly from the data rather than relying on large-sample approximations such as the chi-square test, can be considered. These calculations can be performed using software such as R, Microsoft Excel, SAS, and Stata (49). A few additional topics regarding the SIR are summarized below.
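As one illustration of an exact small-count method (a different exact method from those named above), a one-sided Poisson probability of observing at least O cases when E are expected can be computed directly from the Poisson distribution. The observed and expected counts below are hypothetical, and the Poisson assumption for case counts is itself an assumption of this sketch.

```python
import math

# Exact one-sided Poisson probability of observing o or more cases
# when e cases are expected -- a small-count alternative sketch.
def poisson_sf(o, e):
    """P(X >= o) for X ~ Poisson(e), computed from the exact pmf."""
    return 1.0 - sum(math.exp(-e) * e**k / math.factorial(k) for k in range(o))

# Hypothetical small-area example: 8 observed cases, 3.2 expected.
p = poisson_sf(8, 3.2)
print(f"P(X >= 8 | E = 3.2) = {p:.3f}")
# P(X >= 8 | E = 3.2) = 0.017
```

A small probability here indicates that the observed count would be unusual if the reference rates applied, subject to all the interpretive caveats discussed above.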
Decisions about the reference population should be made prior to calculating the SIR. The reference population could be people in the surrounding census tracts, other counties in the state, or the entire state. Selecting the appropriate reference population depends on the hypothesis being tested, and the population should be large enough to provide relatively stable reference rates. One issue to consider is the size of the study population relative to the reference population. If the study population is small relative to the overall state population, including the study population in the reference population calculation will not yield substantially different results; however, excluding the study population from the reference population may reduce bias. If the reference population is smaller than the state as a whole (such as another county), it should be similar to the study population in terms of factors that could be confounders (such as age distribution, socioeconomic status, and environmental exposures other than the exposure of interest). However, the reference population should not be selected to be similar to the study population in terms of the exposure of interest. Appropriate comparisons may also better address issues of environmental justice and health equity. Ultimately, careful consideration of the reference population is necessary because the choice can affect the interpretation of findings and can introduce bias or reduce the precision of estimates.
Limitations and Further Considerations for the SIR
One difficulty in community cancer investigations is that the population under study is generally a community or part of a community, leading to a relatively small number of individuals comprising the total population (e.g., small denominator for rate calculations). Small denominators frequently yield wide confidence intervals, meaning that estimates like the SIR may be imprecise (45). Other methods, such as qualitative analyses or geospatial/spatial statistics methods, can provide further examination of the cancer and area of concern to better discern associations. Further epidemiologic studies may help calculate other statistics, such as logistic regression or Poisson regression. These methods are described in Appendix B. Other resources can provide additional guidance on use of p-values, confidence intervals, and statistical tests (29,30,49,51,52).
Alpha, Beta, and Statistical Power
Another important consideration in community cancer investigations is the types of errors that can occur during hypothesis testing and the related alpha, beta, and statistical power for the investigation. A type I error occurs when the null hypothesis (Ho) is rejected but actually true (e.g., concluding that there is a difference in cancer rates between the study population and the reference population when there is actually no difference). The probability of a type I error is often referred to as alpha or α (53).
α = Probability(reject Ho | Ho is true)

That is, alpha is the probability of rejecting the null hypothesis when the null hypothesis is true, also known as the type I error rate or false-positive rate.
A type II error occurs when the null hypothesis is not rejected and it should have been (e.g., concluding that there is no difference in cancer rates when there actually is a difference). The probability of a type II error is often referred to as beta or β.
β = Probability(do not reject Ho | Ho is false)

That is, beta is the probability of failing to reject the null hypothesis when the null hypothesis is false, also known as the type II error rate or false-negative rate.
Power is the probability of rejecting the null hypothesis when the null hypothesis is actually false (e.g., concluding there is a difference in cancer rates between the study population and reference population when there actually is a difference). Power is equal to 1-beta. Power is related to the sample size of the study—the larger the sample size, the larger the power. Power is also related to several other factors including the following:
- The size of the effect (e.g., rate ratio or rate difference) to be detected
- The probability of incorrectly rejecting the null hypothesis (alpha)
- Other features related to the study design, such as the distribution and variability of the outcome measure
As with other epidemiologic analyses, in community cancer investigations, a power analysis can be conducted to estimate the minimum number of people (sample size) needed in a study to detect an effect (e.g., a rate ratio or rate difference) of a given size with a specified level of power (1 − beta) and a specified probability of rejecting the null hypothesis when the null hypothesis is true (alpha), given an assumed distribution for the outcome. Typically, a power value of 0.8 (equivalent to a beta value of 0.2) and an alpha value of 0.05 are used. An alpha value of 0.05 corresponds to a 95% confidence interval. Selecting an alpha value larger than 0.05 (e.g., 0.10, corresponding to a 90% confidence interval) increases the possibility of concluding that there is a difference when there is actually no difference (a type I error). Selecting a smaller alpha value (e.g., 0.01, corresponding to a 99% confidence interval) decreases that risk and is sometimes considered when many SIRs are computed, because one would expect some statistically significant apparent associations to occur just by chance. As the number of SIRs examined increases, the number of SIRs that will be statistically significant by chance alone also increases (if alpha is 0.05, then 5% of the results are expected to be statistically significant by chance alone). Alternatively, one may keep the conventional alpha value and consider this fact when interpreting results, rather than using a lower alpha value (54). Decreasing the alpha value will also decrease the power to detect differences between the population of interest and the reference population.
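The expected number of chance findings described above can be illustrated with a short simulation. Under the null hypothesis, each test’s p-value is uniformly distributed, so about n_tests × alpha results will appear significant by chance; the number of tests and simulation settings below are arbitrary choices for illustration.

```python
import random

# Under the null hypothesis (no true excess anywhere), each test's
# p-value is uniform on (0, 1), so roughly n_tests * alpha results
# are "significant" by chance. n_tests and n_sims are arbitrary here.
random.seed(0)
alpha, n_tests, n_sims = 0.05, 20, 10_000

false_positives = [
    sum(random.random() < alpha for _ in range(n_tests))
    for _ in range(n_sims)
]
mean_fp = sum(false_positives) / n_sims
print(f"average chance-significant results out of {n_tests} tests: {mean_fp:.2f}")
# Close to n_tests * alpha = 1.0
```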
In many investigations of suspected unusual patterns of cancer, the number of people in the study population is determined by factors that may prevent the selection of a sample size sufficient to detect statistically significant differences. In these situations, a power analysis can be used to estimate the power of the study for detecting a difference in rates of a given magnitude. This information can be used to decide if or what type of statistical analysis is appropriate. Therefore, the results of a power calculation can be informative regarding how best to move forward.
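A simulation-based power estimate of the kind described above can be sketched in a few lines. The detectable rate ratio, expected count, test, and number of simulations below are all assumptions made for illustration; in practice these inputs would come from the investigation at hand, and dedicated software would typically be used.

```python
import math
import random

# Simulation-based power sketch under assumed, illustrative inputs:
# estimate the power to detect a true rate ratio of 1.5 when E = 10
# cases are expected under the null, using an exact one-sided Poisson
# test at alpha = 0.05. All numbers here are hypothetical.
random.seed(1)

def poisson_sf(o, e):
    """P(X >= o) for X ~ Poisson(e), from the exact pmf."""
    return 1.0 - sum(math.exp(-e) * e**k / math.factorial(k) for k in range(o))

def sample_poisson(mean):
    """Draw one Poisson variate by Knuth's multiplication method."""
    limit, k, prod = math.exp(-mean), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

E, rate_ratio, alpha, n_sims = 10.0, 1.5, 0.05, 5_000
rejections = sum(
    poisson_sf(sample_poisson(E * rate_ratio), E) < alpha
    for _ in range(n_sims)
)
power = rejections / n_sims
print(f"estimated power ~ {power:.2f}")
```

With these assumed inputs the estimated power falls well below the conventional 0.8 target, which is the kind of result that would prompt the reassessment of analytic options described above.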
Additional Contributing Authors:
Andrea Winquist, Angela Werner