National TB Indicators Project: Initial Indicators and Performance Targets
Selecting potential objectives: The Workgroup collected a wide range of potential objectives from a variety of sources, including current GPRA and HP2010 goals, California’s Tuberculosis Indicators Project objectives, Aggregate Reports for Tuberculosis Program Evaluation (ARPE) indices, and DTBE Cooperative Agreement goals. The Workgroup decided to focus on only those objectives for which there were reliable data already collected. We recognized that there were many other important program objectives that would be valuable in guiding performance evaluation, but without reliable data available, such objectives would not allow us to measure current performance or future improvement. In total, we considered 28 objectives.
Prioritizing objectives: The Workgroup members cast individual votes for the objectives they felt were most helpful as indicators of good program performance. Each member ranked their top 10 objectives in priority order, with their top pick being given a score of 10 and their 10th pick being given a score of 1. All votes were summed, and the 28 objectives were listed in priority order, from highest to lowest. Based on this prioritization, the Workgroup decided on 16 broad indicators and 24 objectives. Of these, the Workgroup focused on the four highest priority indicators and their 13 corresponding objectives for initial target setting.
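The ranked-vote tally described above can be sketched as a simple Borda-style score. The objective names and ballots below are placeholders for illustration, not the Workgroup's actual votes:

```python
def prioritize(ballots, top_n=10):
    """Sum ranked votes across members: each member's 1st pick scores
    top_n points (10), their 10th pick scores 1; objectives are then
    listed from highest to lowest total score."""
    scores = {}
    for ballot in ballots:
        for rank, objective in enumerate(ballot[:top_n]):
            scores[objective] = scores.get(objective, 0) + (top_n - rank)
    # Highest total first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Two hypothetical members ranking three candidate objectives
ranked = prioritize([["obj A", "obj B", "obj C"],
                     ["obj A", "obj C", "obj B"]])
```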
Setting performance targets: To set performance targets, we first collected baseline data for each of the 13 highest-priority objectives. We collected baseline information for the previous 5 years, where available. In most cases, this baseline information represented annual data from 2000 through 2004. In the case of objective 1 (“increase timely completion of treatment”), surveillance data lag 2 years behind case reporting, so the 5 years from 1998 to 2002 were used. For objectives based on the Aggregate Reports for Tuberculosis Program Evaluation (ARPE), data are available only for the 4 years from 2000 to 2003. For the objective related to culture identification, data were available only for 2003 and 2004.
Once baseline data were collected, we applied an exponential curve-fitting equation to forecast future values, based on the baseline data and the assumption that TB control program performance would remain constant (i.e., programs would not get better or more efficient, nor would they get worse or less efficient). We used the Excel function GROWTH(), which fits the exponential equation y = b*m^x to the baseline data points and returns predicted values (“y”) for future years (“x”), where “m” is the fitted annual growth rate and “b” the fitted constant. We called this forecast of future national performance the “U.S. forecast.”
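As an illustration, the GROWTH() forecast can be reproduced outside Excel by fitting a least-squares line to the logarithms of the baseline values and exponentiating the extrapolation. The baseline numbers below are hypothetical, not actual NTIP surveillance data:

```python
import math

def growth_forecast(years, values, target_year):
    """Mimic Excel's GROWTH(): fit y = b * m**x by least squares on
    ln(y), then extrapolate to target_year under the assumption that
    the baseline trend continues unchanged."""
    logs = [math.log(v) for v in values]
    n = len(years)
    mean_x = sum(years) / n
    mean_ly = sum(logs) / n
    slope = (sum((x - mean_x) * (ly - mean_ly) for x, ly in zip(years, logs))
             / sum((x - mean_x) ** 2 for x in years))
    intercept = mean_ly - slope * mean_x
    return math.exp(intercept + slope * target_year)

# Hypothetical completion-of-therapy percentages for 1998-2002
baseline = growth_forecast([1998, 1999, 2000, 2001, 2002],
                           [80.0, 81.0, 82.5, 83.0, 84.5],
                           2015)
```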
We considered two general approaches to setting performance targets. The first approach was to set future performance targets based on a percentage improvement over our U.S. forecast. Say, for argument's sake, that we decided to set our performance target as 20% better than our forecast. If our forecast indicated that the nation was improving at a rate of 2% per year on completion of therapy, our new target would be to improve at a rate of 2.4% per year. This approach has the advantage of being simple to understand and to calculate, and we could decide how ambitious to make these targets simply by varying the percentage improvement. The biggest disadvantage of this first approach, and the ultimate reason it was rejected, is that any percentage improvement we chose would be arbitrary, and we had no way to know a priori whether a particular percentage improvement was realistic.
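The arithmetic behind this first (rejected) approach is a single multiplication; the rates below are the hypothetical figures from the example above:

```python
forecast_rate = 0.02       # nation improving 2% per year (hypothetical)
margin = 0.20              # chosen improvement over the forecast (arbitrary)
target_rate = forecast_rate * (1 + margin)
# i.e., a target of improving 2.4% per year
```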
The second approach we considered, and ultimately adopted, was to base our future performance target on the actual performance of a well-performing state. Say, for argument's sake, that we select as our benchmark the state that represents the 90th percentile. In other words, if we ordered all 51 reporting sites (the 50 states plus the District of Columbia) from best to worst by their completion-of-therapy results, then counted down 5 reporting sites, we would see which site was the 90th percentile performer. In fact, this process reveals that the 90th percentile site reported completing therapy within 12 months for 93% of its patients in 2002 (the latest year for which completion-of-therapy data are available). Under this approach, we would set 93% as the national performance target for some future year; we selected 2015 as a reasonable target date. This approach had to be modified slightly to accommodate objectives based on declines in TB rates. In the case of rates, we calculated the yearly decline in rates for each reporting site, then selected the 90th percentile performer. The yearly decline of the 90th percentile state for the current year then became the performance target for the nation. To be more technically accurate, the exponential slope of the baseline values reported by the 90th percentile state was used to calculate the expected performance target for the year 2015. Although more difficult to calculate, this approach had the advantage of basing our expectations of future improvement on the actual performance of a specific state. The Workgroup adopted this second approach, selecting as the benchmark the state whose performance represented the 90th percentile of all states reporting data.
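The ranking step can be sketched as follows. This is a minimal illustration, assuming site results are given as a mapping from reporting site to completion-of-therapy percentage; the example values are invented, not actual 2002 data:

```python
def percentile_90_site(results):
    """Order reporting sites from best to worst and return the site
    reached after counting down 10% of them (with 51 sites, count
    down 5), i.e., the 90th percentile performer."""
    ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
    index = round(len(ranked) * 0.10)  # 51 sites -> 5
    return ranked[index]

# Hypothetical usage with invented site results:
# site, value = percentile_90_site({"CA": 91.0, "NY": 94.0, ...})
# `value` would then become the national target for 2015.
```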
On a technical note, for certain objectives it was difficult to determine the rate of improvement for states with low occurrences of the event under observation. To be certain that our performance targets were not skewed by low-occurrence states, we performed a detailed sensitivity analysis, which asked: what is the impact of dropping data from reporting sites that reported <10 cases, <20 cases, and so on, up to <100 cases? In many instances, we were able to show that we could retain all data without affecting the results. In the case of US-born blacks, we chose to drop states reporting <50 cases in 2004, since this is what the Cooperative Agreements stipulate for the measurement of this objective. For the other objectives, we used the largest number of states that gave a stable target (i.e., the 90th percentile target did not change when we dropped reporting sites with 10 fewer cases). We made another exception in the case of rates in children <5 years of age: many states reported no cases in 2004 and many others reported fewer than 5 cases, so for the determination of the 90th percentile target we excluded states that reported fewer than 5 cases in 2004.
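The sensitivity analysis described above can be sketched as recomputing the 90th-percentile target at each case-count cutoff and checking whether it stays stable. This is a hypothetical helper under assumed inputs (site case counts paired with performance values), not the Workgroup's actual code:

```python
def targets_by_threshold(site_data, thresholds=range(10, 101, 10)):
    """For each cutoff, drop sites reporting fewer cases than the
    cutoff, then take the 90th-percentile performance value among
    the remaining sites. A target is 'stable' if it does not change
    as the cutoff rises.
    site_data: iterable of (cases_reported, performance_value)."""
    targets = {}
    for cutoff in thresholds:
        kept = sorted((v for cases, v in site_data if cases >= cutoff),
                      reverse=True)
        if not kept:
            continue  # cutoff removed every site
        index = round(len(kept) * 0.10)  # count down 10% from the top
        targets[cutoff] = kept[min(index, len(kept) - 1)]
    return targets
```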