Justify Conclusions

Practical Use of Program Evaluation among Sexually Transmitted Disease (STD) Programs

STEP 1: ENGAGE STAKEHOLDERS
1.1 Determine how and to what extent to involve stakeholders in program evaluation

STEP 2: DESCRIBE THE PROGRAM
2.1 Understand your program focus and priority areas
2.2 Develop your program goals and measurable (SMART) objectives
2.3 Identify the elements of your program and get familiar with logic models
2.4 Develop logic models to link program activities with outcomes

STEP 3: FOCUS THE EVALUATION
3.1 Tailor the evaluation to your program and stakeholders' needs
3.2 Determine resources and personnel available for your evaluation
3.3 Develop and prioritize evaluation questions

STEP 4: GATHER CREDIBLE EVIDENCE
4.1 Choose appropriate and reliable indicators to answer your evaluation questions
4.2 Determine the data sources and methods to measure indicators
4.3 Establish a clear procedure to collect evaluation information
4.4 Complete an evaluation plan based on program description and evaluation design

STEP 5: JUSTIFY CONCLUSIONS
5.1 Analyze the evaluation data
5.2 Determine what the evaluation findings "say" about your program

STEP 6: ENSURE USE OF EVALUATION FINDINGS AND SHARE LESSONS LEARNED
6.1 Share with stakeholders the results and lessons learned from the evaluation
6.2 Use evaluation findings to modify, strengthen, and improve your program

SUGGESTED CITATION: Salabarría-Peña, Y., Apt, B. S., & Walsh, C. M. Practical Use of Program Evaluation among Sexually Transmitted Disease (STD) Programs. Atlanta (GA): Centers for Disease Control and Prevention; 2007.

Justify Conclusions

The conclusions of your evaluation are guided by the evaluation questions, the SMART objectives pertaining to the activity being evaluated, the valid and reliable data sources and methods you have employed, the findings/evidence you have gathered, and the input from your stakeholders. Stakeholders must agree that the evaluation conclusions are justified by the findings before they will use the results with confidence.

Justifying the evaluation conclusions involves analyzing and synthesizing the evaluation findings so that you gain a better understanding of the program activity or component you are evaluating. It also involves determining what the findings mean for your STD program and providing recommendations for its improvement. Step 5 will help you analyze your evaluation data and determine what the evaluation results "say" about your program. Step 5 is divided into two evaluation tools:

• Tool 5.1 provides guidance on how to manage your data, create a data analysis plan, and analyze and present both quantitative and qualitative data.
• Tool 5.2 provides information on how to interpret what your analyzed data "say" about your program in the context of your evaluation objectives as well as your stakeholders' interests.

TOOL 5.1: ANALYZE THE EVALUATION DATA

INTRODUCTION

In Step 4, you learned how to choose appropriate and reliable indicators for your evaluation questions, determine data sources and methods to measure your indicators, and establish clear procedures to collect the data. Tool 5.1 describes how to manage and analyze the data you collect so that you can answer your evaluation questions and respond to your indicators. Please note that this tool touches on general concepts of data analysis; additional help may be needed to conduct qualitative and/or quantitative data analysis.
This tool provides you with references for further information on different aspects of data analysis. The flowchart below illustrates where data analysis fits in with your evaluation activities.

LEARNING OBJECTIVE

After completing this tool, you will be able to:
• Decide how to analyze and synthesize your evaluation data.

HOW DO YOU ANALYZE EVALUATION DATA?

Data analysis is the process of organizing, classifying, tabulating, and examining the information you collected, and then presenting the results so they can be easily understood by your stakeholders. These tasks fall under three broad steps: (1) developing a data analysis plan, (2) managing your data, and (3) conducting the data analysis.

1. Modify your evaluation plan by adding a data analysis component.

In Tool 4.4, you learned how to develop an evaluation plan. The analysis of data should be included in this evaluation plan.[1] Specifically, you should develop your analysis plan prior to collecting your data so you can anticipate the skills, resources, and materials needed for data analysis, which will help you maintain a systematic approach. Table 1 is an example of how you might modify your evaluation plan to include the analysis of your data. The following describes the different components of the analysis piece.

1a. Determine the analysis per indicator. To best answer your evaluation questions, determine what data analysis you will perform for each indicator (see the evaluation plan you developed using Tool 4.4).

1b. Determine if quantitative data analysis needs to be performed. Quantitative data are numbers, which you may have collected via surveys, logs, attendance records, or other methods. Additional quantitative data could include information pertaining to disease rates (e.g., the rate of chlamydia among adolescent females in juvenile detention centers [JDCs]). In quantitative analysis, you may summarize the data by computing totals, percentages, and averages (means); a brief illustration appears below. Making inferences from your data, when appropriate, might be the next step.[2]

1c. Determine if qualitative data analysis needs to be performed. Qualitative data are in the form of text rather than numbers. You may have gathered these data via focus groups, in-depth interviews, observations, or other open-ended inquiry methods. Qualitative data are often in the form of lengthy narratives or field notes. To analyze these data, you need to review the information and identify common themes. For example, in analyzing responses from a focus group of medical providers, patients, and STD program staff, you might look for common themes among the list of perceived barriers to implementing patient-delivered partner therapy. Once themes are identified, they can be categorized in different ways, such as grouping supports and barriers according to the data sources (e.g., patients, medical providers, program staff).[3]

1d. For each analysis procedure, determine when it needs to be completed and by whom.

[1] Please refer to Tool 4.4 for a comprehensive discussion of the creation of an evaluation plan.
[2] For more information on quantitative data analysis techniques, please refer to Creswell, J. (2003). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage.
[3] For more information on qualitative data analysis techniques, please refer to Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.
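The short sketch below (Python with the pandas library; not part of the original toolkit) illustrates the kind of descriptive summary described in step 1b, that is, totals, percentages, and means. The survey variables and values are hypothetical and used only for illustration.

```python
# A minimal sketch of the descriptive summaries described in step 1b:
# totals, percentages, and averages (means). Data are hypothetical.
import pandas as pd

# Hypothetical post-training survey of JDC medical staff
survey = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5, 6],
    "sex": [1, 2, 2, 2, 1, 2],                # 1 = female, 2 = male (per the code book)
    "attended_training": [1, 1, 0, 1, 1, 1],  # 1 = yes, 0 = no
    "knowledge_score": [7, 9, 5, 8, 10, 6],   # 0-10 scale
})

total_respondents = len(survey)                          # total
pct_trained = survey["attended_training"].mean() * 100   # percentage
mean_score = survey["knowledge_score"].mean()            # average (mean)

print(f"Respondents: {total_respondents}")
print(f"Attended training: {pct_trained:.0f}%")
print(f"Mean knowledge score: {mean_score:.1f}")

# Percentage breakdown of a coded categorical variable
print(survey["sex"].value_counts(normalize=True).mul(100).round(0))
```

If you need to go beyond simple description and make inferences (see footnote 2), a statistical package such as SPSS, SAS, or STATA, listed in the software resources at the end of this tool, may be more appropriate.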
2. Manage your data.

Developing a system to manage your evaluation data ensures uniform data handling. If you are using secondary data, such as the Behavioral Risk Factor Surveillance System or the Youth Risk Behavior Surveillance System, the data have already been checked, entered into a database, and tabulated by those conducting the survey. If you are collecting data with your own instrument, consider the following sub-steps to manage these data.

2a. Determine data management responsibilities. You will need to designate one or more individuals to manage the data. For example, a person(s) should be responsible for managing the data after they have been collected. This may include coding, storage, retrieval, and distribution of data for data entry and analysis. You should also ensure that this person(s) has obtained the appropriate training to conduct these tasks.

2b. Transfer/transcribe your data. For quantitative data, if the data collection forms are complex, you may want to transfer the data to new forms, like an answer sheet, that have been designed to make it easier for the data to be entered into a database. For qualitative data, if field notes were taken by hand or information was tape recorded, it is important to transcribe that information using word processing software.

2c. Code your data, if needed. Apply the codes (code scheme) you started developing when designing the instrument(s) (see Tool 3.4) and modify them, if necessary (i.e., add new codes or modify existing codes). For example, if you have collected quantitative information with a survey, you will probably want to code the information so that it can be entered into a database to be tabulated and analyzed. If one of the variables is "sex," you might code this as "1" for female and "2" for male. In qualitative analysis, you will apply the code scheme to text segments that match the theme(s) associated with the code. For example, if you are analyzing data from in-depth interviews with medical staff at JDCs to identify barriers and facilitators of implementing chlamydia (Ct) screening, you might identify the following themes: priority of Ct screening, training, and JDC infrastructure. As you examine the data, sections of the interview that refer to those themes will be coded as such.

2d. Modify the code book. Finalize the code book and apply it consistently to increase the accuracy of coding your data and of conducting data entry, which will facilitate the analysis of these data. (A brief sketch of applying a code book appears below.)
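As a companion to sub-steps 2c and 2d, the sketch below (Python with pandas; a hypothetical example, not part of the toolkit) shows one way a simple quantitative code book can be written down and applied so that coded values and their labels stay consistent between data entry and reporting. The variable names and codes are illustrative only.

```python
# A minimal sketch of applying a quantitative code book (sub-steps 2c-2d):
# numeric codes are entered into the database, and the code book documents
# what each code means. Variables and values are hypothetical.
import pandas as pd

# Code book: variable -> {code: label}
CODE_BOOK = {
    "sex": {1: "female", 2: "male"},
    "ct_screening_offered": {0: "no", 1: "yes"},
}

records = pd.DataFrame({
    "sex": [1, 2, 2, 1],
    "ct_screening_offered": [1, 1, 0, 1],
})

# Apply the code book to produce labeled values for tables and reports,
# while keeping the coded values for analysis.
labeled = records.replace(CODE_BOOK)
print(labeled)
```

For qualitative data, the code book instead lists each theme code with a definition and rules for when to apply it; a qualitative coding sketch appears later in this tool.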
2e. Consider computer software. Consider which computer software may be needed to enter and analyze your data (see the Reference section of this tool for examples of quantitative and qualitative computer software).

2f. Review your data for completeness and accuracy. If you have quantitative data, this process is often called "cleaning," and it is conducted to assure that the data are free from incorrect or missing entries. Consider the following steps when cleaning data (a brief sketch illustrating these checks appears after sub-step 2h):

• Verify that the data file has the correct and expected number of participants. For example, if data were collected from 100 participants and the total number of records entered is more than 100, check to see if any record is repeated. If total records are fewer than 100, check to see if any participant's data are missing.
• Check for any erroneous codes and inconsistent responses in the data file. For example, if you are assigning sex codes with the options of 1=female and 2=male, and a code of 4 has been entered, you obviously have an error. To clean this record, recheck the original survey and reenter the reported value (1 or 2).

Consider the following steps when dealing with qualitative data:

• Assess whether text is legible and recordings are audible. Make sure that those who will be analyzing the data will be able to read the field notes or transcriptions easily. If tapes from interviews are being transcribed, make sure that they are audible. If these conditions are not met, you will need to consult the interviewer or interviewee to retrieve the correct information.
• Assess the quality of open-ended interviews/focus groups/observations. When using qualitative methods, you can start this process at the beginning of data collection. For instance, you can have an experienced interviewer review transcriptions so that her/his input can be used to improve subsequent interviews/focus groups/observations during the same evaluation. Make sure that the responses you are getting answer the evaluation questions; if not, revise the instrument for subsequent data collection. Decisions can be made accordingly to improve the quality of the data (e.g., implementing probing techniques, paraphrasing, adding/deleting questions from the interview guide) and/or to improve the process for a future evaluation.
• Conduct peer review. Have a peer (another staff member) review the information you have collected for accuracy.

2g. Monitor data entry. To ensure accuracy, you will need to monitor data entry. For example, once the data are entered using the established codes or values, you may want to have a second person compare the data that have been entered with the criteria outlined in the codebook.

2h. Review your data management system. Before implementing a system to manage the data, review it (or have a colleague review it) in order to identify potential problems.
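The following sketch (Python with pandas; hypothetical data and column names) illustrates the cleaning checks described in sub-step 2f: confirming the expected number of records, flagging possible duplicates, and flagging codes that fall outside the values allowed by the code book.

```python
# A minimal sketch of the cleaning checks in sub-step 2f. Data are hypothetical.
import pandas as pd

EXPECTED_PARTICIPANTS = 5
VALID_CODES = {"sex": {1, 2}, "attended_training": {0, 1}}

data = pd.DataFrame({
    "respondent_id": [1, 2, 2, 3, 4],        # id 2 entered twice
    "sex": [1, 2, 2, 4, 1],                  # 4 is not a valid sex code
    "attended_training": [1, 1, 1, 0, 1],
})

# 1. Correct and expected number of records?
if len(data) != EXPECTED_PARTICIPANTS:
    print(f"Expected {EXPECTED_PARTICIPANTS} records, found {len(data)}")

# 2. Duplicate participants?
dupes = data[data["respondent_id"].duplicated(keep=False)]
if not dupes.empty:
    print("Possible duplicate records:\n", dupes)

# 3. Erroneous codes? Flag them for rechecking against the original survey.
for column, allowed in VALID_CODES.items():
    bad = data[~data[column].isin(allowed)]
    if not bad.empty:
        print(f"Invalid codes in '{column}':\n", bad)
```

Records flagged by checks like these are compared against the original data collection forms and corrected before analysis, and a second person can repeat the checks as part of monitoring data entry (sub-step 2g).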
3. Analyze your data.

Analyzing quantitative and qualitative data involves the following procedures.

Quantitative Data Analysis
• If needed, identify quantitative/qualitative software for the analysis.[4]
• Tabulate the data to provide information for each indicator.
• Analyze and, if appropriate, stratify the data by demographic variables of interest (e.g., participants' race, sex, age, income level, geographic location).
• Make comparisons, if appropriate, to describe the groups participating in the evaluation.
• Look at your analyzed data over time. See how your results change by tracking your indicators. If results are not changing in the desired direction, this can alert you to take a closer look at your program and work with your stakeholders to improve your approach. It may be that the disease has shifted to another population, or that the intervention is appropriate but the level of effort is insufficient to produce change.

Qualitative Data Analysis
Whether you decide to analyze your qualitative data manually or with an analysis software program,[5] you will need to categorize responses by themes and assign a code to segments of text. First, read the transcripts of the qualitative data you plan to analyze (e.g., interviews, focus groups) and begin by identifying similar ideas and/or patterns. Print the transcripts and mark the relevant responses. If analyzing the data manually, sort the material by cutting up the paper copies, affixing the pieces to index cards (remember to identify the source of each piece), and placing them in identified categories. You may also use qualitative analysis software to develop a code scheme and apply it to "chunks" of text corresponding to the coding themes (a brief coding sketch appears after Table 1 below). Organizing your material in this way will help you identify patterns and determine what the data mean.

If your evaluation uses multiple methods, you will need to analyze the data, identify important findings, and then combine (synthesize) the information to reach a more comprehensive understanding.

[4] Examples of quantitative analysis software packages are listed in the reference section of this document.
[5] Examples of qualitative analysis software packages are listed in the reference section of this document.

Table 1: Evaluation Plan Matrix[6]

The matrix has two parts: DATA COLLECTION (Evaluation Question, Indicator, Data Source(s), Method, Collection Time, Responsible Person(s)) and ANALYSIS (Analysis Procedure, Analysis Timeline, Responsible Person). The guidance for each column is as follows.

Evaluation Question:
• Develop and prioritize each evaluation question with input from stakeholders.
• Link each evaluation question with the goals and objectives of your program and the purpose of your evaluation.
• Verify that the questions reflect the key elements of your program logic model that your evaluation will address, and that they can be addressed using the resources available at hand (i.e., budget, staff, and staff time).
• Verify that the findings generated from the questions will be useful.

Indicator:
• For each evaluation question, create an indicator to reflect achievement (e.g., number of staff trained, percent of health clinics implementing a policy, etc.).
• If needed, you may use more than one indicator per question.

Data Source(s):
• For each evaluation question, indicate what source(s) has the information to answer the question (i.e., individuals, observations, or documents).
• A single data source can provide information on more than one evaluation question.

Method:
• For each data source, identify and list the best method to gather data (e.g., use of a telephone rather than a mail survey for a certain target group).
• More than one data collection method may be used to gather information from one data source.

Collection Time:
• For each data collection method, fill in the data collection schedule (e.g., baseline survey data will be collected by mm/dd/yy; follow-up survey data will be collected by mm/dd/yy).

Responsible Person(s):
• For each data collection method, identify the roles and responsibilities of ALL individuals involved. This will allow you to estimate the workload of each individual, maintain a suitable staffing level, and develop a reasonable timeline for data collection.

Analysis Procedure:
• Decide how to analyze the data per indicator.
• If you are analyzing quantitative data, you might want to consider generating frequencies, averages, and percentages.
• If you are analyzing qualitative data, code the text by themes.

Analysis Timeline:
• For each analysis (both quantitative and qualitative) that is conducted, fill in the analysis timeline schedule (e.g., frequencies of survey data will be completed by mm/dd/yy; identification of themes from focus groups will be completed by mm/dd/yy).

Responsible Person:
• For each analysis procedure, identify the roles and responsibilities of ALL individuals involved. This will allow you to estimate the workload of each individual, maintain a suitable staffing level, and develop a reasonable timeline for the analysis of your data.

[6] The first half of the evaluation plan matrix (the Data Collection columns) is discussed in Tool 4.4.
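To make the idea of applying a code scheme to "chunks" of text more concrete, the sketch below (Python; invented interview excerpts and deliberately simplified keyword matching) tags each excerpt with theme codes similar to those used in the JDC example. In practice this judgment is made by trained coders, often supported by software such as EZ-Text, AnSWR, or Atlas.ti listed in this tool's software resources.

```python
# A minimal sketch of applying a qualitative code scheme to text segments.
# The excerpts are invented, and keyword matching is a simplification of
# what a human coder (or qualitative software) actually does.
CODE_SCHEME = {
    "TRAIN": ["training", "workshop"],                 # theme: training
    "INFRA": ["understaffed", "exam room", "equipped"],# theme: JDC infrastructure
    "PRIOR": ["priority", "time to screen"],           # theme: priority of Ct screening
}

excerpts = [
    "After the workshop I felt more confident following the Ct guidelines.",
    "Our medical unit is understaffed, so there is little time to screen everyone.",
    "New staff have not gone through the training yet.",
]

for excerpt in excerpts:
    text = excerpt.lower()
    codes = [code for code, keywords in CODE_SCHEME.items()
             if any(keyword in text for keyword in keywords)]
    print(codes or ["UNCODED"], "-", excerpt)
```

Coded segments can then be grouped by theme, and by data source when useful, as in Table 2 of the case example that follows.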
SUMMARY CHECKLIST: Analyze the Evaluation Data

• Develop a data analysis plan as part of your evaluation plan, specifying the analysis to be performed for each indicator, the analysis timeline, and the responsible person(s).
• Manage your data: assign data management responsibilities, transfer/transcribe the data, apply and update the code book, clean the data, and monitor data entry.
• Analyze your quantitative and qualitative data and synthesize the findings.

CONCLUSION AND NEXT STEPS

This tool outlined the process for managing and analyzing the data you collect so that you can answer your evaluation questions and determine the progress toward your outcomes and outputs (i.e., indicators). Specifically, this tool provided tips on data management, on how to develop a data analysis plan, and on quantitative and qualitative data analyses. In Tool 5.2, you will learn how to interpret evaluation findings and measure the success of the program component or activity being evaluated.

ACRONYMS USED IN THIS TOOL

Ct – Chlamydia
JDC – Juvenile detention center
STD – Sexually transmitted disease

KEY TERMS

Code book: A document detailing instructions on how the data for a specific evaluation are coded. It describes each code so that codes are applied to the data in a standardized way.

Coding: In quantitative analysis, the process of arranging the data so that the computer can "read" the code and perform an analysis (e.g., if one of the variables is "sex," you might code this as 1 for "female" and 2 for "male"). In qualitative analysis, coding is used to reduce the data by organizing the text (data) into categories/themes. The codes are applied to text segments that match the theme(s) associated with the code.

Data cleaning: The process of reviewing the data and preparing them for analysis by correcting erroneous data entry.

Data management: The control of data handling operations, such as acquisition, analysis, translation, coding, storage, retrieval, and distribution of data.

Evaluation plan: A document that describes what an evaluation consists of (i.e., purpose/uses/users of the evaluation, program goals and objectives related to the evaluation, logic model, evaluation questions and design, data collection sources and methods, and dissemination plan) and the procedures that will guide the implementation of the evaluation activities to be undertaken by your program.

Indicator: A specific, observable, and measurable accomplishment or change that shows whether progress has been made toward achieving a specific output or outcome.

CASE EXAMPLE

The following example shows how an STD program evaluation team analyzed qualitative data from several in-depth interviews and focus groups with juvenile detention center (JDC) medical staff members in order to identify barriers and facilitators to implementing Chlamydia (Ct) screening of female adolescents in JDCs. The example also includes how the team analyzed JDC monthly reports to determine changes in the number of JDCs conducting Ct screening of female adolescents.

Analyzing Qualitative Data

Evaluation team members conducted the following steps in analyzing their qualitative data.

1. The team reviewed open-ended responses to interviews and the standardized observation forms developed for this task to document the environment in which screening was being conducted. They verified that the text was legible and that audiotapes (in the case of interviews) were audible.

2. Using the evaluation question being addressed ("What were the barriers and facilitators of implementing Ct screening in JDCs?"), the team organized the text into two major themes (i.e., barriers and facilitators).

3. Next, they reviewed the text within these two themes and identified categories of information (training, JDC infrastructure). See Table 2 for a sample of qualitative data analysis.
Table 2: Main themes from focus groups and in-depth interviews of JDC medical staff

Theme: Training
• Facilitators identified (JDC staff responses): After the training on Ct screening, clinical staff reported feeling more confident in implementing the Ct clinical guidelines. The training delivered in the professional development workshop was a good starting point for discussions about how to implement protocols for Ct screening.
• Barriers identified (JDC staff responses): Due to high turnover, there are new clinical staff who have not gone through the training on Ct screening in JDCs. The workshop facilitators did not adapt the Ct screening protocols to the JDC medical unit environment.

Theme: JDC Infrastructure
• Facilitators identified (JDC staff responses): JDC medical units are fully equipped with exam rooms and other materials to carry out Ct screening.
• Barriers identified (JDC staff responses): Due to understaffing of the medical unit in several JDCs, staff do not have adequate time to screen all eligible admittees.

Analyzing Quantitative Data

Evaluation team members analyzed quantitative data from JDC reports to answer two evaluation questions: (1) As a result of the Ct screening initiative, did more JDCs screen adolescent females for Ct? (2) As a result of the Ct screening initiative, were more adolescent females in JDCs screened for Ct? For each indicator, the team computed percentages, as shown in Table 3.

Table 3: Data Results for Outcome Evaluation Questions

Outcome evaluation question: As a result of the Ct screening initiative, did more JDCs screen adolescent females for Ct?
• Indicator: Percentage of JDCs that screened adolescent females for Ct 6 months before the professional development workshop. Result: Before project, 2/5 (40%) of JDCs screened adolescent females for Ct.
• Indicator: Percentage of JDCs that screened adolescent females for Ct 6 months after the professional development workshop. Result: After project, 5/5 (100%) of JDCs screened adolescent females for Ct.

Outcome evaluation question: As a result of the Ct screening initiative, were more adolescent females in JDCs screened for Ct?
• Indicator: Percentage of female adolescents screened for Ct 6 months before the screening initiative. Result: Before project, 200/2500 (8%) adolescent females were screened.
• Indicator: Percentage of female adolescents screened for Ct, by JDC facility, 6 months before the screening initiative. Result: JDC1: 3/500; JDC2: 50/500; JDC3: 100/500; JDC4: 27/500; JDC5: 20/500.
• Indicator: Percentage of female adolescents screened for Ct 6 months after the screening initiative. Result: After project, 1000/2500 (40%) adolescent females were screened.
• Indicator: Percentage of female adolescents screened for Ct, by JDC facility, 6 months after the screening initiative. Result: JDC1: 200/500; JDC2: 250/500; JDC3: 350/500; JDC4: 75/500; JDC5: 185/500.
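For readers who want to retrace the arithmetic, the short sketch below (Python; not part of the original toolkit) computes the facility-level and overall percentages directly from the counts reported in Table 3.

```python
# A minimal sketch of the percentage computations behind Table 3, using the
# counts reported there (number screened / number eligible per period).
jdc_counts = {
    # facility: (screened_before, screened_after, eligible_per_period)
    "JDC1": (3, 200, 500),
    "JDC2": (50, 250, 500),
    "JDC3": (100, 350, 500),
    "JDC4": (27, 75, 500),
    "JDC5": (20, 185, 500),
}

for jdc, (before, after, eligible) in jdc_counts.items():
    print(f"{jdc}: before {before / eligible:.1%}, after {after / eligible:.1%}")

# Overall indicator values as reported in Table 3
print(f"Overall before: {200 / 2500:.0%}")    # 8%
print(f"Overall after:  {1000 / 2500:.0%}")   # 40%
print(f"JDCs screening before: {2 / 5:.0%}, after: {5 / 5:.0%}")
```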
REFERENCES

Centers for Disease Control and Prevention. (n.d.). Oral health infrastructure development tools, Step 2B: Logic models. Retrieved October 17, 2004, from http://www.cdc.gov/oralhealth/library/pdf/logic_models.pdf

MacDonald, G., Starr, G., Schooley, M., Yee, S. L., Klimowski, K., & Turner, K. (2001, November). Introduction to program evaluation for comprehensive tobacco control programs. Atlanta, GA: Centers for Disease Control and Prevention. Retrieved October 17, 2004, from http://www.cdc.gov/tobacco/evaluation_manual/Evaluation.pdf

W.K. Kellogg Foundation. (1998, January). W.K. Kellogg Foundation evaluation handbook. Retrieved October 17, 2004, from http://www.wkkf.org/Programming/ResourceOverview.aspx?CID=281&ID=770

QUALITATIVE DATA ENTRY AND ANALYSIS SOFTWARE RESOURCES

• To organize themes and codes for qualitative data analysis, you can use Microsoft Word (http://office.microsoft.com/en-us/FX010857991033.aspx).
• In addition, several software packages can help you with qualitative data analysis. Examples include:
– AnSWR (http://www.cdc.gov/hiv/software/answr.htm), free CDC software that helps to coordinate and conduct large-scale, team-based analysis projects that integrate qualitative and quantitative techniques;
– EZ-Text (http://www.cdc.gov/hiv/software/ez-text.htm), another free CDC software package that helps to create, manage, and analyze semi-structured qualitative databases;
– Commercially available qualitative data analysis packages such as NUDIST (http://www.ucd.ie/computing/support/application/nudist.html) or Atlas.ti (http://www.atlasti.com/).

QUANTITATIVE DATA ENTRY AND ANALYSIS SOFTWARE RESOURCES

• Basic quantitative data entry and analysis may be done using Microsoft Excel or Access (http://office.microsoft.com/en-us/FX010858001033.aspx; http://office.microsoft.com/en-us/FX010857911033.aspx).
• Quantitative software designed to run inferential statistical analysis includes:
– SPSS (http://www.spss.com/vertical_markets/education/index.htm?source=homepage&hpzone=verts)
– SAS (http://www.sas.com/software/)
– STATA (http://www.stata.com/)

TOOL 5.2: DETERMINE WHAT THE EVALUATION FINDINGS "SAY" ABOUT YOUR PROGRAM

INTRODUCTION

At this point in your evaluation planning process, your program staff and stakeholders will have analyzed the evaluation data. In this tool, you will learn how to interpret the findings generated by your data analyses. The flowchart below provides a description of where data interpretation fits within your evaluation activities.

LEARNING OBJECTIVES

After completing this tool, you should be able to:
• Examine evaluation results to determine what they say about the program.
• Use evaluation results to measure program success.

WHAT IS THE PURPOSE OF DATA INTERPRETATION?

You need to interpret the evaluation results to arrive at answers to the evaluation questions you formulated. Sound data interpretation should help you identify the factors that facilitate and inhibit the achievement of program objectives. For example, reporting that you fell short of training XX% of the STD clinical staff should be supplemented with information on why the performance target was not achieved. It may be that all scheduled trainings could not be implemented, or that scheduled participants could not attend the training for various reasons. It is recommended that you meet with your stakeholders to interpret the results because they may have a different perspective on what was observed and an explanation for it.

WHAT DO YOU NEED TO CONSIDER WHEN INTERPRETING YOUR EVALUATION RESULTS?

Consider the following steps when interpreting your evaluation results.

1. Organize your evaluation findings.
Match your data with the purpose of the evaluation, the evaluation questions, and the corresponding indicators you developed when planning the evaluation. This is shown in the example below; a brief sketch of one way to keep these elements linked follows the example.

Background: Over the past year, Project Area has reported a low number of sexual contacts initiated for gonorrhea cases (i.e., <1 sexual contact initiated per patient interviewed) among adolescents. Program management decided to intensify efforts to increase the number of contacts identified and found. An evaluation was conducted to determine the reasons for the low number of sexual contacts initiated and to make changes accordingly.

Purpose of the evaluation: Determine why the project area is reporting a low number of sexual contacts initiated in order to take corrective action.

Sample evaluation questions (when you develop your evaluation questions, you will probably have more than these three):
• Are the 3 disease intervention specialists (DIS) following standard protocols for eliciting sexual contacts from gonorrhea-infected individuals?
• Are all contacts being recorded appropriately?
• What other factors contribute to the low number of initiated sexual contacts?

Indicators:
• Number of DIS who follow the elicitation protocols all the time.
• Barriers identified by DIS pertaining to the elicitation process.
• Barriers identified by DIS in following the protocol for recording sexual contacts.
• Barriers and facilitators identified by DIS and their supervisor(s) in eliciting sexual contacts of gonorrhea cases.

Findings:
• The evaluator observed that the DIS frequently did not follow elicitation protocols completely when conducting interviews.
• According to the DIS supervisors who were interviewed: (1) they had to spend an increasing amount of time on administrative paperwork and did not have sufficient time for observing and mentoring DIS; (2) there was high staff turnover; and (3) although DIS staff had interviewing experience, all were relatively new (4 months) to the STD program and this job.
• DIS staff were also interviewed regarding their comfort level in eliciting information from cases, training opportunities, barriers in identifying sexual contacts, and support from their supervisor and program management. In many instances, it took several visits to identify contacts, and because of each DIS's caseload, it took longer than expected to follow up with each identified case. The number of cases assigned to each DIS was more than each could complete in a timely manner. In addition, many of the gonorrhea cases were adolescents, and the interviews were considered particularly challenging for the three DIS. DIS indicated that they would like to learn about ways to gather more information about adolescents' sexual contacts and sexual venues, and how to discuss risk prevention and treatment with this population.

Interpretation:
• DIS are relatively new to the job and need more training on the implementation of elicitation protocols and on interviewing skills, particularly when working with adolescents. They also need to receive more mentoring and guidance from their supervisors, who are often bogged down with administrative duties.
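One simple way to keep each evaluation question linked to its indicators, findings, and interpretation is to record them in a single structure, as in the sketch below (Python; the field names are arbitrary and the entries paraphrase the DIS example above).

```python
# A minimal sketch of organizing findings so that each evaluation question
# stays linked to its indicators, findings, and interpretation.
from dataclasses import dataclass, field

@dataclass
class QuestionSummary:
    evaluation_question: str
    indicators: list = field(default_factory=list)
    findings: list = field(default_factory=list)
    interpretation: str = ""

summary = QuestionSummary(
    evaluation_question=("Are the 3 DIS following standard protocols for "
                         "eliciting sexual contacts from gonorrhea-infected individuals?"),
    indicators=["Number of DIS who follow the elicitation protocols all the time"],
    findings=["DIS frequently did not follow elicitation protocols completely"],
    interpretation=("DIS are new to the job and need more training on elicitation "
                    "protocols and interviewing skills, plus closer mentoring."),
)

print(summary.evaluation_question)
for finding in summary.findings:
    print(" -", finding)
print("Interpretation:", summary.interpretation)
```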
2. Consider issues of context when interpreting the results.

Data often do not explain why the findings are what they are. Information obtained from the evaluation needs to be interpreted in light of larger contextual issues. For example, Project Area evaluation findings indicate that the objective of distributing STD prevention materials to all STD clinics in the state within 6 months of initiating the chlamydia prevention campaign was not achieved. The distribution of material varied across geographic areas. That is, in some areas of the state, all STD clinics received the information; in others, none of the clinics reported receiving it; and in still other areas, material distribution was patchy. A discussion with the STD program staff indicated that the main factor contributing to the variation in material distribution was the difference in program staff's delivery approach. That is, program staff either personally delivered the material to the clinic directors, left the material at the clinic without informing the clinic director, or mailed the material to the clinic directors. Based on this finding, the program staff decided to use a consistent approach to delivering the material to all STD clinics. Program staff were instructed to personally deliver the material to the clinic directors and obtain confirmation of its receipt. If the evaluation had not included interviews with STD program staff, the reasons for not achieving the objective would not have been determined, making it difficult to take appropriate corrective action.

3. Determine the practical significance of what has been learned.

Always ask the question "so what?" when interpreting your evaluation findings. This is important because one of the purposes of conducting program evaluation is to improve programs. Therefore, the evaluation results should be used to modify aspects of the program, strengthen current activities, or change what may not be working. If the program objectives are not met, you need to determine the resulting consequences (e.g., the target population is not being reached) and review your logic model links to help you understand why the activities that were planned and undertaken did not lead to the expected result, or why the activities were not implemented as planned. Likewise, you need to determine the significance of meeting the objectives (e.g., reduction of disease transmission rates).

For example, Project Area has developed and implemented a reactor grid for syphilis, which has been used for the past two years. A comparison of the reactor grid with the characteristics of the actual syphilis cases indicated a mismatch. An evaluation was undertaken to understand the reasons for this mismatch. The evaluation findings indicated that in the past year, a number of early syphilis cases in older white males were identified among clinic volunteers and contacts of other cases. By checking the characteristics of cases against those listed in the reactor grid, the program staff identified that mismatches occurred among white males over 35 years of age. The grid included directions to follow up on white males below 35 years of age with a reactive test, but not those over 35 years. Based on the evaluation findings, the age field on the reactor grid was revised to include males below 55 years of age with a reactive test result. The significance of not changing the reactor grid is that follow-up would not have been done on cases that could lead to further transmission (this represents the "so what").

4. Discuss what is working well and what is not.

Identifying and reporting the strengths and weaknesses of the program provides an opportunity to highlight and strengthen factors that affect its success.
There should be a balance between what is working and what is not, since both can be used to strengthen and improve program activities. Examples of strengths identified through program evaluation include achievement of the program objectives and delivery of program activities as planned.

5. Discuss the limitations of the evaluation.

When interpreting your results, acknowledge the limitations of the evaluation, including the limitations of the evaluation design and the data collection methods. For example, one limitation may be the inability to include some questions due to the need for keeping the data collection instrument brief.

6. Synthesize the findings.

You will have looked at various indicators and interpreted several findings generated as a result of the analyses. The final step in data interpretation is to link all the findings to the evaluation questions and to tell a story. This should succinctly, yet comprehensively, highlight what the findings indicate about the program component/activity evaluated. Clearly articulated findings are the basis for developing recommendations for program improvement.

SUMMARY CHECKLIST: Determine What the Evaluation Findings Say about Your Program

• Organize your evaluation findings.
• Consider issues of context when interpreting the results.
• Determine the practical significance of what has been learned.
• Discuss what is working well and what is not.
• Discuss the limitations of the evaluation.
• Synthesize the evaluation findings.

CONCLUSION AND NEXT STEPS

In Tool 5.2, you learned about issues to consider when interpreting your evaluation findings. In Tool 6.1, you will learn how to formulate recommendations based on the conclusions you reach when interpreting the results. You will also determine strategies for informing audiences about relevant aspects of the evaluation, and you will learn how to organize and present the evaluation findings.

ACRONYMS USED IN THIS TOOL

DIS – Disease intervention specialist
SMART – Specific, Measurable, Achievable, Relevant, and Time-bound
STD – Sexually transmitted disease

KEY TERMS

Data interpretation: The process of determining the meaning or significance of evaluation findings to your program.

Goal: A broad statement related to the purpose of your program that states what your program will accomplish (the desired result).

Purpose of evaluation: General intent of the evaluation (e.g., to fine-tune program operations).

Stakeholders: Individuals or organizations directly or indirectly affected by your STD program and/or the evaluation results (e.g., STD program staff, family planning staff, representatives of target populations).

CASE SCENARIO

The following case scenario serves as an example of how an STD program interpreted its evaluation data using the steps outlined in this tool. The interpretation provided here is by no means complete.

PURPOSE OF EVALUATION: To determine whether standard protocols of serologic testing of pregnant women for syphilis in prenatal clinics of City X are being consistently implemented.

EVALUATION QUESTIONS:
• Are clinicians in prenatal clinics following standard protocols for ordering the appropriate serologic test for syphilis for pregnant women during the first prenatal visit?
• What were the major facilitators and barriers to ordering the serologic tests for syphilis during the first prenatal visit?

INDICATORS:
• Percentage of pregnant women's medical records that include the order and results of syphilis testing at the first prenatal visit.
• Facilitators identified by physicians in ordering the serologic tests for syphilis during pregnancy.
• Barriers identified by physicians in ordering the serologic tests for syphilis during pregnancy.

FINDINGS:
• Based on medical chart review in prenatal clinics, only XX% of pregnant women received syphilis tests at the first prenatal visit, and only XX% of the clinicians ordered serologic tests for pregnant women at the first prenatal visit.
• Based on the medical chart review, program staff noted that medical charts from some clinics did not include a check-off box for ordering the syphilis test. In medical charts from some other clinics, the results were recorded on the last page of multiple chart pages and were not easily accessible.
• Clinic physicians reported that they followed protocols for syphilis testing in pregnant women; however, they admitted there was a potential to overlook ordering the test when other required tests had a check-off box but syphilis testing did not.
• Clinic physicians reported that it was sometimes difficult to locate the results of the syphilis test given the organization of the medical records.

INTERPRET YOUR EVALUATION FINDINGS:

Organize your evaluation findings. See the findings above to see how they match the evaluation purpose, questions, and indicators.

Consider issues of context when interpreting the results. In clinics where the medical record did not have a check-off box, it is possible that the physicians ordered the serologic tests for syphilis but did not record them on the medical chart. Record abstractors would need to check the medical record to verify the presence of the laboratory test slip and the results of the test.

Determine the practical significance of what has been learned. Failure to test pregnant women could result in a failure to identify pregnant women with syphilis, thus resulting in babies with congenital syphilis. The program staff and stakeholders agreed that the information on ordering serologic tests should be captured on all medical charts. All the medical charts should be revised to include this information. Furthermore, the form should be redesigned to capture syphilis test ordering and test results in the same section.

Discuss what is working well and what is not. In clinics where the medical records had a check-off box, clinicians were more likely to order the tests; however, the current format of the medical charts is not working well and should be revised.

Discuss the limitations of the evaluation. It was not possible to interview all key clinical staff at the different clinics due to their time constraints. Therefore, additional barriers and facilitators may have been missed.

Synthesize the evaluation findings. In summary, the results indicated that clinical staff in prenatal clinics were not consistently implementing protocols for serologic testing of pregnant women during their first prenatal visit. Because medical charts at some prenatal clinics do not have a check-off box for ordering syphilis tests, physicians were less likely to document the tests, and the rates of ordering serologic tests were lower than expected. The layout of the medical chart also does not allow the physician to readily view the results of the syphilis test or to quickly determine whether the patient was tested at all.
REFERENCES

Bond, S. L., Boyd, S. E., & Rapp, K. A. (with Raphael, J. B., & Sizemore, B. A.). (1997). Taking stock: A practical guide to evaluating your own programs. Retrieved October 17, 2004, from http://www.horizon-research.com/reports/1997/stock.pdf

Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. MMWR Recommendations and Reports, 48(RR-11). Retrieved October 17, 2004, from http://www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm

Centers for Disease Control and Prevention. (n.d.). Program operations: Guidelines for STD prevention. Retrieved October 17, 2004, from http://www.cdc.gov/std/program/

MacDonald, G., Starr, G., Schooley, M., Yee, S. L., Klimowski, K., & Turner, K. (2001, November). Introduction to program evaluation for comprehensive tobacco control programs. Atlanta, GA: Centers for Disease Control and Prevention. Retrieved October 17, 2004, from http://www.cdc.gov/tobacco/evaluation_manual/Evaluation.pdf

McKenzie, J. F., & Smeltzer, J. L. (1997). Planning, implementing, and evaluating health promotion programs: A primer (2nd ed.). New York: Macmillan.

Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1999). Evaluation: A systematic approach (6th ed.). Thousand Oaks, CA: Sage.

W.K. Kellogg Foundation. (1998, January). W.K. Kellogg Foundation evaluation handbook. Retrieved October 17, 2004, from http://www.wkkf.org/Programming/ResourceOverview.aspx?CID=281&ID=770