Department of Health and Human Services
Centers for Disease Control and Prevention

Acknowledgements

A manual of this nature is labor intensive and draws from many sources. We thank the individuals from the Academy for Educational Development (AED) who assisted us in the development and testing phases of this manual: Stacey Little, PhD; Susan Rogers, PhD; and Richard Sawyer, PhD. We also thank the following people from the Division of STD Prevention for their assistance in reviewing initial drafts of this document: Lydia Blasini-Alcivar, PhD; David Byrum, MPH; Dayne Collins, BS; Janelle Dixon; Norm Fikes; Deymon Fleming, MPH; Kim Seechuk, MPH; and Steve Shapiro, BS.

The Division of STD Prevention extends special thanks to those individuals in the field of STD prevention who contributed their time to provide input on the document's layout and content or to review the relevance of the different chapters to STD programs. These individuals are:
• Gail Bolan, MD (California)
• Susan Bulecsa, MSN (Florida)
• F. Bruce Coles, DO (New York)
• Derek Coppedge, MBA (Kansas)
• Nyla DeArmitt, BS (Missouri)
• Annabeth Elliott, RN (Idaho)
• Alice Gendelman, MPH (California)
• Heather Hauck, MSW (New Hampshire)
• Heidi Jenkins, BS (Connecticut)
• Robert Johnson, GCPH (Virginia)
• Kristine Judd, BSPH (Michigan)
• Laurie Kops, BS (Montana)
• Leandro Mena, MD (Mississippi)
• Pete Moore, MPH (North Carolina)
• David Morgan, BS (South Dakota)
• Glen Olthoff (Baltimore City)
• Rupa Sharma, MSc (Arkansas)
• Mark Stenger, MA (Washington)
• Melanie Taylor, MD (Arizona)
• Drew Thomits, BS (New Hampshire)
• Craig Thompson, BS (Mississippi)
• Wendy Wolf, MPA (San Francisco)

To better meet the needs of our funded partners, we pilot-tested all the sections of this document for usability and practicality in real-life STD programs. Four funded STD programs (California, Idaho, Michigan, and North Carolina) went through an intensive, step-by-step evaluation skill-building process and successfully applied the content of this document to a programmatic area of their choice. We thank the following individuals for their valuable input and hard work:
• Michael McElroy, MPH (California)
• Susan Watson, MPH (California)
• Annabeth Elliott, RN (Idaho)
• Kristine Judd, BSPH (Michigan)
• Bruce Nowak, BS (Michigan)
• Monica Brown, MPH (North Carolina)
• Lumbe Davis, MPH (North Carolina)
• Kawanna Glenn, BS (North Carolina)
• Monica Melvin, BS (North Carolina)
• Chantha Prak, BS (North Carolina)

Practical Use of Program Evaluation among Sexually Transmitted Disease (STD) Programs
January 2007

Yamir Salabarría-Peña, Dr.P.H., M.P.H.E.
Betty S. Apt, BA
Cathleen M. Walsh, Dr.P.H.

Department of Health and Human Services
Centers for Disease Control and Prevention
National Center for HIV, STD, and TB Prevention
Division of STD Prevention

SUGGESTED CITATION: Salabarría-Peña, Y., Apt, B.S., Walsh, C.M. Practical Use of Program Evaluation among Sexually Transmitted Disease (STD) Programs. Atlanta (GA): Centers for Disease Control and Prevention; 2007.

Should you need help or more information regarding this manual or any specific tool, please contact CDC/DSTDP program evaluation staff at (404) 639-8276 or eval@cdc.gov.
CONTENTS

INTRODUCTION TO THE MANUAL
OVERVIEW OF PROGRAM EVALUATION
STEP 1: ENGAGE STAKEHOLDERS
1.1 Determine how and to what extent to involve stakeholders in program evaluation
STEP 2: DESCRIBE THE PROGRAM
2.1 Understand your program focus and priority areas
2.2 Develop your program goals and measurable (SMART) objectives
2.3 Identify the elements of your program and get familiar with logic models
2.4 Develop logic models to link program activities with outcomes
STEP 3: FOCUS THE EVALUATION
3.1 Tailor the evaluation to your program and stakeholders' needs
3.2 Determine resources and personnel available for your evaluation
3.3 Develop and prioritize evaluation questions
STEP 4: GATHER CREDIBLE EVIDENCE
4.1 Choose appropriate and reliable indicators to answer your evaluation questions
4.2 Determine the data sources and methods to measure indicators
4.3 Establish a clear procedure to collect evaluation information
4.4 Complete an evaluation plan based on program description and evaluation design
STEP 5: JUSTIFY CONCLUSIONS
5.1 Analyze the evaluation data
5.2 Determine what the evaluation findings "say" about your program
STEP 6: ENSURE USE OF EVALUATION FINDINGS AND SHARE LESSONS LEARNED
6.1 Share with stakeholders the results and lessons learned from the evaluation
6.2 Use evaluation findings to modify, strengthen, and improve your program
GLOSSARY AND APPENDICES
Glossary of Key Terms
Appendix A: Evaluation Designs
Appendix B: Syphilis Case Illustrating the Application of the Manual
Appendix C: Sample Logic Models of STD Programs
Appendix D: Sample Evaluation Plans of STD Programs

Introduction to the Manual

Welcome to this manual, which illustrates the "how to" of planning and implementing evaluation activities in STD-related programs in a user-friendly manner.

WHY THE NEED FOR THIS MANUAL?

In 2002, the National Coalition of STD Directors (NCSD) conducted a needs assessment of STD program infrastructure among its membership (representatives from STD programs in all 50 states, plus 7 cities and 8 US territories). More than three quarters of the respondents reported a need for guidance from CDC on conducting STD program evaluation and for more evaluation resources. In response to these expressed needs, CDC's Division of STD Prevention (DSTDP) supported the development of this manual, which illustrates the theory and practice of program evaluation tailored to STD programs, as well as workshops on the manual's content. This process started in 2004 and was completed in 2006.

WHAT IS THE PURPOSE OF THIS MANUAL?

This manual provides guidance on how to design and implement a program evaluation via a step-by-step approach. Its goals are to (1) build the evaluation capacity of STD programs so that they can internally monitor their program activities, understand what is working or not working, and improve their efforts; (2) establish a common evaluation language across project areas; (3) show that program evaluation is an activity we can all do and that yields important benefits; and (4) integrate evaluation into routine program practice. The manual is meant to be used by those in STD-related programs who are responsible for conducting evaluation activities, whether they have extensive, little, or no evaluation experience, even in the face of limited resources for program evaluation. This manual presents program evaluation as an integral part of the program planning process.
It provides a way of thinking about evaluation from the outset, rather than as an afterthought or something that can wait until the end of a program. In addition, the manual focuses on an evaluation process that is participatory and responsive to the program's needs, stakeholders, and resources. Since evaluating an entire STD program may not be feasible due to resource constraints, we emphasize the evaluation of program activities or components and focus on evaluations that can measure program contribution rather than attribution (causality). This manual touches on basic principles of program evaluation, but the information included is not exhaustive. For more detailed information on various evaluation topics, please make use of the references provided in the manual and in Tool 3.2, which includes links to other evaluation resources.

HOW IS THIS MANUAL ORGANIZED?

This manual is based on CDC's Framework for Program Evaluation in Public Health and on evaluation material developed by other divisions and offices at CDC (the Division of Adolescent and School Health, the Division of Tuberculosis Elimination, the Office of Strategy and Innovation, and the Office on Smoking and Health), all of which is likewise based on CDC's framework. Our framework divides evaluation into six progressive steps.
• Step 1: Engage stakeholders.
• Step 2: Describe the program.
• Step 3: Focus the evaluation design.
• Step 4: Gather credible evidence.
• Step 5: Justify conclusions.
• Step 6: Ensure use and share lessons learned.

This manual includes an overall introduction to program evaluation and is then organized by main step. Each step is divided into sub-steps (e.g., 1.1, 2.1, etc.), and for each sub-step there is a corresponding tool showing the "how to" of each evaluation activity. Each evaluation tool includes concepts and systematic, step-by-step guidance on how to conduct different evaluation activities as they apply to STD programs. The following illustrates how the steps/tools and concepts are organized.

Step and Tool: 1. Engage Stakeholders
1.1 Determine how and to what extent to involve stakeholders in program evaluation
Concepts:
• Who are stakeholders?
• Why is stakeholder involvement important?
• How to identify/involve/retain stakeholders in evaluation?

Step and Tool: 2. Describe the Program
2.1 Understand your program focus and priority areas
2.2 Develop your program goals and measurable (SMART) objectives
2.3 Identify the elements of your program and become familiar with logic models
2.4 Develop logic models to link program activities with outcomes
Concepts:
• Why is it important to determine needs?
• What are the benefits of and when to conduct a needs assessment?
• How to carry out a needs assessment.
• What are goals and objectives?
• How to write an objective the SMART way.
• What are process and outcome objectives?
• What are the elements of a program?
• What is a logic model and its benefits?
• How to use goals and objectives to develop a logic model.
• What are the challenges and rewards of logic models?
• What types of logic models can be constructed?
• How to construct a logic model.

Step and Tool: 3. Focus the Evaluation Design
3.1 Tailor the evaluation to your program and stakeholders' needs
3.2 Determine resources and personnel available for your evaluation
3.3 Develop and prioritize evaluation questions
Concepts:
• What are process and outcome evaluations?
• How to choose the focus of an evaluation.
• When are the results of an evaluation needed?
• Who will conduct the evaluation? (skills of a qualified evaluator)
• What financial resources are available? (technical assistance services at DSTDP, how to recruit an evaluator, how to develop a budget)
• What is the purpose of evaluation questions, and how to develop and prioritize them?

Step and Tool: 4. Gather Credible Evidence
4.1 Choose appropriate and reliable indicators to answer your evaluation questions
4.2 Determine the data sources and methods to measure indicators
4.3 Establish a clear procedure to collect evaluation information
4.4 Complete an evaluation plan based on program description and evaluation design
Concepts:
• What is an indicator?
• What is the link between indicators, performance measures, and program evaluation?
• How to develop appropriate indicators.
• What data sources and data collection methods can be used?
• What is the relationship between indicators, data sources, and data collection methods?
• What factors to consider when developing data collection procedures (developing instruments, data collectors' skills, training data collectors).
• What is the purpose of an evaluation plan and its components?
• How to construct an evaluation plan.

Step and Tool: 5. Justify Conclusions
5.1 Analyze the evaluation data
5.2 Determine what the evaluation findings "say" about your program
Concepts:
• How do you analyze evaluation data? (quantitative/qualitative)
• What is the purpose of data interpretation, and what should be considered when doing so?

Step and Tool: 6. Ensure Use and Share Lessons Learned
6.1 Share the results and lessons learned from the evaluation with stakeholders
6.2 Use evaluation findings to modify, strengthen, and improve your program
Concepts:
• What are some factors to consider when developing recommendations?
• How to share evaluation results.
• How to promote the use of evaluation results.
• How to use evaluation findings for decision making.

Each tool has the following components:
• Introduction: briefly describes what the tool is about and its relationship to the previous sub-step/tool, to provide continuity and progression.
• Diagram: connects the concepts to be discussed in the tool with concepts addressed previously.
• Learning objectives: specify what can be accomplished by going through the tool.
• Content: divides the information into subheadings (i.e., guiding questions) and is loaded with examples specific to STD programs to simplify concepts and connect with the reader; it focuses on the "how to" of the topic.
• Checklist: lists key aspects of the tool.
• Conclusion: summarizes the tool content.
• Next steps: connects the tool with the next one.
• Key terms: defines concepts addressed in the tool.
• Acronyms: spells out acronyms used in the tool.
• References: provides resources that were used to craft the tool and can help readers locate more detailed information.
• Exercise: provides hands-on experience in applying the tool content to STD programs (e.g., worksheets, exercises/questions with answer key, flow charts, matrices, etc.). Most of the tools include exercises or case studies.
• Case study: describes how the concepts addressed in the tool have been or can be applied to a particular scenario.
• Appendix: adds to the information already discussed in the tool.
• Answer key: provides possible responses to the exercise included.

In addition to the introduction and the steps/tools, we have included a glossary and appendices at the end of the manual.
In the appendices you will find a case study on a syphilis project illustrating how one step/tool builds on another, information on evaluation designs, and sample logic models and evaluation plans developed by the STD programs that pilot-tested this manual.

WHAT'S NEXT?

The following section introduces program evaluation and its uses, explains the difference between program evaluation and other program elements, and describes CDC's Framework for Program Evaluation.

Overview of Program Evaluation

You will see program evaluation used in multiple contexts. In this manual we use the following definition, which conveys the essence of program evaluation:

"…the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program's effectiveness, and/or inform decisions about future development."
—Michael Quinn Patton, Utilization-Focused Evaluation, 1997

We conduct program evaluation according to a systematic set of protocols in order to accurately determine the value or merit of a program and to make decisions accordingly, including decisions related to improving the program. Understanding a program as "a set of planned activities directed toward bringing about specified change(s) in an identifiable audience," we conclude that STD-related activities (e.g., a counseling session with an individual with positive results for syphilis), interventions (e.g., a gonorrhea media campaign in a defined population), and components (e.g., partner services) all deserve program evaluation.

Program evaluation is influenced by program constraints (e.g., staffing, budget, skills). Therefore, evaluation should be practical and feasible and must be conducted within the confines of resources, time, and political context. Moreover, it should serve a useful purpose, be conducted in an ethical manner, and produce accurate findings. Evaluation findings should be used to make decisions about program implementation and to improve program effectiveness.

Program evaluation examines the "how" and the "why," and it can be conducted to address the evaluation issues outlined below. It is intended to document program progress, demonstrate accountability to funders and policymakers, or identify ways to make the program better.
• Implementation: whether an intervention is implemented as planned or a target population is reached; if not, why not?
• Effectiveness: whether a program is achieving its goals and objectives or its intended outcomes; if not, why not?
• Efficiency: whether program activities are being conducted using resources (e.g., budget, time) appropriately; if not, why not?
• Causal attribution: whether progress on goals and objectives is due to the STD program's activities, as opposed to other things going on at the same time; if not, why not?
• Cost-benefit analysis: identifies all relevant costs and benefits of a program (intervention, activity), usually expressed in dollar terms.
• Cost-effectiveness analysis: determines the cost of meeting a single goal or objective and can be used to identify the least costly alternative for meeting it.

You have just been presented with issues that you can address through program evaluation. However, evaluation has limits. For instance, program evaluation cannot determine the success of a "program" that has no goals, measurable objectives, or program theory to evaluate against.
Program evaluation needs to be based on questions about a program that stakeholders are interested in answering. In the absence of such questions, merely collecting data does not equal evaluation. Last but not least, if the results of an evaluation are not used for programmatic decision making, it is not program evaluation.

WHY EVALUATE STD PROGRAMS?

Program evaluation is vital in allowing STD prevention programs to determine their accomplishments, how resources have been invested, and what should be improved, and to take action accordingly. The demands of policymakers and other stakeholders for accountability and results-oriented programs have increased, which means that strong program evaluation is more essential now than ever. Ongoing evaluation is critical to developing and sustaining high-quality and appropriately targeted STD prevention efforts. Program evaluation offers the opportunity to review, analyze, and modify STD prevention efforts as necessary. It also helps improve program performance and measure progress toward the achievement of goals and objectives.

WHAT IS THE DIFFERENCE BETWEEN PROGRAM EVALUATION AND OTHER PROGRAM ELEMENTS?

Program evaluation is sometimes confused with other essential program components because it is one of several ways to answer questions about a program. These questions can also be answered using other approaches that might be characterized as academic research, performance measurement, planning, and surveillance.

Academic Research

While there is some overlap between program evaluation and academic research, there are differences between the two that are important to distinguish in the context of evaluating STD-related activities or components. The main differences have to do with their purpose and how their findings are used. The main purposes of program evaluation are to identify a program's achievements in meeting its goals and objectives, to identify program operations, components, or activities that need to be improved, and to solve practical problems. The main purposes of academic research are to create new knowledge that may be generalized to programs or populations throughout a field, and to test hypotheses. An evaluation might determine whether a specific outreach activity is reaching the right target population, whereas an academic research study might aim to find out how people's attitudes influence their predisposition or likelihood to seek STD testing. An evaluation leads to programmatic decisions (e.g., modifying the activity, allocating more resources, implementing the activity in different counties); research, in contrast, leads to examining the relationship between attitudes and testing and to translating the findings into practical implications.

Performance Measurement

Performance measurement is the ongoing monitoring of a set of indicators (i.e., performance measures) of program progress for the purpose of accountability to interested parties such as funders, legislators, and the general public. Performance measures answer the question "How are we doing?" Because of its ongoing nature, performance measurement can serve as an early warning to program managers of changes in program performance. Program evaluation can be used to answer "Why is the program doing poorly or well?" and to identify adjustments that may improve performance.

Planning

Program evaluation is part of the program planning continuum.
Planning asks, "What are we doing, and what should we do to achieve our goals and objectives?" Program evaluation provides information on progress toward goals and objectives, identifies what is working well and/or poorly, and recommends what can be changed to help the program better meet its intended goals and objectives.

Surveillance

Surveillance is the continuous monitoring of, or routine data collection on, various factors (e.g., behaviors, attitudes, deaths) over a regular time interval and answers the question "What's happening?" Program evaluation provides a more in-depth examination of why something is happening, looking at the implementation and contextual aspects of the program. Data gathered by surveillance systems are invaluable for performance measurement and program evaluation, especially for monitoring long-term and population-based outcomes. However, some surveillance systems have limited flexibility when it comes to adding questions that a particular evaluation needs to answer.

WHAT IS CDC'S FRAMEWORK FOR PROGRAM EVALUATION?

CDC's Framework for Program Evaluation in Public Health

In 1999, CDC published the "Framework for Program Evaluation in Public Health" (CDC, 1999). The Framework, as depicted below, has two components: (1) six steps and (2) four sets of standards for conducting quality evaluations of public health programs. As previously mentioned, this framework is the foundation of this manual. The following is a graphic representation of the framework.

CDC Framework for Program Evaluation (figure)
Source: Centers for Disease Control and Prevention. Framework for Program Evaluation in Public Health. MMWR 1999;48(No. RR-11).

The underlying logic of the Evaluation Framework is that a well-done evaluation does not merely gather accurate evidence and draw valid conclusions, but produces results that are used to make a difference. The following is a brief overview of the framework.

Program Evaluation Steps
• Step 1: Engage Stakeholders deals with engaging individuals and organizations (those with an interest in the program) in the evaluation process.
• Step 2: Describe the Program involves describing the program or activity to be evaluated by defining the problem, formulating program goals and objectives, and developing a logic model showing how the program is supposed to work.
• Step 3: Focus the Evaluation Design determines the type of evaluation to implement, identifies the resources needed to implement the evaluation, and develops evaluation questions.
• Step 4: Gather Credible Evidence identifies how to answer the evaluation questions and how to develop an evaluation plan that includes, among other elements, indicators, data sources and methods, and a timeline.
• Step 5: Justify Conclusions is about collecting, analyzing, and interpreting the evaluation data.
• Step 6: Ensure Use and Share Lessons Learned identifies effective methods for sharing and using evaluation results.

Evaluation Standards
The evaluation standards guide the evaluation and ensure that it is well designed and meets the needs of programs. These standards are integrated throughout this manual as well.
• Utility refers to designing an evaluation that meets the needs of the stakeholders.
• Feasibility ensures that the evaluation is practical and realistic.
• Propriety concerns the ethics of the evaluation, such as the protection of human rights.
• Accuracy ensures that the evaluation produces valid and reliable findings.
REFERENCES

Patton, M.Q. (1997). Utilization-Focused Evaluation: The New Century Text. 3rd ed. Thousand Oaks, CA: Sage Publications.

Smith, M.F. (1989). Evaluability Assessment: A Practical Approach. Norwell, MA: Kluwer.

U.S. Department of Health and Human Services, Centers for Disease Control and Prevention (2001). Introduction to Program Evaluation for Comprehensive Tobacco Control Programs. Office on Smoking and Health. http://www.cdc.gov/tobacco/evaluation_manual/Evaluation.pdf

U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, Office of the Director, Office of Strategy and Innovation (2005). Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide. Atlanta, GA: Centers for Disease Control and Prevention. http://www.cdc.gov/eval/evalguide.pdf

U.S. Department of Health and Human Services, Centers for Disease Control and Prevention. Framework for Program Evaluation in Public Health. MMWR 1999;48(No. RR-11). ftp://ftp.cdc.gov/pub/Publications/mmwr/rr/rr4811.pdf

U.S. Government Accountability Office (2005). Performance Measurement and Evaluation. GAO-05-739SP. http://www.gao.gov/new.items/d05739sp.pdf