Glossary and Appendices

SUGGESTED CITATION: Salabarría-Peña, Y., Apt, B.S., Walsh, C.M. Practical Use of Program Evaluation among Sexually Transmitted Disease (STD) Programs, Atlanta (GA): Centers for Disease Control and Prevention; 2007.

Should you need help or more information regarding this manual or any specific tool, please contact CDC/DSTDP program evaluation staff at (404) 639-8276 or eval@cdc.gov.

Glossary of Key Terms
• Activities: Actual events that take place as part of your program (e.g., developing pamphlets, testing patients).
• Audience: The individuals (such as your stakeholders and other evaluation users) with whom you want to communicate the results of an evaluation.
• Case study: A type of evaluation design used to learn about a program as a whole and in its context.
• Cluster evaluation: A type of evaluation design that looks at how well a set of related projects implemented at different sites achieves its goals and objectives.
• Code book: A document detailing instructions on how the data for a specific evaluation are coded. It describes each code so that codes are applied to the data in a standardized way.
• Coding: In quantitative analysis, the process of arranging the data so that the computer can "read" the code and perform an analysis (e.g., if one of the variables is "sex," you might code this as 1 for "female" and 2 for "male"). In qualitative analysis, coding is used to reduce the data by organizing the text (data) into categories/themes. The codes are applied to text segments that match the theme(s) associated with the code.
• Context: The setting and environmental influences in which your program operates (e.g., laws, regulations, political climate).
• Data cleaning: The process of reviewing the data and preparing it for analysis by correcting erroneous data entry.
• Data collection: The process of administering instruments and gathering responses.
• Data interpretation: The process of determining the meaning or significance of evaluation findings to your program.
• Data management: The control of data handling operations, such as acquisition, analysis, translation, coding, storage, retrieval, and distribution of data.
• Decision makers: Stakeholders in a position to do or decide something about your program.
• Dissemination: The process of communicating the procedures, results, and lessons learned from an evaluation.
• Effectiveness: This relates to outcome evaluation; it refers to the contribution a program makes toward producing changes in the target population/organization.
• Evaluation plan: A document that includes what an evaluation consists of (i.e., purpose/uses/users of the evaluation, program goals and objectives related to the evaluation, logic model, evaluation questions and design, data collection sources and methods, and dissemination plan) and the procedures that will help guide the implementation of evaluation activities to be undertaken by your program.
• Executive summary: A 1-2 page summary of the full evaluation report. It provides a concise description of the evaluation activities, procedures, results, conclusions, and recommendations. Since this information can be extracted from sections of the full report, the summary is written last but presented at the beginning of the report.
• Experimental design: An evaluation design in which individuals, groups, programs or facilities (i.e., clinics) are randomly assigned to an intervention (program) group or a control (non-program) group.
Because of random assignment, you reduce the chances of underlying differences between members of the control and intervention groups, which allows you to attribute change in outcomes to your program's activities.
• Fidelity: When your STD program or intervention is implemented as intended.
• Focus group: A qualitative method used to collect data from a group of people (about 6-11) who meet for 1-2 hours to discuss their insights, ideas, and observations about a particular topic with a trained moderator. Participants are selected because they share certain characteristics (e.g., individuals who have been tested for syphilis, women in detention facilities) relevant to the evaluation.
• Global logic model: A type of logic model that illustrates pictorially how an entire program is supposed to work.
• Goal: A broad statement related to the purpose of your program that states what the program will accomplish (the desired result).
• Implementers: Stakeholders directly involved in undertaking program activities.
• Incidence: New cases of a disease in a specific population within a defined time period.
• Indicator: A specific, observable, and measurable accomplishment or change that shows whether progress has been made toward achieving a specific program output or outcome.
• Individual interview: A data collection method which involves dialogue with individuals who are carefully selected for their personal experience and knowledge of the issues at hand. Since these interviews are conducted individually, they are useful when anonymity is an issue or when asking about sensitive topics, so participants can feel free to express their ideas.
• Inputs: Program resources (e.g., money, staff, materials).
• Intermediate outcomes: Intended effects of your program in the target population/organization that take longer than short-term outcomes to occur (e.g., changes in STD-related policy or in behavior of the target population).
• Logic model: A picture of how a program/component/activity is supposed to work.
• Long-term outcomes: Intended effects of your program in the target population/organization that may take several years to achieve, such as reduced disease transmission and incidence.
• Mixed-method design: A methodological approach where you collect data from more than one source and/or through different methods. The advantages of using mixed methods include increasing the cross-checks on the evaluation findings, examining different facets of the same phenomenon, and increasing stakeholders' confidence in the overall evaluation results. An example of mixed methods is using both a focus group and a survey to understand a target population's reluctance to use condoms.
• Morbidity: Sickness or illness.
• Nested logic model: A type of logic model that depicts a component of a global logic model and describes that component in detail.
• Non-experimental design: An evaluation design in which participant information is gathered either before and after the program intervention or only afterwards. A control or comparison group is not used. Therefore, this design does not allow you to determine whether the program or other factors are responsible for producing a given change.
• Objectives: Measurable statements that describe the manner in which your program goals will be achieved.
• Observation: A data collection method in which you take field notes on the behavior and activities of individuals or describe the environment while observing these in the field.
For example, you might take notes on the behavior of gay men in bath houses as part of your data collection procedures, or take notes on how patients are treated by clinic staff, and use such information to further develop or improve your program.
• Outcome evaluation: A type of evaluation that determines the effects of your program activities in the target population (e.g., changes in knowledge, attitudes, beliefs, skills) or organization. The outcome components of a logic model (the right side) are used to plan an outcome evaluation.
• Outcome evaluation questions: Evaluation questions concerned with your program outcomes. Such questions can address whether the desired outcomes of your program were achieved, and whether your program produced changes in the target population(s)/organization.
• Outcome indicators: These measure whether progress was made toward achieving your short-term, intermediate, or long-term outcomes.
• Outcome objectives: Measurable statements specifying the intended effect of your program in the target population(s)/organization.
• Outcomes: Intended effects or changes in the target population(s)/organization that result from your program.
• Outputs: The direct products of your program activities or services delivered (e.g., pamphlets developed, patients tested).
• Participants: Stakeholders being served or affected by your program.
• Participatory evaluation: An approach which involves stakeholders in all aspects of the evaluation process (i.e., design, question development, data collection, analysis, reporting, and use of results for decision making).
• Partners: Stakeholders who actively support and/or have invested in your program.
• Performance measures: A set of indicators developed by CDC's Division of STD Prevention with input from members of NCSD (National Coalition of STD Directors), state representatives of NCSD member grantees, and seven project areas where the measures were pilot-tested. Each project area receiving CDC funds is required to report on the measures (indicators) that apply to it.
• Population at risk: Groups that have a high probability of developing an STD or a related condition.
• Pre/post design: A non-experimental design where measures (data collection) are taken from the target population(s) before and after the activity/intervention.
• Post-only design: A non-experimental design where measures (data collection) are taken from the target population(s) after the activity/intervention. Since this is a non-experimental design, it does not involve comparison/control groups.
• Prevalence: Number of cases of a disease in a population at a given point in time.
• Primary data: Data directly obtained by your program (e.g., surveillance, number of sex partners of syphilis cases collected through DIS interviews).
• Process evaluation: Also referred to as implementation evaluation, a type of evaluation that determines whether your program and its activities are implemented as intended, and why or why not. Information gathered is used for refining or modifying these activities and related procedures. The inputs, activities, and outputs of a logic model (the left side) are used to plan a process evaluation.
• Process evaluation questions: Evaluation questions concerned with the implementation of your program or a specific program component/activity. You develop process evaluation questions to examine the development and delivery of services and activities of your program, as well as its operations and administrative functions.
• Process indicators: Indicators that measure whether progress is being made toward implementing your program with fidelity. These indicators measure whether your program is functioning as planned, and relate to the outputs in your program logic model.
• Process objectives: Measurable statements describing your program activities and the actions involved in their implementation.
• Purpose of evaluation: General intent of the evaluation (e.g., to fine-tune program operations).
• Qualitative data: Detailed/narrative information that allows an in-depth understanding of a topic/issue/population. An example of qualitative data is the answers representatives of a CBO would provide when asked for their thoughts on how to reach high-risk adolescents.
• Qualitative design: Evaluation designs used to capture the target population's perceptions, opinions, and experiences about your program activities, and/or to better understand a programmatic aspect in more depth by telling how and what happened, and when and to whom.
• Qualitative methods: Data collection methods used to gather narrative data to better understand the experiences of the target population and how a program activity works.
• Quantitative data: Numerical information. An example is data that identify the number of times (e.g., 1, 2, 3, 10) each client has visited your clinic within the last year.
• Quantitative design: A type of evaluation design which relies on examining quantitative data obtained from such instruments as closed-ended surveys. This design option observes and measures information numerically, and employs statistical procedures.
• Quantitative methods: Data collection methods that are used to collect numerical data. An example is the use of a survey that queries respondents about their sexual history using closed-ended questions in which numbers can be assigned to responses (e.g., number of sexual partners, frequency of condom use).
• Quasi-experimental design: A type of evaluation design that makes comparisons between groups (intervention and control), but does not involve random assignment to intervention and control groups. It may be possible to attribute changes to the program if you can document with baseline information that the two groups are similar prior to receiving the program.
• Reliability: The consistency of a measure or question in obtaining very similar or identical results when used repeatedly.
• Risk factor: A factor that increases a person's chances of getting a disease or condition (e.g., having multiple sexual partners, lack of access to health care).
• Secondary data: Information your program can use that has been collected by someone else (e.g., national data). This may include epidemiological data, socio-demographics, health risk behaviors, and health policies.
• Short-term outcomes: Immediate effects of your program in the target population/organization (e.g., changes in knowledge, attitudes, skills, awareness, or beliefs).
• SMART: An acronym describing the criteria used to write objectives that are Specific, Measurable, Achievable, Relevant, and Time-bound.
• Stage of development: The level of maturity of your program, which influences the type of evaluation you conduct (e.g., planning, implementation, and maintenance stages).
• Stakeholders: Individuals or organizations directly or indirectly affected by your program and/or the evaluation results (e.g., STD program staff, family planning staff, representatives of target populations).
• STD-related risk factors: Specific behaviors, attitudes, and/or limited knowledge that put individuals at risk of STDs.
• Surveillance data: Data collected in an ongoing, systematic way regarding an agent/hazard, risk factor, exposure, or health event. Surveillance data are essential for the planning, implementation, and evaluation of public health practice.
• Survey: A method of collecting information that can be self-administered, administered over a telephone, administered using a computer, or administered face-to-face. Surveys generally include closed-ended questions that are asked of individuals in a specific order and provide multiple choice or discrete responses (e.g., "Have you been tested for syphilis in the last 6 months?").
• Users of an evaluation: The specific persons/organizations that will employ the evaluation findings in some way (e.g., STD director, CBO, funder).
• Uses of an evaluation: The specific ways that program staff and other stakeholders will apply what is learned from the evaluation (e.g., change STD clinical practice, inform STD prevention policy).
• Validity: The extent to which a question actually measures what it is supposed to measure. For example, a question that asks how often an individual uses a condom is valid if it accurately measures the actual level of condom use; it is not valid if instead it measures the extent to which an individual realizes that s/he should wear a condom.

List of Appendices
Appendix A: Evaluation Designs
Appendix B: Syphilis Case Illustrating the Application of the Manual
Appendix C: Sample Logic Models of STD Programs
• California DHS/STD Control Branch and California STD/HIV Prevention Training Center
• Idaho Department of Health and Welfare, STD/AIDS Program
• Forsyth County's Syphilis Elimination Project (North Carolina)
• Michigan Department of Community Health, STD Program
Appendix D: Sample Evaluation Plans of STD Programs
• California DHS/STD Control Branch and California STD/HIV Prevention Training Center
• Idaho Department of Health and Welfare, STD/AIDS Program
• Forsyth County's Syphilis Elimination Project (North Carolina)
• Michigan Department of Community Health, STD Program

APPENDIX A: Evaluation Designs

INTRODUCTION
The design of your evaluation influences the types of conclusions you can draw from your findings. You select the evaluation design(s) once you have the final list of evaluation questions, have classified them as either process or outcome, and have determined the resources available for your evaluation activities.

WHAT IS AN EVALUATION DESIGN?
An evaluation design is the selection of procedures used to demonstrate that a program is worthwhile, effective, and efficient. It also allows you to draw specific conclusions, with varying degrees of certainty, as accurately as possible, and to determine the limitations of the evaluation.

WHAT TYPES OF DESIGN CAN YOU USE FOR AN EVALUATION?
The evaluation design should be based on the evaluation questions, stakeholders' needs, and available resources, including time and expertise. Generally, evaluation questions that look at monitoring outcomes or measuring change in the target population as a result of a program/intervention tend to apply non-experimental/observational or quasi-experimental/experimental designs, respectively.
However, if the evaluation question looks at how the program is being implemented (barriers and facilitators of program operations/implementation) or whether your program is reaching the appropriate target population, qualitative methods alone or in combination with quantitative methods will tend to be applied (see Tool 4.2).

Non-experimental/observational designs quantify progress toward achieving your outcomes or change in the target population without using comparison groups (e.g., cross-sectional, longitudinal). You can collect information from program participants before and after (pre/post) an activity or after the activity only (post only). With this design you cannot determine with certainty whether your program or other factors are responsible for producing change, but non-experimental designs can give you an idea of whether your activity(ies) are accomplishing what you intend. They also require less time and money to implement than experimental/quasi-experimental designs. For instance, suppose you designed a health education activity about chlamydia (Ct) for females at a family planning clinic. You are interested in monitoring progress toward achieving one of your program outcomes (i.e., increased knowledge of the target population regarding Ct symptoms, prevention, and where to go for screening/testing). You can consider a non-experimental design in which you administer a questionnaire before and after the education activity. The results can give you an idea of the progress made toward achieving your outcome, and may give you sufficient information to improve your program and persuade stakeholders that the program is making a contribution. However, you will not be able to determine whether the results were due to your activity or to other factors (e.g., participants attended another Ct educational activity somewhere else).

If you want to determine whether the outcomes achieved can be attributed to your program (called "causal attribution"), you can use experimental or quasi-experimental designs. However, given the cost, time, and controls inherent in these designs, they are not always feasible for evaluating STD program activities. Below you will find information on these types of designs.

Experimental designs can produce the strongest evidence of program effectiveness, primarily because groups or individuals (e.g., clinics, groups of patients, individual clients or patients) are randomly assigned to either an intervention (program) group or a control (non-program) group. All groups/individuals have an equal probability of being assigned to the program or non-program group. Randomization increases the likelihood that any changes in your target population(s) can be attributed to the program. Nevertheless, experimental designs are rarely practical for STD and public health programs due to budget, staffing, and time constraints. This design may also raise ethical issues because clients in the control group do not receive the designed intervention, which often includes more beneficial or enhanced services.

Quasi-experimental designs can be used when you choose to evaluate program outcomes but randomization is not possible or is difficult, or you do not have the resources to do an experimental design. As with experimental designs, one group of individuals is chosen to receive the program/activity, and another group of individuals serves as the comparison group and is not engaged in the program activities.
However, you are unable to randomly assign individuals into groups to participate in the evaluation, and the groups you select may not be equivalent in key characteristics, such as demographics. It is, therefore, important that you document how the groups are similar and how they differ on any key factors relevant to your program. Because the groups are not identical, you are unable to attribute any changes in the intervention group solely to your activity. However, the more similar the two groups are, the more confident you can be that your intervention activities are leading to the desired effect. Consider, for instance, a new health education intervention you have developed. You want to evaluate the effectiveness of this intervention in changing attitudes and behaviors about STD prevention and transmission in the target population. Since you do not have enough resources to conduct a full experimental evaluation, you design a quasi-experimental evaluation using two STD clinics. In one you would provide the usual health education activities, and in the other you would conduct the new education and counseling activities. You would collect information from the two clinics before and after the educational activities to determine if there was a change in patients' attitudes and behaviors pertaining to STDs, and if these changes differed between clinics. In this case you will have more certainty that the results were due to your program than with a non-experimental design.

Qualitative methods help examine evaluation issues/questions in depth and rely on open-ended questions to elicit detailed information from a limited number of individuals. Some examples of qualitative methods include in-depth interviews, focus groups, and observation (see Tool 4.2). You can utilize these methods if you are interested in learning how your program operates and why; if you want to capture participants' stories, perceptions, and experiences with your program or specific program activities; if you want to determine whether a program is reaching the appropriate target population; or if you want to learn whether your program activities are being implemented as planned. Qualitative designs/methods can help answer the "how" and "why."

KEY TERMS
Experimental design: An evaluation design in which individuals, groups, programs or facilities (i.e., clinics) are randomly assigned to an intervention (program) group or a control (non-program) group. Because of random assignment, you reduce the chances of underlying differences between members of the control and intervention groups, which allows you to attribute change in outcomes to your program's activities.
Mixed-method design: A methodological approach where you collect data from more than one source and/or through different methods. The advantages of using mixed methods include increasing the cross-checks on the evaluation findings, allowing you to examine different facets of the same phenomenon, and increasing stakeholders' confidence in the overall evaluation results. An example of mixed methods is using both a focus group and a survey to understand a target population's reluctance to use condoms.
Non-experimental/observational design: An outcome evaluation design in which participant information is gathered either before and after the program or only afterwards. A control or comparison group is not used. Therefore, the design does not allow you to determine whether the program or other factors may be responsible for producing a given change.
Qualitative design: A type of evaluation design used to capture the target population's perceptions, opinions, and experiences about your program activities, and/or to better understand a programmatic aspect in more depth by telling how and what happened, and when and to whom.
Quasi-experimental design: A type of evaluation design that approximates an experimental design by making comparisons between intervention and control groups, but does not involve random assignment to those groups. It may be possible to attribute changes to the program if you can document with baseline information that the two groups are similar prior to receiving the program.

APPENDIX B: Syphilis Case Illustrating the Application of the Manual

THE SITUATION
After analyzing syphilis morbidity reports and interview records, STD officials in the city of Chancri-La noticed an increase in the number of syphilis cases among men who reported having sex with men (MSM). From 1999 to 2002, both the number of MSM cases and the percentage of male cases they represented had gone up. In 1999, there was only 1 MSM case, which represented 0.9% of the syphilis cases in males. By 2002, the number of MSM cases had increased to 14, representing 29.2% of male cases.
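The percentages above are simply each year's MSM case count divided by the total number of male syphilis cases reported that year. Below is a minimal sketch of that arithmetic; the total male case counts (roughly 111 in 1999 and 48 in 2002) are not given in the text and are back-calculated here from the reported percentages, so treat them as illustrative assumptions only.

```python
# Illustrative only: MSM share of reported male syphilis cases, by year.
# The male_total figures are assumptions back-calculated from the reported
# shares (1 case = 0.9% in 1999; 14 cases = 29.2% in 2002), not source data.
cases = {
    1999: {"msm": 1, "male_total": 111},
    2002: {"msm": 14, "male_total": 48},
}

for year in sorted(cases):
    c = cases[year]
    share = 100 * c["msm"] / c["male_total"]
    print(f"{year}: {c['msm']} of {c['male_total']} male syphilis cases "
          f"were among MSM ({share:.1f}%)")
```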
Further analysis revealed that the cases were not concentrated in one geographic part of the city, based on the males' residences. However, through interviews conducted by the Disease Intervention Specialists (DIS), the STD officials learned that most of the males socialized in the same area.

ACTIONS TAKEN
A DIS was already screening sporadically at a gay bar. To address this emerging problem, STD officials initiated meetings with six community-based organizations (CBOs) that work with the MSM community. Together, they designed a plan of action to implement jointly. One of the activities implemented was syphilis screening in different venues (i.e., bathhouses, gay bars, CBOs, a mobile unit, and a gay parade). The STD director and program staff were interested in determining which of these screening approaches was most successful in reaching the target population. The following illustrates the steps involved in designing and implementing this evaluation.

Step 1: Engaging Stakeholders in the Evaluation
1.) Who were the stakeholders in this scenario?
• Implementers: STD staff, CBO staff
• Decision makers: HD management and STD director
• Participants: Representatives of the target population (MSM)
• Partners: Businesses (i.e., bathhouses, gay bars), parade organizers
• State Laboratory
2.) How was stakeholders' input obtained for the evaluation?
The STD program staff organized a meeting to brief other stakeholders on the screening activities implemented and the importance of evaluating the initiative. The meeting was also used to:
• obtain stakeholder input,
• determine stakeholder needs, interests, and concerns about the evaluation,
• plan stakeholder involvement in the evaluation and what they hoped to learn, and
• plan methods of keeping stakeholders informed during the evaluation.
3.) How was stakeholders' involvement retained throughout the evaluation?
Stakeholders' roles and responsibilities were discussed and agreed upon. All stakeholders reviewed documents pertaining to the evaluation (e.g., evaluation plan, instruments, analysis, and report), and decisions were made by consensus. The STD staff agreed to send all stakeholders a monthly email summarizing the progress of the evaluation as well as STD-related information affecting MSM. In addition, stakeholders participated in a monthly meeting through the end of the evaluation.

Step 2: Describing the Program
Considering the importance of mutual understanding, and because the evaluation involved individuals who might not be familiar with the STD program and the screening activity, the STD program staff shared some background information with the stakeholders at one of the monthly meetings. The project area had been involved in needs assessment activities when developing the Comprehensive STD Prevention System (CSPS) grant application, and staff shared information on the syphilis outbreak among MSM as well as behavioral data. They also reiterated that the purpose of the entire STD program was to address the STD needs of the project area, with emphasis on MSM. The STD director also provided the following information about the screening activity to be evaluated.
1.) What was the goal of the screening activity?
• Reduce syphilis in at-risk MSM living in Chancri-La.
2.) What were the screening-related objectives?
• By December 2002, the STD program staff will implement syphilis screening in 4 venues frequented by MSM.
• By December 2003, the number of at-risk MSM screened for syphilis will increase from X to Y.
3.) What were the resources?
• HD staff
• CBO staff
• HD $
• CBO $
• Access to four venues
• Screening equipment and supplies
• Mobile van
• Laboratory services
• Condoms
4.) What activities were being conducted with those resources?
• Training of CBO staff to provide information on syphilis screening
• Monthly screenings in 3 venues (bathhouses, gay bars, and through the mobile van)
• Screening at the Gay Pride parade
• Distribution of condoms
• Requests for assistance from local businesses frequented by MSM (e.g., permission to park the mobile van in their parking lots)
Stakeholders wanted to better understand how these screening components were going to fit with other STD program activities and lead the way to the results they were expecting. They decided to develop a logic model of the screening activity (see the logic model below).

Step 3: Focusing the Evaluation Design
Once stakeholders understood, via the logic model, the connections between the screening activity components and corresponding outputs and outcomes, they focused the evaluation by determining the uses and users of evaluation results, the questions they wanted the evaluation to answer, and the evaluation design(s) to be applied. The following shows the decision-making process.
1.) What is the purpose of the evaluation?
Stakeholders agreed that they wanted to gain insight on the screening activities implemented and determine which venue(s) was most successful in reaching the target population (i.e., at-risk MSM).
2.) Who will use the evaluation results?
All of the aforementioned stakeholders, most particularly the STD program staff.

Logic Model: Syphilis Outbreak in Chancri-La
Situation: Outbreak of syphilis in the MSM community. Screening initiated in 4 venues.
INPUTS
• Staff
• Funding
• Screening supplies
• Condoms
• Laboratory
• Appropriate medication
• Mobile van
• Educational materials
ACTIVITIES
• Train CBO staff on syphilis screening
• Meet with bathhouse and gay bar management
• Conduct screenings in: bathhouse, 2 gay bars, mobile van, Gay Pride parade
• Treat syphilis cases
• Condom distribution
• Distribution of educational materials
OUTPUTS
• Meetings conducted with 2 CBOs and gay bar management
• Training delivered to CBO staff on syphilis screening and treatment
• Syphilis screening conducted on a monthly basis in bathhouses, 2 gay bars, and the mobile van (various locations)
• Syphilis screening at the annual Gay Pride parade
• MSM screened and infected individuals treated
• Condoms and educational materials distributed
OUTCOMES
Short-term:
• Increased use of syphilis screening among at-risk MSM
• Increased awareness of syphilis among the MSM community and CBO staff
Intermediate:
• Increased access of MSM to syphilis and STD prevention and control services
• Increased condom use among at-risk MSM
Long-term:
• Reduced risk behaviors
• Reduced incidence of syphilis and other STDs

3.) How will the evaluation results be used?
It was determined that the results of the evaluation would be used to reduce/expand the screening activity locations.
4.) What questions do you want the evaluation to answer?
Stakeholders submitted all possible questions they wanted the evaluation to answer, which included:
• Was the screening activity implemented as planned in the different venues?
– What are the barriers and facilitators in carrying out syphilis screening in the different venues?
• Which venue(s) is (are) most effective in reaching and screening at-risk MSM?
– Which venue is most acceptable for syphilis screening among at-risk MSM?
– How many MSM were screened, by venue? What was the number of new positives found, by venue?
If not as expected, why?
– Where should screenings be conducted, and when?
• Where should condoms be distributed?
– Were the condoms distributed to the establishments where cases are found? (Right number and to the right places.)
– Were these the appropriate places to distribute condoms?
• Was the number of cases reduced to the degree planned?
• Did awareness about the syphilis outbreak increase among at-risk MSM and CBO staff?
• Did awareness of prevention measures among at-risk MSM and CBO staff increase?
5.) Since results of the evaluation needed to be submitted within 3 months, and the STD program and CBOs did not have the resources to answer all these evaluation questions, stakeholders decided to focus on the following group of questions, by their level of importance. The questions were also classified as either process or outcome.
• Was the screening activity implemented as planned in the different venues? (process)
– What are the barriers and facilitators in carrying out syphilis screening in the different venues? (process)
• Which venue is most effective in reaching and screening at-risk MSM?
– Which venue is most acceptable for syphilis screening among at-risk MSM? (process)
– How many MSM were screened? What was the number of new positives found? If not as expected, why? (process)
– Where should screenings be conducted, and when? (process)
6.) Which evaluation design is most appropriate to guide data collection for the evaluation questions given the available resources (budget, time, staffing)?
Since the purpose of the evaluation was to make programmatic decisions about the screening venues, as opposed to 1) determining the effects of the screening activity on the target population or 2) determining whether any effects were due to the screening activity, experimental and quasi-experimental designs were out of the question. So stakeholders, along with a professional evaluator from the health department (HD), selected non-experimental and qualitative designs to guide the data collection process pertaining to the evaluation questions.

EVALUATION QUESTIONS, DESIGN, AND RATIONALE
Question: Was the screening activity implemented as planned in the different venues?
Design: Qualitative design
Rationale: Used to record (observe) screening activities as they occurred in the four venues and to determine if they were implemented with fidelity.

Question: What are the barriers and facilitators in carrying out syphilis screening in the different venues?
Design: Qualitative design
Rationale: Used to obtain an in-depth understanding of the perceived factors that either hindered or facilitated the implementation of syphilis screening in the different venues, among implementers and business owners.

Question: Which venue(s) is (are) most effective in reaching and screening at-risk MSM?
(This question depends on the next three questions to be answered.)

Question: Which venue is most acceptable for syphilis screening among at-risk MSM? How many MSM were screened? What was the number of new positives found? If not as expected, why?
Design: Qualitative design
Rationale: Used to obtain the opinions of a sample of individuals at the screening venues regarding the factors that motivated them to accept being screened in that venue, their thoughts on the other venues, and other venues that still need to be reached. Also used to count how many MSM were actually screened per venue and how many of these were active cases of syphilis.

Question: Where should screening be conducted, and when?
Design: Non-experimental post-only design
Rationale: Used after the screening takes place to determine where, and at which times/days of the week, the highest number of at-risk MSM were screened.

Step 4: Gathering Credible Evidence
Since all the evaluation questions addressed the process of the screening activity, stakeholders reviewed the logic model to identify the corresponding outputs. Then they selected the indicators to measure progress of the syphilis screening activity in the different venues, where/from whom data would be obtained for each indicator, and the corresponding data collection method(s). The following reflects the decisions made. To help maintain the confidentiality of respondents, it was agreed that (1) data collectors would strip all identifiers from the data gathered (observation logs, interviews, focus groups), and (2) the data would be secured in the HD evaluator's office. Stakeholders organized all the decisions made up to that point and developed an evaluation plan consisting of a narrative component (stakeholders, rationale, purpose, goal/objectives to be addressed in the evaluation, logic model, users/uses of the evaluation, dissemination approach, timeline, and budget) and a matrix (evaluation question, design, indicators, data sources/methods, person responsible, and schedule). Then the evaluators (HD/CBO) and STD staff drafted all evaluation instruments and protocols, gave these to other stakeholders for their input, and incorporated changes. Instruments were also pilot-tested.

EVALUATION PLAN MATRIX
Q1. Was the screening activity implemented as planned in the different venues?
• Process indicator: Number of implementers who followed the screening procedures with 100% consistency in the four venues.
– Data source: Observations (implementers' performance during screening). Data collection method: Observation (log). Person responsible: Evaluator from HD. Schedule: Collected for each implementer on three occasions during the evaluation; final by dd/mm/yy. Data analysis (see Step 5): Quantitative (descriptive).
• Process indicator: Type of changes made to the screening activity in the four venues from the time it started.
– Data source: Individuals (implementers and STD director). Data collection method: Interview (individual/open-ended). Person responsible: Evaluator from HD. Schedule: Collected by dd/mm/yy. Data analysis: Review interviews, identify common themes, and group them by data source.

Q2. What are the barriers and facilitators in carrying out syphilis screening in the different venues?
• Process indicators: Barriers and facilitators identified by implementers, business owners, and decision makers regarding the implementation of the screening activity; type of challenges regarding the implementation of syphilis screening at the different venues reported by implementers.
– Data source: Individuals (implementers, business owners, decision makers). Data collection method: Interview (focus groups). Person responsible: Evaluator from one of the CBOs. Schedule: Collected by dd/mm/yy. Data analysis: Review transcriptions, identify common themes, group them by data source, and identify any patterns across and within sources.

Q3. Which venue is most acceptable for syphilis screening among at-risk MSM?
• Process indicator: Factors that motivated MSM to accept screening in a venue and their opinion on the other three venues.
– Data source: Individuals (sample of MSM as they are screened at the different venues). Data collection method: Interview (individual/open-ended). Person responsible: Evaluator from one of the CBOs. Data analysis: Review transcriptions, identify common themes, and identify any patterns across respondents.
• Process indicator: Type of recommendations provided by MSM about other venues that still need to be reached.

Q4. How many MSM were screened? What was the number of new positives found? If not as expected, why?
• Process indicators: Number of monthly syphilis screenings among MSM at bathhouses, gay bars, and the mobile van; number of syphilis screenings conducted among MSM at the Gay Pride parade; number of screening tests that turn positive.
– Data source: Documents (implementers' logs, lab records). Data collection method: Document review. Persons responsible: DIS; evaluator from HD. Schedule: Collected by dd/mm/yy of each month of the evaluation; collected within 4 days of the Gay Pride parade; collected within 7 days of the parade and monthly for the other venues. Data analysis: The number of screenings will be compared across venues and with the expected numbers set at the beginning of the activity.

Q5. Where should screening be conducted, and when?
• Process indicator: Venue(s) yielding the highest number of tests and new positives.
– Data source: Documents (implementers' logs, lab records). Data collection method: Document review. Schedule: Collected by dd/mm/yy. Data analysis: Will use findings from Q2, Q3, and Q4.

Step 5: Justifying Conclusions
While data collection was taking place, stakeholders met and determined how the data from the indicators were to be analyzed. The evaluation plan was revised to include the data analysis process (as presented in the last column of the matrix above), and the schedule and person responsible for conducting the analyses. The following illustrates the main findings of the evaluation, organized by evaluation question and corresponding indicators.

Evaluation Question: Was the screening activity implemented as planned in the different venues?
Indicator: Number of implementers who followed the screening procedures with 100% consistency in the four venues.
Findings:
• Observations of all 7 staff screening individuals revealed that most of them (i.e., 5) followed the screening procedure all the time in the four venues. It was also found that the 2 staff not following the procedures were relatively new, not only to STD work but also to the screening activity and protocols. Due to time constraints of the STD field supervisors, the training they had received had not included practice sessions.
Indicator: Type of changes made to the screening activity in the four venues from the time it started.
Findings:
• Interviews with implementers and the STD director indicated that in the past year, all the monthly screenings were held at the bathhouse, but screenings were held only for the first 6 months at one of the bars (because it closed), and only three times at the second bar. Monthly screenings were held every month in the mobile van, but not in the locations they had hoped for. Screening was held all day at the Gay Pride parade.
• Mobile van locations had to change twice because of complaints from neighborhood residents. Two locations that had been chosen originally had no parking available for the van and were removed from the list.

Evaluation Question: What are the barriers and facilitators in carrying out syphilis screening in the different venues?
Indicator: Barriers and facilitators identified by implementers, business owners, and decision makers regarding the implementation of the screening activity.
Findings:
• Screenings needed to be held at night, and it was hard to get staff to work those hours.
• STD program staff needed commercial driving licenses to drive the mobile van; only one staff person had that license.
• When interviewed, the bar managers expressed a fear of revenue loss when patrons were away from their barstools or tables to get tested. They also feared poor bar attendance if the screening events were advertised, since this might keep some patrons away. One bar closed halfway through the year. And even though the bars had agreed to participate, "something" always seemed to come up on the night a screening was scheduled, so it had to be cancelled.
• In general, MSM claimed they were more interested in being tested for HIV than for syphilis, because they considered HIV status more important, and they did not believe syphilis was present in their community.
• There was insufficient time to create attractive materials for the Gay Pride parade to encourage MSM to be tested for syphilis.
• Facilitators included: (1) each facility having a private room that could be used for screening, (2) having a contact from one of the CBOs work with the organizers of the Gay Pride parade to allow advertising and testing for syphilis, and (3) the bathhouse manager encouraging participation in the screening and advertising when the screenings would be held.
Indicator: Type of challenges regarding the implementation of syphilis screening at the different venues reported by implementers.
Findings:
• Gay bar owners feared that their clients would associate their locales with infections or consider them "a dirty place," and that they would lose clients as a result.
• The mission of the gay bars was socialization; introducing screening for a sexually transmitted disease was not compatible with that mission.
• Lack of knowledge and experience of half of the screening staff with the MSM community.
• Getting permission to draw blood at a public gathering (Gay Pride).
• Neighborhood complaints about the noise that the van produced resulted in much staff time being spent responding to complaints and relocating the van.

Evaluation Question: Which venue is most acceptable for syphilis screening among at-risk MSM?
Indicator: Factors that motivated MSM to accept screening in a venue and their opinion on the other three venues.
Findings:
• Results of interviews with MSM indicated more willingness to be screened for syphilis at the bathhouse than at the gay bars. Since there is more sexual activity going on in the bathhouse than in the bars, respondents said they feel at greater risk for syphilis and other STDs there.
• Previous syphilis infection, or knowing someone who had syphilis, was another motivator.
• Ease of access and quickness of both the screening test and test results.
• Gay Pride testing was good for visibility; however, most MSM surveyed there declined testing if it involved waiting 30 minutes or more.
• It was important to have a consistent schedule for the mobile van so that clients could locate the van easily to obtain results.
Indicator: Type of recommendations provided by MSM about other venues that still need to be reached.
Findings:
• Interviewees suggested arranging screening activities with those who hold "circuit parties."
• Another suggestion was to include an ad in the local gay newspaper and on gay websites about the syphilis outbreak and where to be screened/treated.

Evaluation Question: How many MSM were screened? What was the number of new positives found? If not as expected, why?
Indicator: Number of monthly syphilis screenings among MSM at bathhouses, gay bars, and the mobile van.
Findings:
• Bathhouses: 250 men approached; 150 screened
• Gay bars: 500 men approached; 150 screened
• Mobile van: 1,000 men approached; 300 screened
Indicator: Number of syphilis screenings conducted among MSM at the Gay Pride parade.
Findings:
• Gay Pride: 200 men approached; 30 screened
Indicator: Number of screening tests that turned positive.
Findings:
• Bathhouses: 5 positive
• Gay bars: 2 positive
• Mobile van: 1 positive
• Gay Pride: 0 positive

Evaluation Question: Where should screenings be conducted, and when?
Indicator: Venue(s) yielding the highest number of tests and new positives.
Findings:
• Highest number of at-risk MSM tested: Mobile van
• Highest percentage of active syphilis cases: Bathhouses

INTERPRETATION OF FINDINGS
Stakeholders received these findings and met to interpret them. It was concluded that the implementation of the screening activity was facilitated by:
• Having most of the screening staff follow the screening protocols.
• Having private rooms available to conduct the screening at each venue.
• Partnering with a proactive bathhouse manager (who agreed to advertise screening).
• High self-perceived risk for syphilis among bathhouse clients.
• Previous experience with syphilis among MSM.
• Ease of access and quickness of both the screening test and test results.
• Being visible at the Gay Pride parade.
• Increasing access of at-risk MSM to syphilis screening via a mobile van.
There were also factors that hindered screening implementation, such as:
• Pre-planning issues
– The need for more training on the implementation of screening protocols for new staff.
– Van locations with no parking available.
– Limited number of staff with a commercial driving license to drive the van.
– Competing demands among screening staff, making it difficult to work after hours.
– Neighborhood complaints about the noise produced by the van.
– Lack of attractive advertising materials.
• Business limitations
– A gay bar being closed.
– Gay bar managers' fear of having their business perceived as "dirty" if STD testing took place there.
– Conflict between the aim of the bars (socialization) and a distracting public health activity.
– Gay bar managers having no time to advertise screening.
• Target population's low perceived risk for syphilis and lack of awareness about the outbreak among gay bar clients, and having to wait more than 30 minutes to be screened at the parade.

RECOMMENDATIONS
Based on the findings, the following were recommended:
• Conduct booster sessions on screening protocols with all screening staff, and provide coaching from field supervisors for new staff.
• Continue using the mobile van for syphilis screening to reach at-risk MSM, with the following recommendations:
– Before using the van in residential areas, obtain permits in advance to locate the van. It is important to meet with neighborhood leaders to make them aware of the magnitude of the outbreak and the importance of conducting screening. Build a relationship with them to gain access to and acceptability in the community, and request their input on where/when to place the van.
– Increase the number of screening staff with commercial driver's licenses by giving those interested time to obtain the training and license, and incentives for doing so (e.g., acknowledgement at a staff meeting).
– Have a consistent schedule for the mobile van so that clients can locate the van easily to obtain results.
– Make sure that the waiting time for screening is less than 30 minutes.
• Keep strengthening the relationship with the bathhouse manager so screening activities can continue.
• Since gay bars do not seem to be the most successful places for syphilis screening, keep providing them with prevention materials and explore other venues, such as "circuit parties."
• Develop monthly schedules in advance, including the exact times at which screening activities will be held, so that screening staff can make arrangements to work after hours, if needed.
• Consult with the communication or health education specialists within the health department and CBOs to develop attractive material to advertise screening times/places in the gay media and establishments, as well as places that MSM tend to visit.

Step 6: Sharing Lessons Learned and Ensuring Use of Findings
The evaluation findings were shared with pertinent audiences, and some of the evaluation recommendations have been implemented by the STD program and other stakeholders. The following shows who received the evaluation results and in which format, how the STD program ensured that the evaluation results would be used for decision making, and which decisions have been implemented.
1.) Who received information on the evaluation results, and in which format?
• HD and STD director (executive summary and full evaluation report)
• STD program staff (executive summary and oral presentation)
• CBOs (executive summary and oral presentation)
• MSM leaders, represented in the stakeholder group (oral presentation and fact sheet)
• MSM community (fact sheet)
• CDC (oral presentation at the National STD Conference)
• NCSD (executive summary, fact sheet)
• Businesses (i.e., bathhouses, gay bars) and parade organizers (oral presentation, executive summary, and fact sheet)
2.) How were stakeholders kept informed on the evaluation?
• Regular monthly meetings
• E-mail
• Final report
3.) What steps were taken to ensure use of the evaluation findings?
• Stakeholders helped draft recommendations based on the evaluation findings.
• The STD director proposed recommended changes to HD management, MSM leaders, and CBOs.
• Follow-up meetings were conducted with those who could make decisions regarding the implementation of syphilis screening in different venues.
4.) How were evaluation findings used?
• Days and times of mobile van screening were adjusted to meet increased demand at peak times for two venues.
• One venue (i.e., gay bars) was discontinued as a result of the analysis of the volume of positive test results.
• As a result of discovering that the mobile van driver needed a commercial license, the STD program identified several staff willing to drive the van and arranged commercial driver training for those staff. Four staff subsequently received their commercial driver's license.
• The STD program revised the plan to incorporate meetings to advise local law enforcement about the mobile van activities.

APPENDIX C: Sample Logic Models of STD Programs

CALIFORNIA DHS/STD CONTROL BRANCH AND CALIFORNIA STD/HIV PREVENTION TRAINING CENTER
Goal: To reduce the prevalence of STDs among HIV+ MSM in California

IDAHO DEPARTMENT OF HEALTH AND WELFARE, STD/AIDS PROGRAM: LOGIC MODEL FOR SYPHILIS REPORTING SYSTEM
Goal: Improve the quality of syphilis interviewing and reporting

FORSYTH COUNTY'S SYPHILIS ELIMINATION PROJECT (NORTH CAROLINA)
Goal: To reduce the incidence of syphilis among high-risk African American males and females, 18-45 years of age, in Forsyth County.

MICHIGAN DEPARTMENT OF COMMUNITY HEALTH, STD PROGRAM
Goal: Increase chlamydia screening in females 15-24 years old at an emergency department (ED) in Detroit.
APPENDIX D: Sample Evaluation Plans of STD Programs

CALIFORNIA DHS/STD CONTROL BRANCH AND CALIFORNIA STD/HIV PREVENTION TRAINING CENTER
Susan Watson, MPH, and Michael McElroy, MPH
Evaluation Plan for California STD Toolkit

EVALUATION COMPONENT / ACTIVITY
STD program activity to be evaluated:
• Toolkit for clinical providers of HIV+ MSM aimed at facilitating an increase in routine STD screening and awareness of health needs.
Stakeholders (agency) involved in the evaluation:
• Program manager, STD program staff, CDC/AED, medical advisory board, MSM health service providers and clinic staff, DIS staff, consultants (Better World Advertising), STD director, Office of AIDS Prevention chief, DIS chief, clients/patients, and community members.
Rationale for selecting the program activity:
• STDs increase the risk of acquisition and transmission of HIV, but adherence to the STD screening guidelines for MSM is inconsistent among clinical providers. By creating and distributing a toolkit with relevant reference materials (e.g., risk assessment guidelines, screening guidelines) to clinical providers of HIV+ MSM, there should be an increase in awareness of the need for, and ultimately the practice of, routine STD screening.
Purpose of the evaluation:
• To evaluate the implementation and effectiveness of the MSM Toolkit.
Program goal(s) and objectives to be addressed through the evaluation (objectives marked with "*" will be addressed in this evaluation):
GOAL: To reduce the prevalence of STDs among HIV+ MSM in California.
Process objectives*:
• By June 2006, project staff will have developed provider reference materials on STD screening recommendations and the sexual health of HIV+ MSM.
• By September 2006, project staff will distribute the MSM Toolkit to a sample of clinical providers caring for HIV+ MSM in California to pilot the intervention (approximately 50-60 across 4 local health jurisdictions).
• By December 2006, project staff will revise the MSM Toolkit based on feedback from the providers who participated in the pilot.
Short-term outcome objectives*:
• By November 2006, the clinical providers given a Toolkit will report in a post-toolkit assessment questionnaire increased awareness of the need for STD screening among HIV+ MSM, from Y% to Z%.
• By November 2006, the clinical providers given a Toolkit will report in a post-toolkit assessment questionnaire increased awareness of the health needs of MSM, from Y% to Z%.
Intermediate outcome objectives:
• By (month/year), there will be an increase in routine screening for STDs in HIV+ MSM among clinical providers given the pilot version of the Toolkit, from Y% to Z%.
Long-term outcome objectives:
• By (month/year), the prevalence of STDs among HIV+ MSM will decrease from Y% to Z%.
Logic model:
• See Attachment A.
Individuals and roles on the evaluation team; users and uses of the evaluation findings; approach to disseminating the evaluation findings to appropriate users:
• Program evaluator: Oversees and leads all evaluation activities.
Attach logic model.
See Attachment A.

List individuals and roles on the evaluation team.
• Program evaluator: Oversees and leads all evaluation activities.
• Project manager: Oversees all project activities; develops timeline; determines clinic sites and jurisdictions; supervises data collection and handling; reviews all components of the evaluation and final report; disseminates findings.
• Project assistant: Develops toolkit instruments in consultation with the project manager and other program staff; assists the program evaluator with evaluation activities; maintains files of completed evaluation tools; conducts data entry of evaluation data; analyzes data.
• STD program staff: Assist with project and evaluation activities as needed.
• CDC/AED: Provides guidance and assistance throughout the evaluation process.

List the users and uses of the evaluation findings.
• Implementers (Program manager, STD project staff, MSM health service providers and clinic staff, DIS, CDC/AED): Determine the effectiveness of the Toolkit in changing awareness and screening practices of clinical providers. Use evaluation findings to improve the Toolkit and how it is distributed; plan future activities; allocate resources; and increase the capacity of the advisory board to promote the Toolkit. CDC/AED will use the evaluation to measure the effectiveness of, and refine, the evaluation tools and to publish results.
• Decision makers (STD chief and program manager, Office of AIDS prevention chief): Ensure that HIV+ MSM are receiving appropriate STD services and that clinical providers are adhering to recommended STD screening guidelines. Use evaluation findings to plan future activities; allocate future funding; and inform program and policy changes.
• Partners (Medical advisory board): Improve clinical practice and decisions.

List the approach to disseminating the evaluation findings to appropriate users.
• Funders: Presentation and/or written report (including an executive summary)
• Other STD staff and programs: Presentation and/or report
• MSM health service providers and clinical staff: Report and/or presentation
• Advocacy group: Report and/or presentation
• Scientific community/CDC: Manuscript publication

Attach the timeline for completing the evaluation.
See Attachment B.

Attach the evaluation budget.
N/A

ATTACHMENT A: LOGIC MODEL, STD TOOLKIT
Goal: To reduce prevalence of STDs among HIV+ MSM in California

ATTACHMENT B: TIMELINE FOR STD TOOLKIT

IDAHO DEPARTMENT OF HEALTH AND WELFARE, STD/AIDS PROGRAM
Annabeth Elliott, RN
Evaluation Plan for Syphilis Reporting

EVALUATION PLAN NARRATIVE

Activity to be evaluated: The syphilis reporting system and the interviews conducted to elicit partners from reported syphilis cases.

Stakeholders involved in the evaluation: Idaho Dept. of Health and Welfare STD/AIDS Program; Idaho Dept. of Health and Welfare Office of Epidemiology and Food Protection (OEFP); CDC.

Rationale: By determining the barriers and facilitators to effective interviewing and reporting of syphilis, stakeholders will make appropriate decisions to improve the system.

Purpose of the evaluation: To evaluate the existing system of reporting and interviewing for fidelity and timeliness, and to determine the barriers and facilitators of prompt, complete, and accurate submittal of syphilis reports to CDC.

Program goals and objectives:
Goal: Improve the quality of syphilis interviewing and reporting
• Objective #1: By 12/31/2005, at least 90% of syphilis cases will be confidentially interviewed by a district epidemiologist to thoroughly elicit partners within 30 days. (CDC PM: Proportion of P & S syphilis cases interviewed within 7, 14, and 30 calendar days from date of specimen collection; number of associates and suspects tested, per case of P & S syphilis; number of associates and suspects treated for newly diagnosed syphilis, per case of P & S syphilis.)
• Objective #2: By 12/31/2005, at least 90% of syphilis cases will be brought to treatment by a district epidemiologist within 30 days. (CDC PM: Number of contacts prophylactically treated, or newly diagnosed and treated, within 7, 14, and 30 calendar days from the day of interview of the index case, per case of P & S syphilis.)
• Objective #3: By 12/31/2005, on at least 90% of charts, district epidemiologists will document all communication, education, treatment, and case management of high-priority STDs according to the OEFP contract guidelines.
• Objective #4: By 12/31/2005, every month district epidemiologists will access current data on local and statewide syphilis epidemiology provided by the OEFP.
• Objective #5: By 12/31/2005, at least 90% of syphilis cases will receive confidential risk reduction counseling within 30 days.
• Objective #6: By 12/31/2005, improve to 60% the proportion of reported cases of P & S syphilis, EL syphilis, and congenital syphilis sent to CDC via NETSS that have data for age, race, sex, county, and date of specimen collection.
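Several of the objectives above are tied to CDC performance measures that are simple proportions computed over defined time windows. Below is a minimal sketch, in Python, of how the timeliness measure in Objective #1 (proportion of P & S syphilis cases interviewed within 7, 14, and 30 calendar days of specimen collection) might be tabulated; the record layout and dates are hypothetical and purely illustrative, not part of the Idaho reporting system. The same tally, applied to contact treatment dates, would serve the measure cited in Objective #2.

# Illustrative only: proportion of cases interviewed within 7, 14, and 30 days of specimen collection.
from datetime import date

cases = [
    {"specimen_collected": date(2005, 3, 1),  "interviewed": date(2005, 3, 6)},
    {"specimen_collected": date(2005, 4, 10), "interviewed": date(2005, 4, 30)},
    {"specimen_collected": date(2005, 5, 2),  "interviewed": None},  # not yet interviewed
]

for window in (7, 14, 30):
    within = sum(
        1 for c in cases
        if c["interviewed"] is not None
        and (c["interviewed"] - c["specimen_collected"]).days <= window
    )
    print(f"Interviewed within {window} days: {within}/{len(cases)} ({100 * within / len(cases):.0f}%)")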
Logic Model: See Attachment C.

Individuals and roles on the evaluation team:
• Annabeth Elliott, STD Program Specialist: conducts the evaluation at the Idaho STD Program.
• CDC Evaluation Team:
– Yamir Salabarría-Peña, Evaluation Project Team lead: oversight of evaluation activities; TA; site visit to assist with data collection.
– Richard Sawyer: provides TA, including drafting evaluation questions, indicators, data sources, and data collection methods; will assist with data collection during the site visit.
– Stacey Little: coordinates TA; will assist with data collection during the site visit.
• District Epi staff: answer evaluation questions and provide feedback.
• OEFP staff: answer evaluation questions and provide feedback.

Users of the evaluation findings:
• District Health Dept. Directors (decision makers): written report with verbal follow-up.
• Other STD programs (partners): panel at the National STD Conference; brief discussion at the IPP Conference and in the Thursday report.
• Epi staff and managers (implementers): written report and possibly a presentation at an Epi conference. Will use evaluation findings to improve syphilis interviewing and reporting.
• CDC (funders): written report.

Timeline: See Attachment D.
Budget: See Attachment E.

ATTACHMENT C: IDAHO DEPARTMENT OF HEALTH AND WELFARE, STD/AIDS PROGRAM, LOGIC MODEL FOR SYPHILIS REPORTING SYSTEM
Goal: Improve the quality of syphilis interviewing and reporting

ATTACHMENT D: TIMELINE

ATTACHMENT E: BUDGET

FORSYTH COUNTY'S SYPHILIS ELIMINATION PROJECT (NORTH CAROLINA)
Monica Brown, MPH; Kawanna Glenn, BS; Monica Melvin, BS; Chantha Prak, BS; Lumbe Davis, MPH
Evaluation Plan for Outreach Component of Forsyth County's Syphilis Elimination Project

EVALUATION PLAN NARRATIVE

STD program component to be evaluated: Community outreach efforts of the Forsyth County Health Department's Syphilis Elimination Project (SEP).

Stakeholders involved: SEP and NTS program staff, DIS, STD Director, STD Coordinator, POSSE Task Force, STD clinical staff, Step One Substance Abuse Services (CBO), health commissioners, CDC evaluation team, AED technical assistance team, and service providers.

Rationale for selecting the program activity: Syphilis outreach efforts are instrumental in building rapport with the community in order to educate, increase awareness, and facilitate behavior change. Consistent and effective outreach should enhance risk reduction behaviors, increase risk perception, and lead to more screenings within the community.

Purpose of the evaluation: To evaluate the effectiveness of current outreach in high-risk areas of Forsyth County.

Goals and objectives to be addressed:
Goal: To reduce the incidence of syphilis among high-risk male and female racial/ethnic minorities 18-45 years of age in Forsyth County, NC.
Process objectives:
1. Between October 2005 and June 2006, program staff will provide an average of 35 health education contacts/communications per month to males and females of the target population.
2. Between October 2005 and June 2006, program staff will distribute an average of 35 safe sex kits (containing brochures, condoms, and testing information) per month to males and females of the target population.
3. Between October 2005 and June 2006, program staff will implement an average of 20 community outreach events in target zip codes 27101, 27105, and 27107.

Logic Model: See Attachment F.

Individuals and roles on the evaluation team:
• Health Promotions Director: Oversees all health promotion activities.
• STD/HIV Director: Supervises SEP and NTS program staff and oversees all NTS and SEP activities. Collects data, maintains records, analyzes monthly reports, and assists with outreach efforts upon request.
• NTS and SEP program staff: Provide education, counseling, and screenings. Conduct community outreach, collect data, develop the evaluation tool, and determine outreach venues based on statistics and DIS reports.
• AED and CDC: Provide assistance in developing the evaluation plan, goals, and objectives, and will help with data collection and analysis.
• DIS: Provide assistance in determining outreach locations. Provide follow-up interviews for syphilis-positive cases. Possibly collect outreach data.

Users and uses of evaluation findings:
• Implementers (STD/HIV Director, SEP and NTS program staff): Determine the effectiveness of community outreach efforts in the target zip codes and population. Based on findings, changes will be implemented to improve outreach efforts and to provide information to stakeholders on appropriate methods to access target populations.
• Decision makers (NC SEP Program, Health Promotions Director, STD/HIV Director, SEP and NTS program staff, funders, health commissioners): Provide support for changes made to outreach efforts; change outreach to ensure its effectiveness.
• Partners (POSSE Task Force, Step One Substance Abuse Services, service providers, STD clinical staff, and DIS): Provide ideas for modifications to outreach efforts.

Approach to disseminating the evaluation findings:
• Written report: To be used by funders, the Forsyth County Health Department, CDC, and the NC SEP Program.
• Presentations at conferences and local meetings: POSSE Task Force, CBOs, NC SEP programs, county health departments, faith-based organizations, service providers, community members, and Forsyth County Health Department staff.

ATTACHMENT F: LOGIC MODEL, OUTREACH COMPONENT OF FORSYTH COUNTY'S SYPHILIS ELIMINATION PROJECT
Goal: To reduce the incidence of syphilis among high-risk African American males and females, 18-45 years of age, in Forsyth County.

MICHIGAN DEPARTMENT OF COMMUNITY HEALTH, STD PROGRAM
Kristine Judd, BSPH and Bruce Nowak, BS
Evaluation Plan for CT Activity in an Emergency Department

EVALUATION PLAN NARRATIVE

Program activity: Determine whether there is an existing protocol for Chlamydia screening in the Emergency Department of St. John Hospital and evaluate adherence to, and barriers to, the protocol. Implement universal screening of 15-24 year old females to determine the level of infection that was previously missed.
Stakeholders:
• Mark A. Miller, STD Director
• Kristine Judd, STD Administrative Program Manager
• Bruce Nowak, STD Surveillance Supervisor
• Yamir Salabarría-Peña, Dr.P.H., MPH, Health Scientist/Evaluation Specialist, CDC
• Richard Sawyer, Ph.D., Senior Program and Evaluation Manager, AED
• Susan Rogers, Ph.D., Senior Research and Evaluation Advisor, AED
• Karen Lighheart and Alana Thomas, STD DIS, Surveillance
• Detroit Health Department STD DIS
• James Rudrik, Ph.D., Microbiology Section Manager, MDCH Bureau of Laboratories
• Dr. Southall, Director, Emergency Department, St. John Hospital
• Dr. Charlene Irvin, Research Director, St. John Hospital
• Medical students/residents, St. John Hospital

Rationale for selecting the program activity: In Michigan, Chlamydia prevalence is highest among those ages 15-19 and 20-24, with rates of 1,906 and 2,406 per 100,000 population, respectively, in 2004. Additionally, screening conducted at adolescent venues (school-based clinics, juvenile detention facilities, and teen health centers) shows high positivity, up to 24% in females and 21% in males. Among the school-based clinics studied, 49% of the students who tested positive for Chlamydia accessed services for reasons other than an STD check.

Purpose of the evaluation: This evaluation will examine the implementation of a revised CT screening protocol at the facility and, for a 6-month period, offer universal screening to females ages 15-24 accessing services in the St. John Hospital Emergency Department. This facility was chosen because it is located in SE Michigan, a high-morbidity area. Results will be analyzed to determine how many cases of Chlamydia would have gone undetected had the traditional screening protocol been followed.
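A minimal sketch, in Python, of the comparison described in the purpose statement above: counting how many Chlamydia infections detected under universal screening would have been missed had only patients meeting the traditional (risk-based) screening criteria been tested. The record layout and values are hypothetical and purely illustrative, not part of the St. John protocol.

# Illustrative only: cases found by universal screening that traditional criteria would have missed.
screened_patients = [
    {"met_traditional_criteria": True,  "ct_positive": True},
    {"met_traditional_criteria": False, "ct_positive": True},   # would have been missed
    {"met_traditional_criteria": False, "ct_positive": False},
    {"met_traditional_criteria": True,  "ct_positive": False},
]

total_positive = sum(1 for p in screened_patients if p["ct_positive"])
missed = sum(1 for p in screened_patients
             if p["ct_positive"] and not p["met_traditional_criteria"])

print(f"Positive cases detected by universal screening: {total_positive}")
print(f"Cases that would have gone undetected under the traditional protocol: {missed}")

In practice, the same counts would be drawn from the six months of ED screening records rather than a hard-coded list.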
Program goal and objectives to be addressed through the evaluation:
Goal: By November 1, 2006, the St. John ED will fully adopt a protocol to universally screen all 15-24 year old females for Chlamydia.
Process objectives:
• By January 10, 2006, Michigan STD (MSTD) will identify an emergency department for the pilot evaluation of a Chlamydia screening protocol.
• By January 13, 2006, MSTD will establish a partnership with the emergency department at St. John's Hospital in Detroit.
• By January 13, 2006, MSTD will meet with CDC/AED to establish the evaluation timeline.
• By January 13, 2006, MSTD and CDC/AED will delineate roles and responsibilities for the evaluation.
• By February 15, 2006, MSTD will meet with the site to discuss the evaluation process, delineate roles and responsibilities, and gather existing policies/procedures on CT screening.
• By May 15, 2006, MSTD will develop the data collection instrument to be used by the Emergency Department (ED).
• By June 1, 2006, MSTD will provide the ED with all materials (laboratory) and training on the procedure to collect and submit specimens to the MDCH laboratory.
• By July 1, 2006, MSTD will conduct a site visit and chart review to assess adherence to the revised protocol.
• By November 1, 2006, MSTD will finalize data collection and forward the data to CDC and AED.
Short-term outcome objectives:
• By May 1, 2006, St. John's ED will accept the revised protocol to universally screen all 15-24 year old females for Chlamydia.
• By June 1, 2006, 60% of ED staff at St. John's will have increased awareness of Chlamydia prevalence among the target population, as measured by a post-training evaluation.
Intermediate outcome objectives:
• By May 15, 2006, St. John's ED, as part of the protocol, will submit specimens to the MDCH regional laboratory in Detroit.
• By September 20, 2006, St. John's will achieve 80% adherence to the revised screening protocol.
Long-term outcome objectives:
• By November 1, 2006, St. John's ED will fully adopt a protocol to universally screen females ages 15-24 for Chlamydia.

Logic Model: See Attachment G.

Users and uses of the evaluation findings:
• The Michigan STD Program will use the evaluation findings to inform resource allocation for future Chlamydia screening and to advocate for increased screening in other venues.
• The St. John ED will use the results of this evaluation to determine whether to make a permanent adjustment to its Chlamydia screening criteria.
• CDC/AED will measure the effectiveness of the evaluation tools, refine them accordingly, and publish results related to the pilot-testing process and the actual evaluation.

Approach to disseminating the evaluation findings to appropriate users:
• CDC/AED: written report, publications
• St. John: written report and oral presentation
• MDCH/STD: written report
• Michigan IPP: written report and oral presentation

Timeline for completing the evaluation:
• Build partnerships with the ED: ongoing
• Define/delineate roles and responsibilities for the evaluation: March 2006
• Provide/deliver resources: May-October 2006
• Provide TA to the ED on the purpose of the evaluation: ongoing
• Develop data collection instrument for use by the ED: May 2006
• Conduct regular meetings with the ED: monthly
• Evaluate current ED protocol: February 2006

ATTACHMENT G: MICHIGAN DEPARTMENT OF COMMUNITY HEALTH, STD PROGRAM
Goal: Increase Chlamydia screening in females 15-24 years old at an emergency department (ED) in Detroit.