
A Guide to Developing a TB Program Evaluation Plan

This is an archived document. The links and content are no longer being updated.

Text for PowerPoint Slides

November 14, 2005
2:00 p.m. CST

Slide 0: Introduction to Program Evaluation

Anne Powers, PhD
Battelle Centers for Public Health
Research and Evaluation

Speaker Notes:

Welcome to an Introduction to Program Evaluation. Thank you for joining us.

The course that I’m kicking off today is divided into four sessions, each lasting between one and two hours. They’re designed to give you an overview of program evaluation. This course is designed for people new to the field of program evaluation and is based on CDC’s Framework for Program Evaluation.

The first half of the session will be lecture, followed by time for questions.

I’m here today to present the first session; the second session will be this Thursday, and additional sessions will be presented over the course of the next couple of months.

Other logistics: The webinar format limits how much interaction we can have during the presentation. While I’m talking, we can’t hear anyone on the phone. You can type in questions that the entire group will see. If anyone wants to do that, I’ll stop periodically and answer those questions. After I finish the presentation, everyone’s microphone will be on and we can hear the folks on the phone and that will be the opportunity to answer questions.

Return to Slide Text Table of Contents

Slide 1: What You Will Learn From These Sessions

  • Session 1
    • Become familiar with the 6 steps of the CDC Evaluation Framework
    • Learn to identify and engage important stakeholders (Step 1)
  • Session 2
    • Develop a simple logic model (Step 2)
    • Learn to focus the evaluation (Step 3)
  • Session 3
    • Learn to select appropriate data collection methods for your evaluation questions (Step 4)
    • Learn to analyze and interpret data and findings (Step 5)
  • Session 4
    • Understand how evaluation findings can be used (Step 6)

Speaker Notes:

Through the 4 sessions you will learn the fundamental concepts used in program evaluation.

You will become familiar with the 6 steps of the CDC Evaluation Framework. Although there are other frameworks and models for evaluation, we have chosen this framework because it is well researched, commonly used, flexible, and can be applied to any type of health or human service program. It is also relatively straightforward and easy to learn. Finally, CDC has developed a number of products using the framework, so there are many additional resources to help when using it.

CDC has helped to coordinate a TB Evaluation Working Group that includes CDC staff, state program staff, evaluation experts, and others who are working on a number of products specifically designed to help TB program staff with their evaluations. This series of web meetings is one of those tools; others are on their way.

In the first session (today), we will explore the steps of the framework and learn to identify and engage stakeholders (Step 1).

In session 2, we will discuss Steps 2 and 3 of the framework in detail and develop a simple logic model. That session will also explain the importance of logic models and how to use them. It will occur on Thursday at 1 p.m., presented by Tom Chapel.

The last two sessions are tentative at the moment. Our original plan was to cover Steps 4 and 5 in session 3, where we would learn to identify evaluation questions and their appropriate indicators, select appropriate data collection methods, and analyze data and interpret findings, and then in the fourth session to talk about how evaluation findings can be used. However, we are hearing that we may need to expand this and offer other topics, so if you have any ideas or suggestions, we welcome them; please let Maureen Wilce know.

Throughout the sessions, we’ll provide an overview of the CDC Framework but also apply the lessons to TB program examples.

Return to Slide Text Table of Contents

Slide 2: Session 1

  • Become familiar with the 6 steps of the CDC Evaluation Framework
  • Learn to identify and engage stakeholders (Step 1)

Speaker Notes:

In session 1, we will go over the overall CDC Evaluation Framework, introduce the six steps and talk about the rationale for the Framework. Then we’ll learn about Step 1 – engaging stakeholders.

Return to Slide Text Table of Contents

Slide 3: Session 1

You will learn…

  • What program evaluation is
  • Why evaluation is important
  • The steps in planning and conducting evaluations as outlined in the CDC Framework for Program Evaluation
  • What standards exist for program evaluation
  • How to identify and engage stakeholders (Step 1 of the Framework)

Speaker Notes:

There are five learning objectives for today’s session.

You will learn about…

  • The basics of what program evaluation is and what it is not, including the differences between evaluation and research, evaluation and surveillance, and evaluation and monitoring
  • Why evaluation is important, what can be evaluated, and what it means to do utilization-focused evaluation
  • The steps in planning and conducting evaluation as outlined in CDC Framework for Program Evaluation
  • The standards for conducting program evaluation that are also part of the framework, and how the standards are important and useful in conducting evaluations
  • How to identify and engage stakeholders – that’s Step 1 of the framework

And after the instruction, we will have an interactive exercise to illustrate how evaluators identify and engage stakeholders. This exercise will illustrate the importance of this step.

Return to Slide Text Table of Contents

Slide 4: What is Evaluation?

“…the systematic investigation of the merit, worth, or significance of an ‘object’…”
Michael Scriven

“…the systematic assessment of the operation and/or outcomes of a program or policy, compared to a set of explicit or implicit standards as a means of contributing to the improvement of the program or policy…”
Carol Weiss

“A systematic way to determine the ‘value’ of a program, program components, or activity.”
Source unknown

Speaker Notes:

What is evaluation?

Here are three perspectives.

According to Michael Scriven, evaluation is “the systematic investigation of the merit, worth, or significance of an ‘object’.”
Carol Weiss says it is “the systematic assessment of the operation and/or outcomes of a program or policy, compared to a set of explicit or implicit standards as a means of contributing to the improvement of the program or policy.”

Another unknown source said that evaluation is “a systematic way to determine the “value” of a program, program components or activity.”
Note that evaluation, as these definitions indicate, is “systematic,” which means planned, disciplined, and objective. Writing evaluation plans is part of systematizing evaluation. Another important word is “value”: unlike research, evaluation cannot be done without judgments of value. Evaluation can mean different things in different situations, and as a discipline it allows a lot of flexibility.

The broad definition of evaluation that we’re using, and that you’ll find in CDC’s Introduction to Program Evaluation for Public Health Programs, is “The systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development.”

Return to Slide Text Table of Contents

Slide 5: Research vs. Evaluation

Research > Systematic Methods > Evaluation

Research

  • Production of generalizable knowledge
  • Researcher-derived questions
  • Paradigm stance
  • More controlled setting
  • Clearer role
  • Published
  • Clearer allegiance

Evaluation

  • Knowledge intended for use
  • Program- or funder-derived questions
  • Judgmental quality
  • Action setting
  • Role conflicts
  • Often not published
  • Multiple allegiances

Speaker Notes:

Now that we’ve talked about what evaluation is, let’s review what evaluation isn’t. For example, research…

Both evaluation and research employ methods that are systematic in their investigations. However, there are fundamental differences between the two.
While research generally is conducted to produce knowledge that is generalizable across different programs, evaluations are conducted to generate findings that are intended for use by the specific programs under which evaluations are conducted.

In research, the questions for investigation are researcher-derived. Questions for evaluation, on the other hand, are derived from the program itself or its stakeholders.

Evaluation is “value” based, and thus judgmental in quality. Unlike evaluation, research is usually conducted without judgments and biases. For this reason, research is normally done in a more controlled setting to prevent bias. Since evaluation findings are to be used for program improvements, evaluation is conducted in a real-world setting and takes into account the environment and the context. In research, the role of the researcher is more clearly defined. Evaluators, on the other hand, may often have role conflicts, as they may be a part of the program in which the evaluation is being conducted.

Because research and evaluation are conducted for very different reasons – research seeks to add to generalizable knowledge, whereas evaluation seeks to generate information useful for improving a specific program – research findings are usually published, whereas evaluation findings are often not. For the same reasons, research has a clearer allegiance, while evaluation has multiple allegiances, as it can impact more people and its results are used by more people. For example, TB research findings can be used to develop initiatives that are implemented in a real-world setting. Those initiatives can then be evaluated using program evaluation that takes the setting and context into consideration. If the evaluation finds that an initiative is not successful, it may be that the research was fine but did not work in the program’s context, or that the initiative was not implemented properly.

Return to Slide Text Table of Contents

Slide 6: “Research seeks to prove, evaluation seeks to improve…”

M.Q. Patton

Speaker Notes:

To put it simply, although research and evaluation are very similar and both use systematic methods, their intents and purposes differ. As Patton says, “Research seeks to prove, evaluation seeks to improve…” Evaluation involves looking closely at the operations of a program or program initiative and, with the understanding gained from this examination, making recommendations for improving the program. It’s important to always keep this difference in mind, and remember that your intention in doing evaluation is to help improve programs, not to blame, criticize, or seek to eliminate something.

Return to Slide Text Table of Contents

Slide 7: If the Goal of Evaluation is…

… to improve a program

Then no evaluation is good unless findings are used to make a difference

Speaker Notes:

The goal of evaluation is to improve a program; thus no evaluation is good unless the results are used.

Oftentimes, when an evaluation is completed, reports and papers are written presenting the findings. However, these reports often stay on the bookshelves. If this is the case, then the findings are not used to make changes that can really improve the delivery and outcomes of the program.

It’s important to always keep in mind how the evaluation can be and will be used once completed. And always plan and strategize ways that would encourage and facilitate the use of the evaluation findings. When I talk about the CDC framework, you’ll see that one of the most important parts of it is the dissemination and utilization of the evaluation results.

Return to Slide Text Table of Contents

Slide 8: Surveillance & Monitoring vs. Program Evaluation

  • Surveillance - tracks disease or risk behaviors
  • Monitoring - tracks changes in program outcomes over time
  • Evaluation - seeks to understand specifically why these changes occur

Speaker Notes:

Surveillance and monitoring are terms frequently used in public health and they are often confused with evaluation.

The term, surveillance, typically refers to the tracking of disease or risk behaviors. TIMS (TB Information Management System) is an example.
Evaluation is also more specific than surveillance – it is designed to look at one program during a specific point in time.

Monitoring, on the other hand, is used for tracking changes in program outcomes over time.

Both are used for performance measurement, and both may also be used in program evaluation. However, evaluation stands apart from surveillance and monitoring in that it seeks to understand why these changes occur.

Return to Slide Text Table of Contents

Slide 9: What Can be Evaluated?

  • Direct service interventions
  • Community mobilization efforts
  • Research initiatives
  • Surveillance systems
  • Policy development activities
  • Outbreak investigations
  • Laboratory diagnostics
  • Communication campaigns
  • Infrastructure-building projects
  • Training and educational services
  • Administrative systems

– MMWR, 1999 Framework for Program Evaluation in Public Health

Speaker Notes:

Any program or program initiative can be evaluated and most programs or program activities can benefit from evaluation.

The various types of public-health-related activities and initiatives that can be evaluated are listed here. The range extends from services and interventions to organizational structures, and it’s important to note how wide the range really is.

The range of activities is important for TB programs. Many of the activities listed here are undertaken by the programs and may be prime candidates for the evaluation you’re planning right now. Note that many of these items are not full programs but initiatives meant to address specific areas where a program may fall short. Most likely this will be the case with your program: you’ll be identifying an initiative or part of your program to evaluate, rather than the entire program.

Return to Slide Text Table of Contents

Slide 10: When to Conduct Evaluation?

Conception: Planning a new program > Assessing a developing program > Assessing a stable, mature program > Assessing a program after it has ended (Completion).

The stage of program development influences the reason for program evaluation.

Speaker Notes:

We now know the purpose of the evaluation and ways evaluation can be used. You are probably wondering when evaluation should be conducted…

Traditionally, most evaluations occurred at the END of a program, when it’s already too late to make any changes. To be most effective, beneficial, and instrumental to the program, evaluation can be done at all phases of the life of a program. Evaluation can be conducted at any stage of program development, and the stage of development influences the reasons for evaluation and the type of information gathered.

In the conception phase of a NEW program, evaluation can be used to assess motivation for starting a new program, define the need for the program, and inform decision makers about what a new program should look like: its objectives and expectations. It can help clarify important factors that are essential in laying down the foundation for a program.

In a DEVELOPING program, evaluation can be used to monitor program progress, and re-define program activities and expectations. It can identify potential gaps or problems early, so steps can be taken to resolve them.

In a STABLE and MATURE program, evaluation findings can be used to monitor the program and provide feedback to management. Evaluation can help show program successes as well as areas for improvement. Outcomes can also be assessed for mature programs because enough time has passed to allow this.

At the END of a program, evaluation helps assess the value of the program as well as document lessons learned for the future. In this phase, evaluation assesses what intervention was transferred and adopted, and if the change was sustained. Unexpected outcomes can also be assessed. At this time, you might also evaluate the possibility that an intervention could be replicated in another setting and what factors would encourage this.

For TB programs, most are likely to be stable and mature. Yet an initiative that you’re working on could be new or developing if, for example, you’ve elected to target a new population with an emerging TB problem or you’re trying to promote a treatment protocol among health care providers.

Return to Slide Text Table of Contents

Slide 11: Why Evaluate Programs?

  • To gain insight about a program and its operations – to see where we are going and where we are coming from, and to find out what works and what doesn’t
  • To improve practice – to modify or adapt practice to enhance the success of activities
  • To assess effects – to see how well we are meeting objectives and goals, how the program benefits the community, and to provide evidence of effectiveness
  • To build capacity - increase funding, enhance skills, strengthen accountability

Speaker Notes:

As I mentioned, why you evaluate a program will change over the life of the program. Among the many reasons to evaluate, there are 4 basic ones.

<TO GAIN INSIGHT>
We evaluate a program to judge its merit or gain insight about the program and its operations. This might include providing information concerning the practicality of a new approach or developing program; and to find out where the program is and where it is heading. Evaluation helps us assess where we are in program development and helps identify information we need to plan for our next steps. It provides us with information for better decision making, and allows us to manage resources and services more effectively. For example, an evaluation of testing and treatment for LTBI might reveal how many people in high-risk and low-risk groups are being tested. This information could inform decisions about reducing testing in low-risk populations to save money.

<TO IMPROVE PRACTICE>
We also evaluate programs for the purpose of making improvements or changing practices. We can use evaluation to improve or enhance what we do, which is appropriate in the implementation stage, when an established TB program seeks to describe what it has done. It tells us what we are lacking and where we need to focus our efforts. Evaluation provides us with a means to understand why we achieved our successes, or why we did not meet our objectives. It provides us with the information we need to strategize, plan, and implement initiatives that enhance the effectiveness of our TB control program. It is important to note that evaluation requires that we examine factors objectively, both inside and outside of our program, for the causes of our performance. Understanding these factors allows us to make better decisions, implement change where appropriate, and improve upon what we have accomplished.

<TO ASSESS EFFECTS>
We also evaluate programs to assess their effects. We conduct evaluation to see how well objectives and goals are being met. Evaluation findings can show how effective the intervention was at inducing the intended changes or effects. They help us demonstrate the value of our efforts. Evaluation documents what we do and systematically shows the contributions toward accomplishing our goals. This information can help decision makers at all levels understand the benefits and consequences of what they are doing. At crucial times, findings from evaluation help us advocate for the cause and leverage support.

<TO BUILD CAPACITY>
Lastly, we use program evaluation as a method for building program capacity. The process is effective for programs to make self-directed changes, such as to increase funding, develop skills, build infrastructure and/or strengthen accountability. For example, evaluation of contact investigations in a community of recent immigrants might reveal a need to translate informational materials into a new language to enhance the program’s capacity to serve this emergent group. Evaluation also builds on itself – as we learn and gain experiences in conducting evaluations, we also build evaluation capacity for our program, and increase program capacity for self-directed improvements. Finally, evaluation can strengthen accountability by allowing us to demonstrate that we are responsible stewards of the program’s funding and resources.

Return to Slide Text Table of Contents

Slide 12: CDC Framework for Program Evaluation

Steps: Engage stakeholders > Describe the program > Focus the evaluation design > Gather credible evidence > Justify conclusions > Ensure use and share lessons learned. Standards: Utility, Feasibility, Propriety, Accuracy.

Speaker Notes:

This picture summarizes the essential elements of program evaluation presented in CDC framework for program evaluation in public health. This framework is intended as a practical, non-prescriptive tool.

The 6 steps of program evaluation are shown in a circle representing the circular nature of the evaluation process. As depicted, the steps are inter-dependent, and they may be conducted in a non-linear order. Often more than one step is ongoing at any time. The Framework provides the flexibility to go backwards or forwards in the process depending upon the need.

Although the steps can be conducted in a different order or simultaneously, there is a reason for the way they have been sequenced: the earlier steps provide the foundation for the subsequent ones. It is important to note that each of these steps plays a role and is crucial to the process.

The circle also represents that the process of program evaluation is iterative. As program evaluation is conducted, the findings are fed back into the program to make improvements. The process repeats itself in a cycle as a means to improve the program and its effectiveness in achieving its objectives.

At the center or core of the circle are the evaluation standards. These four standards are intended to maximize the quality and the effectiveness of evaluation efforts. They are the criteria by which the quality of your program evaluation efforts can be judged. They are applicable to each of the six steps in the evaluation framework. We will learn more about these standards later.

Return to Slide Text Table of Contents

Slide 13: Steps in Program Evaluation

  • Step 1: Engage Stakeholders
  • Step 2: Describe the Program
  • Step 3: Focus the Evaluation Design
  • Step 4: Gather Credible Evidence
  • Step 5: Justify Conclusions
  • Step 6: Ensure Use and Share Lessons Learned

Speaker Notes:

Let’s walk through the 6 steps of program evaluation framework.

Step 1: Engage stakeholders

Stakeholders are those with vested interests in the program and are potentially affected by the program and its evaluations. Stakeholders can be both supporters and skeptics of the program. Engaging stakeholders in the evaluation process ensures that the evaluation will be useful. When evaluation meets the need of its intended users, the chance of the findings being used to make program improvements is greatly enhanced.

Step 2: Describe the program

This step of the evaluation framework allows us to gain a better understanding of the program and its activities. The need, the context, and the target population that the program serves are defined. Major program components (i.e., resources, activities, outputs, and outcomes) and their relationships with each other are also described in this step.

Step 3: Focus the evaluation design

Step 3 of the evaluation process is crucial, as it helps us determine a focus for the whole evaluation. When planning an evaluation, many worthwhile questions are identified. It’s easy to want to evaluate everything and collect all the information available. However, we are bound by limited resources. Thus, it’s important to prioritize and focus the effort on areas where changes can be most valuable.

Step 4: Gather credible evidence

After determining the key questions to examine in the evaluation, the next step is to gather evidence. Prior to gathering any data, indicators of program performance are identified as are benchmarks or targets that we can use to compare the indicators to. Once you know what indicators to look for, data sources and data collection methods can then be determined.

Step 5: Justify conclusions

After all the data have been collected, the next step is data analysis and interpretation. The interpretation part of this step is critical. In this phase you judge your findings against the established program benchmarks. It’s important to involve stakeholders in this step and consider the context in which the program is operating so that your conclusions are sound, reasonable, and objective. Once conclusions have been reached, recommendations can then be made if appropriate.

Step 6: Ensure use and share lessons learned

This step of the framework emphasizes the importance of using findings for program improvement once the evaluation has been completed. Disseminating the evaluation findings and sharing lessons learned help ensure that all the effort invested in the evaluation pays off and the findings are used.

Return to Slide Text Table of Contents

Slide 14: Underlying Logic of 6 Steps

  • No evaluation is good unless… findings are used to make a difference
  • No findings are used unless… a market has been created prior to creating the product
  • No market is created unless… the evaluation is well-focused, including most relevant and useful questions

Speaker Notes:

Illustrating the framework

The purpose of this slide is to reinforce the circular nature of the Framework and illustrate how the steps are interdependent.

One of the basic components of the Framework is using the results. That’s part of step 6 but it’s important to create a market for your evaluation findings. That’s why stakeholders are engaged in step 1.

Finally, part of creating a market and being responsive to it is focusing your evaluation. This is a difficult but important step involving identifying the most important things to evaluate and developing relevant and useful questions.

Return to Slide Text Table of Contents

Slide 15: Establishing the Best Focus Means

  • Framework Step 1: Who cares about our program besides us? Do they define the program and program “success” as we do?
  • Framework Step 2: What are milestones and markers on the roadmap to the main public health outcomes?

Speaker Notes:

Although focusing the evaluation is Step 3 of the Framework, Steps 1 and 2 provide information that will help with the focus. This slide is also meant to illustrate how the steps in the Framework work together to result in a useful evaluation.

For example, engaging stakeholders in Step 1 will help you to focus the evaluation by asking relevant questions of them. Do the stakeholders define the program and program success as we do? Further, what do they want to know about the program? What do they perceive to be program problems or weaknesses?

Describing your program – Step 2 of the evaluation – also helps to focus the evaluation by establishing the milestones and markers that lead to the public health outcomes you’re interested in changing or creating. Tom will talk more about Step 2 on Thursday.

Return to Slide Text Table of Contents

Slide 16: Standards for Effective Evaluation

Steps: Engage stakeholders > Describe the program > Focus the evaluation design > Gather credible evidence > Justify conclusions > Ensure use and share lessons learned. Standards: Utility, Feasibility, Propriety, Accuracy.

Speaker Notes:

Good evaluations are diligently conducted under the guidance of the standards for program evaluation provided by the Joint Committee on Standards for Educational Evaluation.

There are 4 standards for program evaluation: Utility, Feasibility, Propriety, and Accuracy.

These standards serve to guide your decision making process at each step of the Framework to ensure that the evaluation stays focused and balanced.

In the next couple of slides, we will go over each standard in depth.

Return to Slide Text Table of Contents

Slide 17: The Four Standards

  • Utility: Who needs the information and what information do they need?
  • Feasibility: How much money, time, and effort can we put into this?
  • Propriety: What steps need to be taken for the evaluation to be ethical?
  • Accuracy: What design will lead to accurate information?

Speaker Notes:

One of the primary purposes for using the standards is to help design the best evaluation for your program or situation. There is no one “right” evaluation. Instead when designing the evaluation, you make choices that best suit your particular situation so that the result is the optimal program evaluation for your program or initiative.

The four standards – utility, feasibility, propriety, and accuracy – when applied will result in the most worthwhile evaluation for your situation. Under each standard on the slide is a question that can be asked to help design the evaluation. By applying the standards, you have an evaluation that is useful, feasible, proper and accurate for the program evaluation that you are conducting at this time.

Now we’ll talk about each standard individually.

Return to Slide Text Table of Contents

Slide 18: Standard: Utility

Ensures that the information needs of intended users are met.

  • Who needs the evaluation findings?
  • What do the users of the evaluation need?
  • Will the evaluation provide relevant (useful) information in a timely manner?

Speaker Notes:

Utility, the first of the 4 standards, helps to ensure that the evaluation serves the information needs of its intended users. As you remember from earlier, evaluation does no good unless it is used to improve programs. Knowing who needs the information and how the information will be used helps us better tailor and focus the evaluation to address users’ needs. When we take the time to find out what is needed and provide people with what they need to do their work or improve their productivity, we ensure the effort that we put in and the product that we produce will be used, and not wasted.

So, the important questions to keep in mind while planning and conducting evaluations are:

  • Who needs the evaluation results?
  • What do they need?
  • Will the evaluation provide relevant, or otherwise useful, information in a timely manner for the users?

Often the best way to answer these questions is to ask your stakeholders and provide them with the opportunity to be part of the evaluation.

Return to Slide Text Table of Contents

Slide 19: Standard: Feasibility

Ensures that evaluation is realistic, prudent, diplomatic, and frugal.

  • Are the planned evaluation activities realistic given the time, resources, and expertise at hand?

Speaker Notes:

Feasibility is another standard to keep in mind while planning and conducting evaluation.

The purpose of this standard is to ensure that the evaluation conducted or to be conducted is realistic, prudent, diplomatic, and frugal.

Resources and manpower in public health programs are limited. Although many of us would love to do everything, there are just not enough resources to do that.
Thus, prioritizing the needs and planning for a feasible evaluation are very important.

When planning an evaluation, we need to ask ourselves, “Are the activities planned realistic given the time, resources, and expertise available?” It is very tempting to want to evaluate everything or a large number of things, particularly with a program we know well and are invested in. But applying this standard will help us to find focus and channel the resources to the parts or activities of the program that most need improvement.

Return to Slide Text Table of Contents

Slide 20: Standard: Propriety

Ensures the evaluation is conducted legally, ethically, and with due regard for the welfare of those involved and those affected.

  • Does the evaluation protect the rights of individuals and protect the welfare of those involved?
  • Does it engage those most directly affected by the program and by changes in the program, such as participants or the surrounding community?

Speaker Notes:

Propriety ensures that those of us involved in the evaluation, as evaluators or members of the evaluation team, behave legally, ethically, and with due regard for the welfare of those involved in and affected by the program and its evaluation. Like research, evaluation may involve the handling of personal, sensitive information. More specifically in TB programs, evaluation may involve the review of medical records or charts, and the handling of information that patients and providers may regard as highly sensitive.

Although many states have exempted evaluations conducted for the purpose of program improvement from extensive IRB review, it is still our responsibility to protect those we serve and to respect their rights and privacy.

A couple of questions help ensure our evaluation is conducted respectfully and in accordance with the propriety standard:

  • Does the evaluation or methods employed in conducting evaluation protect the rights of individuals and the welfare of those involved?
  • Does the evaluation engage those directly affected by the program, as well as those affected by potential changes in the program (i.e., program participants, patients, or the surrounding community)?
  • Have we included people in the evaluation process and allowed them a voice in decisions that may affect their health and well-being?

Return to Slide Text Table of Contents

Slide 21: Standard: Accuracy

Ensures that the evaluation reveals and conveys technically accurate information.

  • Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results?

Speaker Notes:

Lastly, accuracy is the 4th Program Evaluation standard. It ensures that the evaluation conducted reveals and conveys reliable and valid information.

Having accurate information is the foundation for good decision making and a step toward improving programs.

Thus in planning evaluation it’s important to identify ways or methods that would provide us with the most valid and reliable results. In one of the future sessions when we talk about design and data collection, ways to gather accurate information will be discussed.

Return to Slide Text Table of Contents

Slide 22: Engaging Stakeholders (Step 1)

Speaker Notes:

Now that we have a brief overview of the steps in the evaluation framework and a better understanding of the standards we use to guide our evaluation, let's explore the first step in depth. As I mentioned earlier, the later steps will be covered in subsequent presentations.

For the rest of this session, we will be discussing Step 1: Engage Stakeholders.
We will learn to identify key stakeholders that are essential to the program and its evaluation, and then figure out ways to include them in the evaluation.

A stakeholder is defined as “people and/or organizations that are invested in the program, are interested in the results of the evaluation, and/or have a stake in what will be done with the results of the evaluation.”

Return to Slide Text Table of Contents

Slide 23:

“There are five key variables that are absolutely critical in evaluation use. They are in order of importance: people, people, people, people, and people.” Halcolm

Speaker Notes:

Some of you may be wondering: why engage stakeholders?

Why does the framework recommend starting with this step?

According to Halcolm, “There are 5 key variables that are absolutely critical in evaluation use. They are in order of importance: people, people, people, people, and people.”
So why are people important in evaluation use? There are many reasons.

First, stakeholders are the people who will implement whatever changes or improvements the evaluation recommends. Engaging them early in the process increases their “buy-in” and thus the chance that recommendations made as a result of the evaluation will be used. Stakeholders provide you with a reality check; they keep the evaluation grounded. Stakeholders can also generate needed political pressure, support, or even resources for the evaluation.

In addition to these reasons, stakeholders:

  • Know the program, and can provide important insights
  • Guide evaluation and ensure that evaluation is useful
  • Increase credibility of the evaluation
  • Assist with dissemination of findings
  • Can advocate for the program
  • Can provide future funding

Slide 24: Identifying Stakeholders

  • Who are the stakeholders?
    • Persons involved in program operations
    • Persons served or affected by the program
    • Intended users of evaluation findings
  • What is their interest in the program?
    • Do they support the program?
    • Are they skeptical about or antagonistic toward the program?

Speaker Notes:

So who are the stakeholders?

Stakeholders are persons who are interested in or affected by the program. Generally we look at stakeholders as falling into three categories: those involved in program operations, those served or affected by the program, and the intended users of evaluation findings.

Once we know who stakeholders are, we need to identify their interest and perspective on the program. For example,

Do they support the program? Or are they skeptical about or antagonistic toward the program?

One way to identify stakeholders systematically is to create a list with the three categories, then under each category list the names of the persons who fall into it. Note that some stakeholders will fall into more than one category.

Once you’ve generated the list, then alongside each person’s name, list their interest in the program and/or the evaluation. Note that this is just the beginning of the exercise. It’s important to seek feedback on this list from others and from the stakeholders themselves. The stakeholders are the best people to tell you what their interests are.

Return to Slide Text Table of Contents

Slide 25: Identifying Stakeholders

  • Persons Involved in Program Operations
    • Staff and Partners
  • Persons affected or served by the program
    • Clients, their families and social networks, providers and community groups
  • Intended users of the evaluation findings
    • Policy makers, managers, administrators, advocates, funders, and others

Be sure to include supporters and skeptics!

Speaker Notes:

As we just discussed, there are 3 basic categories of stakeholders:

  • those involved in program operations
  • those served or affected by the program, and
  • the intended users of the evaluation

Staff, managers, and other partners who are directly involved in day-to-day activities of getting tasks done are those involved in program operations. This includes public health nurses and TB clinic clerks as well as members of the state program staff.

Those served or affected by the program include clients (or patients), their families and friends, as well as providers and community groups. For a TB program or initiative, this group may include TB and LTBI patients; the family and friends of patients; and managers or coordinators at schools, prisons, shelters, and other congregate settings. Representatives of populations disproportionately affected by TB are also part of this group.

The third category, the intended users of the evaluation information, may include policy makers, managers, administrators, advocates, and funders. This includes public health managers and administrators, CDC and advocacy groups.

The final sentence on the slide urges you to include both program supporters and skeptics. Both points of view are important, and having skeptics involved in the evaluation will increase the credibility of the findings. The only caveat is that it is important to consider your skeptics’ points of view and motivations. For example, if funding is in short supply, a skeptic may wish to have your program funds redirected, so you need to recognize this bias. At the same time, it’s not desirable to exclude critics entirely. They may actually improve the program and the evaluation through their skepticism.

Return to Slide Text Table of Contents

Slide 26: Which Stakeholders Matter Most?

Who is…

Affected by the program?
Involved in program operations?
Intended users of evaluation findings?

Who do we need to…

Enhance credibility?
Implement program changes?
Advocate for changes?
Fund, authorize, or expand the program?

Speaker Notes:

As the previous slides explain, there are many, many potential evaluation stakeholders. This slide illustrates how to go about identifying the key stakeholders for the evaluation.

The larger box at the top is the “universe of evaluation stakeholders.” This is everyone that you and your evaluation team have identified under the three categories of stakeholders. Using this list, you can then begin to narrow it down to the stakeholders who are key to the evaluation.

The smaller box below contains the key stakeholders, a subset of all stakeholders. Generally, they include those stakeholders who will enhance the credibility of the evaluation, implement program changes that result from the evaluation, advocate for recommended changes to the program, and, finally, those who fund or authorize the program or who could help expand it.

If time allows at the end of the presentation, I can provide a detailed example of how to identify stakeholders and consider their role in the evaluation.
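To make the narrowing step concrete, here is a minimal sketch of filtering the "universe of evaluation stakeholders" down to the key subset using the four criteria on the slide. This is an illustration only; all names and role labels below are hypothetical examples, not from the presentation.

```python
# Hypothetical universe of stakeholders, each tagged with the key roles
# (if any) they could play in the evaluation.
universe = [
    {"name": "TB clinic nurses",         "roles": {"implement_changes"}},
    {"name": "State health officer",     "roles": {"fund_or_authorize", "credibility"}},
    {"name": "Community advocacy group", "roles": {"advocate"}},
    {"name": "Neighborhood newsletter",  "roles": set()},  # interested, but not key
]

# The four criteria from the slide: credibility, implementing changes,
# advocating for changes, and funding/authorizing/expanding the program.
key_roles = {"credibility", "implement_changes", "advocate", "fund_or_authorize"}

# Key stakeholders are those who fill at least one of the four roles.
key_stakeholders = [s for s in universe if s["roles"] & key_roles]

for s in key_stakeholders:
    print(s["name"])
```

The same narrowing can of course be done on paper; the point is simply that every key stakeholder should satisfy at least one of the four criteria.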

Return to Slide Text Table of Contents

Slide 27: Engaging Stakeholders

Stakeholders should be involved in…

  • Describing program activities, context, and priorities
  • Defining problems
  • Selecting evaluation questions and methods
  • Serving as data sources
  • Defining what constitutes the “proof” of success
  • Interpreting findings
  • Disseminating information
  • Implementing results

Speaker Notes:

Knowing the importance of engaging stakeholders, the next step is determining how and when to engage them. This slide offers some suggestions, although it is not an exhaustive list. Usually, whoever is leading or coordinating the evaluation takes the lead in identifying and engaging the stakeholders.

It is crucial to involve stakeholders from the outset of the evaluation and to continue engaging them throughout the whole evaluation process.
This does not mean that all stakeholders need to be involved in all stages. A small subset might be, but many stakeholders might provide input in only one or two of the stages. The important thing to remember is that at least a subset of key stakeholders should be involved at some level in each step throughout the process.

The suggestions include:

Under Step 2: Describing the program activities, context, and priority;
Defining problems

Steps 3 and 4: Selecting evaluation questions and methods
Serving as data sources
Defining what constitutes the “proof” of success – indicators

Step 5: Interpreting findings

Step 6: Disseminating information
Implementing results

Stakeholder involvement is particularly important in setting values and priorities, and in helping you address the four standards of evaluation set forth in the Framework. When identifying and engaging stakeholders is properly carried out, we can be confident that the evaluation is being conducted appropriately.

Return to Slide Text Table of Contents

Slide 28: Any Questions?

Return to Slide Text Table of Contents

 

Slide 29: Your Turn... Identifying Stakeholders

  • Identify stakeholders for your program
    • Those involved in program operations
    • Persons served or affected by the program
    • Intended users of evaluation findings
  • Think about which ones you need most for…
    • Credibility
    • Implementation
    • Advocacy
    • Funding
  • List ways to keep them engaged

Speaker Notes: 

When it comes to your turn to identify stakeholders, one useful way is to put it all together in a table. I’ll describe one that’s been used in a TB evaluation plan.
It can be useful to divide the table into three rows of stakeholders: those involved in the program, those affected by the program, and those who will use the results of the evaluation. The first step is to identify the types of people in each row (by position or by name). For example… [Read table]

Then, for each stakeholder listed, create a column with their interest or perspective, one with their role in the evaluation, and one for how and when to engage each group of stakeholders. Think of it as a table with four columns and three rows.
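The three-row, four-column table just described can be sketched as a simple data structure. This is a hypothetical illustration only; the positions, interests, and roles below are made-up placeholders, not entries from the TB evaluation plan mentioned in the talk.

```python
# One row per stakeholder group; four columns per stakeholder:
# who, their interest/perspective, their role in the evaluation,
# and how/when to engage them. All entries are hypothetical examples.
stakeholder_table = {
    "Involved in program operations": [
        {"who": "TB program manager",
         "interest": "overall program performance",
         "role": "help focus the evaluation questions",
         "when": "throughout the evaluation"},
    ],
    "Served or affected by the program": [
        {"who": "TB and LTBI patients",
         "interest": "quality and accessibility of care",
         "role": "serve as data sources (e.g., interviews)",
         "when": "data collection and interpretation"},
    ],
    "Intended users of evaluation findings": [
        {"who": "state health department administrators",
         "interest": "resource allocation decisions",
         "role": "act on recommendations",
         "when": "planning and dissemination"},
    ],
}

# Print the table: one line per stakeholder, columns separated by "|".
for category, rows in stakeholder_table.items():
    print(category)
    for row in rows:
        print(f"  {row['who']} | {row['interest']} | {row['role']} | {row['when']}")
```

Whether kept in a spreadsheet or on paper, the value of the table is that every stakeholder ends up with an explicit interest, role, and engagement plan rather than just a name on a list.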

I’ll walk through a couple of examples of how you’d use the tables.

Return to Slide Text Table of Contents
