Stage 1: Formative Evaluation

Stage 2: Process Evaluation

Stage 3: Impact Evaluation

Stage 4: Outcome Evaluation


4. Evaluation at a Glance

5. Which Stage of Evaluation Are You Ready For?






Ideally, evaluation is an ongoing process that begins as soon as the idea for an injury prevention program is conceived, interweaves with program activities throughout the life of the program, and ends after the program is finished. Sometimes evaluation continues for years after the program ends to see if program effects are sustained over time. By evaluating each step, programs can catch and solve problems early, which not only saves time and money but makes success more likely.

Evaluation has four stages that are begun in this order: formative, process, impact, and outcome. Planning for each stage begins while an injury prevention program is being developed, and no stage is truly complete until the program is over. Below is a brief description of each stage.

Formative Evaluation: Formative evaluation is a way of making sure program plans, procedures, activities, materials, and modifications will work as planned. Begin formative evaluation as soon as the idea for a program is conceived. Conduct further formative evaluation whenever an existing program is being adapted for use with a different target population or in a new location or setting. A program's success under one set of circumstances is not a guarantee of success under other circumstances. For evaluation purposes, an adapted program is a new program. Another occasion for formative evaluation is when an operating program develops problems but the reason is unclear or the solution not obvious. For details, see "Formative Evaluation," page 25.

Process Evaluation: The purpose of process evaluation is to learn whether the program is serving the target population as planned and whether the number of people being served is more or less than expected. Begin process evaluation as soon as the program goes into operation. At this stage, you are not looking for results. You are merely trying to learn whether you are connecting with people in your target population as planned and whether they are connecting with you. Essentially, process evaluation involves counting all contacts with the people you are trying to reach and all events related to those contacts. For details, see "Process Evaluation," page 27.

Impact Evaluation: The purpose of impact evaluation is to measure whatever changes the program creates in the target population's knowledge, attitudes, beliefs, or behaviors. Collect baseline information for impact evaluation immediately before, or just as, the program goes into operation. Gather information on changes brought about by the program as soon as program personnel have completed their first encounter with an individual or group from the target population. For example, the first encounter might be with a person who responded to a newspaper advertisement announcing the availability of low-cost smoke detectors for people with low incomes. Or your first encounter might be with a group of people in a class on how to install a child's car seat. Impact evaluation gives you intermediate results of the program (e.g., how much people's knowledge or attitudes about restraining children in car seats have changed). For details, see "Impact Evaluation," page 29.

Outcome Evaluation: For ongoing programs (e.g., a series of safety classes taught each year to all third graders in your area), conduct outcome evaluation at specified intervals (e.g., every year, every 3 years, or every 5 years). For one-time programs (e.g., distribution of a limited number of free smoke detectors to people with low incomes), conduct outcome evaluation after the program is finished. The purpose is to learn how well the program succeeded in achieving its ultimate goal (i.e., decreasing injury-related morbidity and mortality). Such decreases are difficult to measure, however, because the rates of morbidity and death due to unintentional injuries are low. Measuring changes in events that occur infrequently takes a long time (usually years) and requires a large number of study participants. However, we will show you a way to convert data on behavior change into estimates of changes in morbidity and mortality (page 64). For details on "Outcome Evaluation," see page 32.

A Word of Caution: We said above that the rates of unintentional injuries are low, which may give the impression that working to prevent them is not a good use of resources. Nothing could be further from the truth. Rates of unintentional injury may be low; nevertheless, unintentional injury is the leading cause of death for young people 1 through 34 years old and the third leading cause of death for people 35 through 54 years old.4 Working to prevent unintentional injuries is a vital public health function.

Evaluation at a Glance

Stage 1: Formative Evaluation (For details, see page 25)

When to use:

  • During the development of a new program.

  • When an existing program 1) is being modified, 2) has problems with no obvious solutions, or 3) is being used in a new setting, with a new population, or to target a new problem or behavior.

What it shows:

  • Whether proposed messages are likely to reach, to be understood by, and to be accepted by the people you are trying to serve (e.g., shows strengths and weaknesses of proposed written materials).

  • How people in the target population get information (e.g., which newspapers they read or radio stations they listen to).

  • Whom the target population respects as a spokesperson (e.g., a sports celebrity or the local preacher).

  • Details that program developers may have overlooked about materials, strategies, or mechanisms for distributing information (e.g., that the target population has difficulty reaching the location where training classes are held).

Why it is useful:

  • Allows programs to make revisions before the full effort begins.

  • Maximizes the likelihood that the program will succeed.

Stage 2: Process Evaluation (For details, see page 27)

When to use:

  • As soon as the program begins operation.

What it shows:

  • How well a program is working (e.g., how many people are participating in the program and how many people are not).

Why it is useful:

  • Identifies early any problems that occur in reaching the target population.

  • Allows programs to evaluate how well their plans, procedures, activities, and materials are working and to make adjustments before logistical or administrative weaknesses become entrenched.

Stage 3: Impact Evaluation (For details, see page 29)

When to use:

  • After the program has made contact with at least one person or one group of people in the target population.

What it shows:

  • The degree to which a program is meeting its intermediate goals (e.g., how awareness about the value of bicycle helmets has changed among program participants).

  • Changes in the target population's knowledge, attitudes, and beliefs.

Why it is useful:

  • Allows management to modify materials or move resources from a nonproductive to a productive area of the program.

  • Tells programs whether they are moving toward achieving their intermediate goals.

Stage 4: Outcome Evaluation (For details, see page 32)

When to use:

  • For ongoing programs (e.g., safety classes offered each year): at appropriate intervals (see When To Conduct, page 32).

  • For one-time programs (e.g., a 6-month program to distribute car seats): when program is complete.

What it shows:

  • The degree to which the program has met its ultimate goals (e.g., how much a smoke detector program has reduced injury and death due to house fires).

Why it is useful:

  • Allows programs to learn from their successes and failures and to incorporate what they have learned into their next project.

  • Provides evidence of success for use in future requests for funding.

Figure 4.


Which Stage of Evaluation Are You Ready For?

To find out which stage of evaluation your program is ready for, answer the questions below. Then follow the directions provided after the answer.

Q. Does your program meet any of the following criteria?

  • It is just being planned and you want to determine how best to operate.

  • It has some problems you do not know how to solve.

  • It has just been modified and you want to know whether the modifications work.

  • It has just been adapted for a new setting, population, problem, or behavior.

Yes to any of the four criteria. Begin formative evaluation. Go to page 25.
No to all criteria. Read the next question.

Q. Your program is now in operation. Do you have information on who is being served, who is not being served, and how much service you are providing?

Yes. Read next question.
No. Begin process evaluation. Go to page 27. You may also be ready for impact evaluation. Read next question.

Q. Your program has completed at least one encounter with one member or one group in the target population (e.g., completed one training class). Have you measured the results of that encounter?

Yes. Read next question.
No. You are ready for impact evaluation. Go to page 29. If you believe you have had enough encounters to allow you to measure your success in meeting your overall program goals, read the next question.

Figure 5.



Formative Evaluation

Description: Formative evaluation is the process of testing program plans, messages, materials, strategies, or modifications for weaknesses and strengths before they are put into effect. Formative evaluation is also used when an unanticipated problem occurs after the program is in effect.

Purpose: Formative evaluation ensures that program materials, strategies, and activities are of the highest possible quality (quality assurance). During the developmental stage of a program, the purpose is to ensure that the program aspect being evaluated (e.g., a home visit to check smoke detectors) is feasible, appropriate, meaningful, and acceptable for the injury prevention program and the target population. In the case of an unanticipated problem after the program is in effect, the purpose is to find the reason for the problem and then the solution.

When To Conduct: Conduct formative evaluation when a new program is being developed and when an existing program is 1) being modified; 2) having problems with no obvious solutions; or 3) being adapted for a new setting, population, problem, or behavior.

Target Population: Whom you ask to participate in formative evaluation depends on the evaluation's purpose. For example, if you are pilot testing materials for a new program, select people or households at random from the target population. If you want to know the level of consumer satisfaction with your program, select evaluation participants from people or households who have already been served by your program. If you want to know why fewer people than expected are taking advantage of your program, select evaluation participants from among people or households in the target population who did not respond to your messages.

Type of Information Produced by Formative Evaluation While a Program Is Being Developed: Whether the program being developed is surveillance or intervention, new or adapted, the formative evaluator's first concern is to answer questions similar to these:

  • Introduction: When is the best time to introduce the program or modification to the target population?

  • Plans and Strategies: Are the proposed plans and strategies likely to succeed?

  • Methods for Implementing Program: Are the proposed methods for implementing program plans, strategies, and evaluation feasible, appropriate, and likely to be effective; or are they unrealistic, poorly timed, or culturally insensitive?

  • Program Activities: Are the proposed activities suitable for the target population? That is, are they meaningful, barrier-free, culturally sensitive, and related to the desired outcome? For example, is the literacy level appropriate? Would a bicycle rodeo appeal to teenagers or would they see it as childish? Is a lottery for child safety seats acceptable or will some members of the population see it as gambling?

  • Logistics: How much publicity and staff training are needed? Are sufficient resources (human and fiscal) available? Are scheduling and location acceptable? For example, would scheduling program hours during the normal workday make it difficult for some people in the target population to use the program?

  • Acceptance by Program Personnel: Is the program consistent with staff's values? Are all staff members comfortable with the roles they have been assigned? For example, are they willing to distribute smoke detectors door-to-door or to participate in weekend activities in order to reach working people?

  • Barriers to Success: Are there beliefs among the target population that work against the program? For example, do some people believe that children are safer if they are held by an adult than if they are restrained in a car seat?

Finding Solutions to Unanticipated Problems After a Program Is in Operation: If a program is already in operation but having unanticipated problems, evaluators can conduct formative evaluation to find the cause. They look at the same aspects of the program as they do during the developmental stage to see 1) the source of the problem and 2) how to overcome it.

Methods: Because formative evaluators are looking for problems and obstacles, they need a format that allows evaluation participants free rein to mention whatever they believe is important. In such a case, qualitative methods (personal interviews with open-ended questions [page 38], focus groups [page 39], and participant-observation [page 39]) are best. A closed-ended quantitative method would gather information only about the topics identified in advance by program staff or the evaluator.

Occasionally, however, quantitative surveys (page 44) may be appropriate. They are useful when the purpose of evaluation is to find the level of consumer or staff satisfaction with particular aspects of the program.

How To Use Results: Well-designed formative evaluation shows which aspects of your program are likely to succeed and which need improvement. It should also show how problem areas can be improved. Modify the program's plans, materials, strategies, and activities to reflect the information gathered during formative evaluation.

Ongoing Process: Formative evaluation is a dynamic process. Even after the injury prevention program has begun, formative evaluation should continue. The evaluator must create mechanisms (e.g., customer satisfaction forms to be completed by program participants) that continually provide feedback to program management from participants, staff, supervisors, and anyone else involved in the program.


Process Evaluation

Description: Process evaluation is the mechanism for testing whether the program's procedures for reaching the target population are working as planned.

Purpose: To count the number of people or households the program is serving, to determine whether the program is reaching the people or households it planned to serve, and to determine how many people or households in the target population the program is not reaching.

When To Conduct: Process evaluation should begin as soon as the program is put into action and continue throughout the life of the program. Therefore, you need to design the forms for process evaluation while the program is under development (see examples in Appendix B).

Important Factor To Consider Before Beginning Process Evaluation: The evaluator and program staff must decide whether the program should be evaluated on the basis of the number of people contacted or the number of contacts with people. The distinction is important.

For the number of people contacted, count each person in the target population who had contact with your program only once, regardless of how many times that person had contact.

For the number of contacts with people, count every contact the program had with a member of the target population, even when the same person had contact more than once.

Obviously the number of contacts with people should be the same as, or higher than, the number of people contacted.

This distinction is especially meaningful when a person may receive independent value or additional benefit from each contact with the program.
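The two counting rules above can be sketched in a few lines of Python (a minimal illustration; the encounter log and person IDs are hypothetical, not drawn from any standard form):

```python
# Hypothetical encounter log: one entry per contact, labeled by person ID.
encounters = ["P01", "P02", "P01", "P03", "P02", "P01"]

# Number of contacts with people: every contact counts.
contacts_with_people = len(encounters)        # 6

# Number of people contacted: each person counts only once.
people_contacted = len(set(encounters))       # 3

print(contacts_with_people, people_contacted)  # prints: 6 3
```

As the text notes, the number of contacts with people can never be smaller than the number of people contacted.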

Target Population: For process evaluation, the target population is the people or households that you actually reached, whereas your program's target population is the people or households you want to reach.

Methods: Keep track of all contacts with the people or households who are served by the program. If appropriate, keep track of all program-related items distributed to, or received from, the target population.

  • Direct Contacts: One method of keeping track of direct contacts is to use simple encounter forms (see Appendix B for an example), which can be designed to collect basic information 1) about each person or household that has direct contact with the program and 2) about the nature of the contact. Using these forms, you can easily count the number of people or households served by your program, the number of items distributed during a product distribution program, or the number of items returned to a product-loan program. 

    The forms must be designed while the program is being
    developed and ready for use as soon as the program begins.

  • Indirect Contacts: Not all contact with a program is direct. A program's target population may be reached directly, indirectly, or both. For example, many school-based programs provide information to schoolchildren (direct) who, in turn, take the information home to parents (indirect). Other programs train members of the target population as counselors (direct) to work with their peers in the school community (indirect). Such methods have been used by programs to promote the use of bicycle helmets. Often, a program's stated purpose is to reach community members through indirect methods. Often also, programs have an indirect effect that was not planned.

To estimate the number of people the program reaches indirectly, you could ask the people with whom the program has direct contact to keep track of their contacts (the people to whom they give the program's information or service). For this purpose, they could use a system similar to the program's system of keeping track of its direct contacts.

Sometimes, however, asking people with whom the program has direct contact to keep track of their contacts is impractical, unreliable, or both. In such a case, devise a reliable method for estimating the number of indirect contacts. For example, you could estimate that half the third graders who attended a safety training program would speak to their parents about the information given to them.
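The third-grader example amounts to multiplying direct contacts by an assumed pass-along rate. A one-line sketch (the class size is hypothetical, and the 50% rate is the assumption stated in the text, not a measured value):

```python
# Direct contacts: third graders who attended the safety training (hypothetical).
third_graders_trained = 120

# Assumption from the example in the text: half will speak to their parents.
pass_along_rate = 0.5

estimated_indirect_contacts = third_graders_trained * pass_along_rate
print(estimated_indirect_contacts)  # prints: 60.0
```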

  • Items Distributed or Collected: Example forms for use in tracking items collected from the target population or given away during a safety product distribution campaign are in Appendix B.

How To Use Results: Use the results of process evaluation to show funding agencies the program's level of activity (i.e., the number of people or households who have received the program's service).

If process evaluation shows some unexpected problems, especially if it shows you are not reaching as many people in the target population as you expected to, do some more formative evaluation. That could include, for example, personal interviews with a random selection of people in your target population who had not participated in your program.

In addition, much of the information gathered during process evaluation can be used for impact and outcome evaluation when you will be calculating the effect your program has had on the target population.


Impact Evaluation

Description: Impact evaluation is the process of assessing the program's progress toward its goals (i.e., measuring the immediate changes brought about by the program in the target population).

Purpose: To learn about the target population's changes in knowledge, attitudes, and beliefs that may lead to changes in injury-prevention behavior. For example, evaluators might want to know whether people are more likely to buy a bicycle helmet or smoke detector than they were before the program began. Or they might want to know whether people understand better the risks associated with not wearing seatbelts while driving.

At this stage, evaluators are not necessarily measuring changes in behavior (e.g., increases in the number of people using a bicycle helmet or smoke detector). Although information about behavior could be used to measure impact, it is a better measure of program outcome, which is the final stage of evaluation (see page 32). To qualify for funding, programs need to incorporate evaluation, at least through the impact stage, into their program design.

When To Conduct: Take baseline measurements of the target population's knowledge, attitudes, and beliefs immediately before the first encounter with the target population (e.g., before the first training class or before any products are distributed). Begin measuring changes in knowledge, attitudes, and beliefs immediately after the first encounter (see Methods, beginning at the bottom of this page).

Target Population: For impact evaluation, the target population consists of people or households that received the program service.

Design: Well-designed impact evaluation has two aspects:

  • It measures the baseline knowledge, attitudes, and beliefs of the target population and demonstrates how these change as a result of the program.

  • It eliminates the possibility that any demonstrated change could be attributed to some factor outside the program. See page 51 for program designs (experimental and quasi-experimental) that control for the effect of outside influences on your program's results.

Outside Influences To Be Eliminated as Explanations for Change: The two main influences that must be eliminated as explanations for change in the participants' knowledge, attitudes, beliefs, or behaviors are history and maturation. See page 54 for a full discussion.

Methods: Measure the target populationís knowledge, attitudes, beliefs, and behaviors before any individual or group receives the program service (baseline measurement) and again after the first person or group receives the service. Compare the two measurements to find out what changes occurred as a result of your program. Be careful not to conclude that your program brought about all change shown by these comparisons.

Knowledge, attitudes, and beliefs are almost always measured by a survey instrument, such as a questionnaire, containing closed-ended items (e.g., multiple-choice questions). For example, you could ask each person attending a training class to complete a questionnaire before and after the class to find out how much their knowledge, attitudes, and beliefs changed as a result of the training program.
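The before-and-after comparison can be sketched as follows (scores and class size are hypothetical; as cautioned above, a real evaluation would also rule out outside influences before crediting the program with the change):

```python
# Hypothetical percent-correct questionnaire scores for one training class,
# collected from the same participants before and after the class.
pre_scores = [40, 55, 50, 60, 45]
post_scores = [70, 80, 65, 85, 75]

mean_pre = sum(pre_scores) / len(pre_scores)     # 50.0
mean_post = sum(post_scores) / len(post_scores)  # 75.0

print(f"Mean change: {mean_post - mean_pre:+.0f} percentage points")  # +25
```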

Tip: Since you have a captive audience in a training class, you could also test their satisfaction with the class materials and the way the class was conducted (formative evaluation).

For information on conducting a survey, see page 44. For examples of closed-ended items for survey instruments, see page 104.

Occasionally, however, knowledge, attitudes, and beliefs are assessed by direct observation. For example, an observer might check to see that seatbelts are positioned correctly or smoke detectors installed correctly. Evaluators might also observe group discussions to watch and listen for signs of participants' attitudes or beliefs (see "Participant-Observation," page 39). Observation is often more costly, less efficient, and less feasible than administering a survey instrument. For suggestions about events to observe during an evaluation, see page 89.

How To Use Results: If the results are positive, you can use them to justify continuing your program. If the results are negative, you can justify revising or discontinuing the program. Obviously, if impact evaluation shows that the program is ineffective, outcome evaluation is not necessary. Programs with positive results are likely to receive further funding; programs with negative results will have a more difficult time getting funds. However, if an evaluator can show why the program was ineffective and how it could be modified to make it effective, the program may be able to justify receiving further funds.



Outcome Evaluation

Description: Outcome evaluation is the process of measuring whether your program met its ultimate goal of reducing morbidity and mortality due to injury.

Decisions about whether to continue funding a program often depend on the results shown by outcome evaluation. And, as you know by now, good quality outcome evaluation depends on a good design for the injury prevention program itself.

Purpose: The only difference in purpose between impact evaluation and outcome evaluation is the program effect that is measured. Impact evaluation measures changes in knowledge, attitudes, beliefs, and (possibly) preventive behaviors. Outcome evaluation measures changes in preventive behaviors and in injury-related morbidity and death.

When To Conduct: For ongoing programs (e.g., a series of fire-safety classes given each year in elementary schools), conduct outcome evaluation as soon as enough people or households have participated in the program to make the results meaningful. Depending on the number of children in fire-safety classes, you could conduct outcome evaluation, for example, every year, every three years, or every five years.

For one-time programs (e.g., a six-month program to distribute free smoke detectors to low-income households), begin outcome evaluation as soon as the program is complete. Consider also conducting outcome evaluation, for example, a year or three years after the program is complete to find out how well the program's effects are sustained over time.

Preparation for outcome evaluation, however, begins when the program is being designed. The design of the program affects the quality of the data you will have for outcome evaluation. Furthermore, baseline data must be collected immediately before participants have their first encounters with the program.

Target Population: For outcome evaluation, the target population is all the people or households that received the program service.

Methods: The methods used for measuring changes in behavior are essentially the same relatively easy methods as those used to measure changes in knowledge, attitudes, and beliefs during impact evaluation. In general, however, measuring changes in morbidity and mortality is not so easy. For example, you can measure the change in helmet-wearing behavior of children who participated in a safety training class soon after the class is over (see Methods, page 30, in the section on impact evaluation). Measuring the reduction in morbidity and mortality as a result of those same children's change in behavior is much more difficult.

A major cause of this difficulty is that the number of people who will die or suffer serious morbidity as a result of most unintentional injuries is small. In contrast, everyone has a certain attitude and behaves in a certain way with regard to the injury-preventive devices (e.g., bicycle helmets or smoke detectors) that your program is encouraging people to use. Therefore, documenting changes in morbidity and mortality that are directly the result of a program to reduce most unintentional injuries requires a vastly larger study population than does documenting changes in attitudes, beliefs, and behaviors.

In addition to a large study population, documenting changes in morbidity and mortality requires a long-term study, which is both expensive and time consuming.

So what to do? Convert data on behavior change into estimates of changes in morbidity and mortality.

When a long-term study is not feasible, you can convert the more readily accessible information on changes in behavior into estimates of changes in morbidity or mortality.

However, to do so, you must have three items of information:

  • Data showing the effectiveness of the behavior in reducing morbidity or mortality (see page 119 for resources with data on various types of unintentional injury).

  • Data showing the prevalence of the behavior before the program began (data on the pre-program behavior of the target population).

  • Data showing the prevalence of the behavior after the program is complete (data on the post-program behavior of the target population).

Using these three sets of data, you can perform a simple series of calculations, which will estimate the number of lives saved by the program. See page 64 for full details on how to do the calculation.
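As a sketch of the kind of calculation involved (this is an illustrative formula, not the method detailed on page 64; all numbers are hypothetical, and it assumes the effectiveness figure expresses the fraction of deaths the behavior prevents):

```python
# Illustrative estimate of deaths averted by a behavior-change program.
# All inputs are hypothetical: effectiveness data would come from published
# studies, and prevalence data from the program's own pre/post surveys.
population = 50_000            # people covered by the program
baseline_death_rate = 0.0004   # annual fire-death rate without a smoke detector
effectiveness = 0.5            # fraction of those deaths a detector prevents
prevalence_before = 0.30       # share of households with a detector, pre-program
prevalence_after = 0.55        # share of households with a detector, post-program

added_coverage = prevalence_after - prevalence_before
deaths_averted = population * baseline_death_rate * effectiveness * added_coverage
print(round(deaths_averted, 1))  # prints: 2.5 (estimated deaths averted per year)
```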

Reiterating the Main Point: To calculate morbidity and mortality on the basis of behavioral data, you will need information that links the behavior in question to an already-calculated risk for morbidity and mortality. Fortunately, many such data are available.

You can also convert data on behavior change into estimates of financial savings (see page 66 for details).

How To Use Results: You can use positive results of outcome evaluation as even stronger evidence than the results of impact evaluation to justify continued funding for your program. If the results (positive or negative) are likely to be of value to researchers or other programs to prevent unintentional injury, you may also be able to publish them in scientific journals.

Possible Future Study: For a behavior for which data are not available on the relationship between that behavior and risk for death or injury, you might consider doing a study to produce this information. You could justify the cost by stressing the importance of quantifying relationships between a certain behavior and risk for morbidity and mortality. The data produced by your study could then be published and used in outcome evaluation by other injury prevention programs.



This page last reviewed April 1, 2005.


Centers for Disease Control and Prevention
National Center for Injury Prevention and Control