Enhanced Evaluation Framework

What is the Enhanced Evaluation Framework?

In 1999, the Centers for Disease Control and Prevention (CDC) published its Framework for Program Evaluation in Public Health.1 In 2020, CDC’s National Asthma Control Program (NACP) created an enhanced version of the framework. This enhanced version of CDC’s Evaluation Framework adds the how of evaluation practice to the Framework’s steps and standards. As evaluators, our ability to be aware of ourselves, others, and the broader environments in which we work—the how of our practice—is just as important as the technical steps we complete in an evaluation. Similarly, producing high-quality evaluative judgments requires adequate evaluation capacity. The enhanced framework is described in detail in the e-text Planting the Seeds for High-Quality Program Evaluation in Public Health [PDF – 13 MB].2 The American Evaluation Association’s (AEA) Evaluator Competencies3 also reference these skills and characteristics. This brief serves as an introduction to the Framework enhancements.

The Enhanced Evaluation Framework emphasizes the role of critical reflection, interpersonal competence, cultural responsiveness, and situational awareness in evaluation practice. The enhanced framework also acknowledges the need for ongoing evaluation capacity building to conduct high-quality evaluations.

Critical Reflection

Who we are as people is central to who we are as professionals and how we engage in our professional practice. As much as we may see our personal and professional selves as separate entities, the distinction is likely minimal, if it exists at all. Our beliefs, values, and assumptions inform our worldview, which in turn shapes how we make sense of and participate in the world around us, whether at work or outside of work.4,5 However, worldviews can also be problematic in that they, often unbeknownst to us, prescribe what we do and do not pay attention to. Worldviews also influence how we perceive and judge information. This can present both quality and ethical limitations in our evaluations.5,6 Worldviews are susceptible to dominant ways of thinking that uphold existing power relationships and inequities; they can reinforce what currently exists rather than support change.

Critical reflection enables us to unearth and scrutinize the values, beliefs, and assumptions underlying our worldviews so we can understand, challenge, and modify them.5,6 Approaches to critical reflection can, and should, take many forms using multiple lenses.7 Some of us are more comfortable with solitary pursuits like structured journaling or creating rich pictures, while others appreciate help from one or more critical friends who can bring additional perspectives to our reflections. The e-text provides reflection questions relevant to the evaluation context, such as:

  • What and whose values underlie ideas about successful outcomes in this program?
  • To what extent will program outcomes reinforce existing inequities? How does that relate to my intentions as a professional and as a person?

Interpersonal Competence

Recognizing that relationships are at the heart of our work, the AEA Evaluator Competencies include an entire domain dedicated to interpersonal competence. This domain highlights the importance of establishing trust, understanding how power and privilege operate in context and affect an evaluation, and being able to address conflicts that may arise.8 Many evaluators are well trained in methodology but have limited training or practice in important tasks like facilitating difficult conversations, helping partners articulate their views and understand other points of view, and guiding shared problem-solving and consensus building.

Most evaluation frameworks emphasize the value of engaging people with varying viewpoints on a program, such as program participants, staff members, funders, and even program critics, in both the planning and implementation phases of evaluations. When we bring together this diversity of perspectives, we often encounter some level of conflict as differing worldviews, concerns, and priorities interact. If we ignore or poorly address conflict, we may limit participation, consume time and other resources, and even derail the evaluation process entirely. When we address conflict directly and skillfully, we can experience its generative power, producing an evaluation and findings that truly serve the program’s intended beneficiaries. Some tools to capture the generative potential of conflict include the following:

  • Mediation
  • Negotiation
  • Nonviolent communication9
  • Transformative justice10

Situational Awareness

Evaluations are conducted in a complex world, and even the best-laid evaluation plans do not always come to fruition. Given that reality, we need to be acutely aware of the situations in which we implement our evaluations. We should be prepared to adapt to the ever-evolving context in which, for example, staff members come and go, program priorities shift, budgets change, and new questions surface that need immediate attention. Ideally, we plan for an ever-changing context by building flexibility into work plans and budgets in the design phase of an evaluation. Doing so leads to stronger evaluations, can save time and money, and can prevent potential problems.

The AEA Evaluator Competencies give examples of contextual factors that can influence a project, such as politics/economics, site/location/environment, participants/stakeholders, organization/structure, culture/diversity, history/traditions, values/beliefs, and power/privilege.3 The evaluation’s purpose, budget, and work plans, along with the timing of programmatic events, are also important contextual considerations. Note that the enhanced version of the evaluation framework adds a Step 0 to account for the need to assess context and promote situational awareness.

The way we shape our awareness of a situation is grounded in our ability to critically reflect on our position and culture. It is also rooted in relationships. As such, situational awareness sits at the intersection of critical reflection, interpersonal competence, and cultural responsiveness.

Evaluation Capacity

Program evaluation is evolving from a field in which (often) external experts determine the effectiveness or value of a program to one in which evaluators (internal or external) act as co-learners to develop value in a program.14 To produce and use evaluation insights—to learn—requires that organizations have both individual supports and organizational infrastructure.

At the organizational or infrastructure level, evaluation capacity may include leadership acting as champions of evaluation or ensuring that staff position descriptions reference evaluation activities (whether contributing to evaluations or using evaluation findings). Evaluation capacity can also refer to more mundane factors like having appropriate data analysis software or providing for electronic storage of evaluation materials that staff members can access easily.

At the individual level, evaluation capacity comes in the form of skills and attitudes. While few individuals possess all the skills included in the evaluator competencies, the list illustrates the types of skills that individuals or teams should possess to carry out high-quality evaluations. Relevant attitudes toward evaluation include the belief that evaluation is a worthwhile investment and that incorporating a diversity of perspectives in a study is essential.

We can choose among many strategies to build evaluation capacity in our organizations.15 Prior to selecting specific strategies, though, it is important to consider what types of capacities are needed, for which people or groups, and why. Evaluation capacity building strategies at the individual level might include providing trainings and workshops or hosting communities of practice. We also can learn by doing. Participating in or sponsoring evaluations can foster learning, particularly when we pause to reflect on the process.

References
  1. Centers for Disease Control and Prevention [CDC]. (1999). Framework for program evaluation in public health. Morbidity and Mortality Weekly Report, 48(RR-11).
  2. Wilce, M., Fierro, L.A., Gill, S., Perkins, A., Kuwahara, R., Barrera-Disler, S., Orians, C., Codd, H., Castleman, A.M., Nurmagambetov, T., Anand, M. Planting the Seeds for High-Quality Program Evaluation in Public Health. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Environmental Health, Division of Environmental Health Science and Practice, Asthma and Community Health Branch. June 2021.
  3. American Evaluation Association [AEA]. (n.d.). The 2018 AEA evaluator competencies. https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies
  4. Mertens, D. M., & Wilson, A. T. (2019). Program evaluation theory and practice: A comprehensive guide (2nd ed.). Guilford Press.
  5. Mezirow, J. (1990). Fostering critical reflection in adulthood: A guide to transformative and emancipatory learning (1st ed.). Jossey-Bass Publishers.
  6. Mezirow, J. (2009). An overview of transformative learning. In K. Illeris (Ed.), Contemporary theories of learning (pp. 91–105). Routledge.
  7. Archibald, T., Neubauer, L.C., & Brookfield, S.D. (2018). The critically reflective evaluator: Adult education’s contributions to evaluation for social justice. New Directions for Evaluation, 158, 109–123.
  8. King, J.A., & Stevahn, L. (2020). Presenting the 2018 AEA evaluator competencies. In J.A. King (Ed.), The American Evaluation Association’s Program Evaluator Competencies. New Directions for Evaluation, 168, 49–61.
  9. Rosenberg, M. (2003). Nonviolent Communication: A Language of Life. PuddleDancer Press.
  10. brown, a. m. (2020). We Will Not Cancel Us: And Other Dreams of Transformative Justice. AK Press.
  11. Hood, S., Hopson, R. K., & Kirkhart, K. E. (2015). Culturally responsive evaluation. In K. E. Newcomer, H. P. Hatry, & J. S. Wholey (Eds.), Handbook of practical program evaluation (4th ed., pp. 281–317). John Wiley & Sons, Ltd.
  12. American Evaluation Association [AEA]. (2011). Statement on cultural competence in evaluation. https://www.eval.org/About/Competencies-Standards/Cutural-Competence-Statement
  13. U.S. Department of Health and Human Services [DHHS]. (2014). Practical strategies for culturally competent evaluation. Centers for Disease Control and Prevention. https://www.cdc.gov/dhdsp/docs/Cultural_Competence_Guide.pdf
  14. Schwandt, T.A., & Gates, E.F. (2021). Evaluating and Valuing in Social Research. Guilford Press.
  15. Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443–459.

Disclaimer: The information presented in this document is that of the authors and does not represent the official stance of the Centers for Disease Control and Prevention.
