Creating and Finalizing the Process Evaluation Component for the CORD project

One of the major undertakings in the Childhood Obesity Research Demonstration (CORD) project was the creation of a process evaluation plan and measures for the three community demonstration sites. Because CORD is not a multi-site randomized trial, each site's unique context and delivery of evidence-based interventions are very important. Therefore, the University of Houston's Evaluation Center team, along with the CDC CORD Evaluator, developed a method to document the processes carried out as part of interventions in each community. These methods needed to be consistent across sites and settings in order to assess how activities were implemented and whether they affected or contributed to improved outcomes.

Process evaluation tables were developed for each CORD intervention setting: clinic, early care and education, school, and community. Four overarching process measures were identified — Reach, Training, Education, and Policy, Systems, and Environment (PSE) change — with standard definitions across all settings. A process to capture components such as dose delivered, dose received, and fidelity within each setting was developed. The group also divided the process into two levels, depending on who is delivering the activity and who is its target.
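As a rough illustration only — the field names and values below are hypothetical, not the sites' actual data schema — the cross-site structure described above (setting, overarching measure, delivery level, and dose/fidelity components) might be sketched as a standardized record:

```python
from dataclasses import dataclass
from enum import Enum

class Setting(Enum):
    CLINIC = "clinic"
    EARLY_CARE = "early care and education"
    SCHOOL = "school"
    COMMUNITY = "community"

class Measure(Enum):
    REACH = "reach"
    TRAINING = "training"
    EDUCATION = "education"
    PSE = "policy, systems, and environment change"

class Level(Enum):
    DELIVERER = "who delivers the activity"
    RECIPIENT = "who the activity targets"

@dataclass
class ProcessRecord:
    site: str            # demonstration site identifier
    setting: Setting
    measure: Measure
    level: Level
    activity: str        # activity name, mapped to the standard definitions
    dose_delivered: int  # e.g., sessions offered
    dose_received: int   # e.g., sessions attended
    fidelity: float      # fraction of planned components delivered as intended

# Hypothetical record for one activity at one site
record = ProcessRecord(
    site="Site A",
    setting=Setting.SCHOOL,
    measure=Measure.EDUCATION,
    level=Level.RECIPIENT,
    activity="nutrition lessons",
    dose_delivered=12,
    dose_received=9,
    fidelity=0.85,
)

# With a shared schema, dose received can be compared to dose
# delivered the same way at every site
attendance = record.dose_received / record.dose_delivered
print(attendance)
```

Because each field draws on the same definitions across sites, records like this one could be aggregated or compared within and across demonstration sites.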

The Evaluation Center engaged the demonstration sites in discussions of the process evaluation tables, explaining the framework, its components, the definitions, and the respective data sources, and verifying that the desired data were correct and feasible to collect or whether additional information was needed.

The process was at times very challenging because each demonstration site defined terms differently and intervention activities differed broadly across the sites. The process team created standard definitions for each component, which were shared with the sites and used to create the tables. Consistently capturing and defining activities across sites was also a challenge.

Through it all, the Evaluation Center identified key constructs and categorized process definitions and metrics that are comparable across sites, which will be important for evaluating the CORD intervention components. Through process evaluation, the Evaluation Center will be able to compare planned intervention components with those actually delivered, as well as document the barriers and facilitators that affected delivery.

Several investigators from the demonstration sites remarked that the framework helped them think through and develop their own process measures and evaluation. All three sites have benefited from the development of the process evaluation measures and components. The framework will provide a standard, cross-site scale for all process measures and components, quantifying differences in process and intervention delivery within and across sites. A method is being developed by which demonstration sites can submit process data to the Evaluation Center.