Monitoring & Evaluation
Monitoring and evaluation of FETPs are essential. To ensure that FETPs develop the needed capacities and become sustained by their host countries, a system for periodic monitoring and evaluation of outputs and outcomes is critical. The goal is an effective monitoring and evaluation system that ultimately strengthens public health systems. The evaluation workgroup, with input from Atlanta- and field-based staff, has developed programmatic indicators linked to the logic model. This information should help the programs themselves to document program activities, monitor and evaluate the program, implement program improvements, adjust the program to changing priorities, and ensure that the program is meeting its long-term priorities. In addition, a database has been developed to support program management and the tracking of programmatic indicators.
FETP Programmatic Indicators
- MOH has ownership of the FELTP (“program”).
- Plan for program sustainability exists.
- Accreditations received are documented and recognized.
- National or provincial level laboratory staff are partners in training and supporting investigations.
- Sufficient number of qualified applicants exists for a full training class.
- Competencies required by the program for trainees are explicit and achievement is measured.
- Supervisory support is assessed.
- Training program is progressing towards sustainability.
- Program graduates trainees.
- Investigations of acute health events by trainees are conducted.
- Planned studies are conducted by trainees.
- Surveillance system data are analyzed and used by trainees.
- Local/regional dissemination of trainee and program work occurs.
- Presentations to international scientific conferences by trainees occur.
- Publications in peer-reviewed journals by trainees or graduates occur.
- Strengthened public health workforce is indicated by graduates retained in national public health system.
- Surveillance system is improved/expanded by program/trainees.
- Evidence-based public health action for acute health events is improved/expanded by program/trainees.
- Evidence-based public health programs/projects are started and/or improved due to graduates/program/trainees.
- Evidence-based policies/regulations are created or improved due to program/trainees.
- National and/or regional public health professional network of graduates exists.
EpiTrack-G (generic) is an MS Access™ database application created to support FETP monitoring and evaluation activities. It includes the fields needed to collect data for the 21 program indicators (listed above). Sections are also included for tracking trainee projects, courses, and assignments. Report templates are included so that annual reports can easily be generated for each indicator. EpiTrack-G provides a starting point that can be customized for individual FETPs. Data entry screens and report templates can be edited as needed.
EpiTrack-G System Requirements
- Pentium 233 MHz or higher processor; Pentium III recommended
- Microsoft Windows 2000 Service Pack 3 or later, or Windows XP or later (recommended)
- 64 MB RAM (minimum); 128 MB RAM (recommended)
- 245 MB of hard-disk space, including 115 MB of available space on the hard disk that contains the operating system. Hard-disk space usage varies depending on the configuration. A local installation source requires approximately 2 GB of hard-disk space during the installation; the local installation source that remains on users' computers requires as much as 240 MB of hard-disk space beyond that required for Office.
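The tracking structure described above (indicators, with records rolled up into annual reports) can be sketched in a few lines of SQL. The sketch below uses SQLite rather than MS Access, and every table, column, and function name is a hypothetical illustration, not the actual EpiTrack-G schema.

```python
import sqlite3

# Minimal sketch of an indicator-tracking schema in the spirit of
# EpiTrack-G. SQLite stands in for MS Access; the schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE indicators (
    indicator_id INTEGER PRIMARY KEY,   -- 1..21, matching the list above
    description  TEXT NOT NULL
);
CREATE TABLE indicator_records (
    record_id    INTEGER PRIMARY KEY,
    indicator_id INTEGER NOT NULL REFERENCES indicators(indicator_id),
    year         INTEGER NOT NULL,
    evidence     TEXT NOT NULL          -- free-text or numeric evidence
);
""")

# Example data: indicator 12, "Program graduates trainees."
conn.execute("INSERT INTO indicators VALUES (12, 'Program graduates trainees.')")
conn.execute(
    "INSERT INTO indicator_records (indicator_id, year, evidence) "
    "VALUES (12, 2013, '8 graduates')"
)

def annual_report(indicator_id, year):
    """Collect the evidence recorded for one indicator in one year,
    analogous to the per-indicator report templates in EpiTrack-G."""
    rows = conn.execute(
        "SELECT evidence FROM indicator_records "
        "WHERE indicator_id = ? AND year = ?",
        (indicator_id, year),
    ).fetchall()
    return [evidence for (evidence,) in rows]
```

A call such as `annual_report(12, 2013)` then returns the evidence entered for that indicator and year, which is the kind of output a customized FETP database would feed into its annual report.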
Multisite Evaluation of Field Epidemiology Training Programs: Findings and Recommendations, May 2014
This is the first evaluation in more than a decade to examine implementation and short-term outcomes across multiple FETPs supported by CDC. It was conducted to improve FETPs. CDC designed and implemented this evaluation in partnership with the Training Programs in Epidemiology and Public Health Interventions Network (TEPHINET) and participating countries. The purposes of the evaluation were to (1) document selected components of program design and implementation across all participating sites; (2) determine progress toward the intended outcomes of the program; and (3) demonstrate accountability for use of resources and results.