By: Jen McCutcheon, DrPH, MPH, MSc

This is the fifteenth chapter of the CHW Reference Guide, produced under the Maternal and Child Health Integrated Program (MCHIP), the United States Agency for International Development (USAID) Bureau for Global Health’s flagship maternal, neonatal, and child health project.

In this chapter, the authors introduce the concept of program measurement as having three main purposes.  The first is to use data to help define or identify a problem needing attention or intervention. For example, data from health facility registers may indicate that utilization of a specific service is low (e.g., immunization or delivery with a skilled birth attendant).  Based on these and other available data that help refine the nature of the problem, program implementers may design an intervention to address it.  The second use of measurement is program evaluation, aimed at determining the effectiveness of a program.  These studies may be done by the program implementers or by an external group.  Evaluations generally have some degree of separation from the programmatic work, and are used more to inform the public health community about what does and does not work than to improve the functioning of the existing program. While donors such as USAID and others are placing much greater emphasis on evaluations[1], the field still faces significant challenges, including: a lack of baseline data against which to compare endline evaluation data; project life spans that are too short to measure significant changes in impact; and numerous extraneous factors that either positively or negatively affect the success of the project.

Finally, the authors describe the most common use of data in CHW (and other public health) programming: routine monitoring.  Monitoring data generally come from existing sources, such as health records, which are aggregated up the system through the government’s health management information system (HMIS); program or training records; or supportive supervision visits. The most effective monitoring systems are those in which the data are routinely shared through simple graphs and reports and discussed regularly by program staff and managers. The authors reference “structured experiential learning”[2] to describe this rigorous, real-time tracking of important aspects of the program, with tight feedback loops and continuous use of data to improve programming.

Existing data sources such as client registers or training records provide valuable information about the number of services delivered or utilized by targeted populations. In countries where these registers exist, starting an M&E system by routinely tracking and using these data can be a useful and achievable first step.  However, these data sources cannot answer important questions about the quality of the services provided, or about changes in the knowledge, attitudes, and practices (KAP) of either the target populations (e.g., did they change their behaviors based on the messages given by CHWs?) or the providers (e.g., what influences CHW morale or motivation?  Has CHW practice changed based on a training that was given?).  Answers to these questions help demonstrate the quality and impact of the project and are therefore important components of a CHW M&E system, yet they are also more time consuming, costly, and often methodologically difficult to obtain.  Care needs to be taken to prioritize the questions and methods that will enable program staff to answer key questions within a realistic timeframe, scope, and budget.

Hodgins et al. approach M&E system development with suggested steps for responding to three key questions/scenarios:

  1. Are the important elements of community health services being measured? If not, what is the minimum information that the manager needs to have to know what is actually happening? What is the simplest way that this information can be collected?
  2. Are the data that are being collected being used to identify and address performance issues? If not, focus on creating a culture of data use: build the capacity of staff, health workers, and supervisors to conduct simple analyses, and discuss data findings in regular staff or facility meetings.
  3. Are there multiple or confusing data collection forms/registers that result in poor quality data (i.e., incomplete, inaccurate)? If so, then bring key technical staff together to streamline and simplify data collection forms, focusing only on key data that are most important for regular use.

The chapter ends with case studies that provide examples of data collection, monitoring, and evaluation in CHW programs in Ethiopia, India, and Pakistan.
 

Additional Considerations

M&E Budget Allocation

Unless an evaluation is conducted by a separate group (as some donors are doing more frequently), the costs of program monitoring and evaluation come from the same “pot” of money as the program costs.  A balance must therefore be struck: allocating the maximum amount of funds to program activities while retaining enough to monitor the program well enough to identify and address issues as they arise.  A common recommendation is to allocate between 3 and 10 percent of the program budget to monitoring and evaluation activities.[3]
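As a quick worked example, the 3 to 10 percent guideline translates into a funding range as sketched below; the program budget figure is hypothetical and used only for illustration.

```python
# Illustrative only: applying the 3-10% M&E budget guideline [3] to a hypothetical budget.
program_budget = 500_000  # total program budget in USD (hypothetical figure)

me_low = program_budget * 0.03   # lower end of the recommended M&E allocation
me_high = program_budget * 0.10  # upper end of the recommended M&E allocation

print(f"Recommended M&E allocation: ${me_low:,.0f} to ${me_high:,.0f}")
# Recommended M&E allocation: $15,000 to $50,000
```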

Demystifying the Analysis of Monitoring Data

The words “data analysis” can scare some program implementers, as visions of logistic regression classes flash through their minds.  However, analysis of program monitoring data should generally be kept simple.  Most basic monitoring data used for assessing the progress of a program fall into one of three types of analysis: against a target, across groups, and over time.  A few examples (illustrated in the sketch that follows this list) include:

  • data compared against a predetermined target, such as the number of CHWs trained against a goal of 500 this year;
  • data compared across groups or geographic areas within the project (e.g., across districts); or
  • data compared over time (trend analysis over each month of the project, or comparing this June with last June).
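The sketch below runs these three comparisons in Python on hypothetical monitoring data; the district names, counts, and training target are made up for the example, and a spreadsheet can do exactly the same.

```python
# A minimal sketch (hypothetical data) of the three basic comparisons described above.

# Hypothetical monitoring data: CHWs trained per district, by month.
chws_trained = {
    "District A": {"2016-06": 18, "2017-05": 22, "2017-06": 30},
    "District B": {"2016-06": 25, "2017-05": 20, "2017-06": 21},
}

annual_target = 500  # hypothetical annual training target

# 1. Against a target: percent of the annual target reached so far.
total_trained = sum(sum(months.values()) for months in chws_trained.values())
print(f"{total_trained} of {annual_target} CHWs trained ({total_trained / annual_target:.0%})")

# 2. Across groups: compare districts for the most recent month.
for district, months in chws_trained.items():
    print(f"{district}: {months['2017-06']} CHWs trained in June 2017")

# 3. Over time: compare this June with last June for each district.
for district, months in chws_trained.items():
    change = months["2017-06"] - months["2016-06"]
    print(f"{district}: change from June 2016 to June 2017 = {change:+d}")
```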

Additional questions to consider when designing an M&E system for CHW programs:

Who is providing the services, and who is receiving them?  What methods are most efficient for collecting data from these individuals?

What services are being provided by the CHWs, what gaps are they addressing, and what gaps still exist?  How can data be collected on a) the services provided and b) the quality of those services?  Measuring quality can be more resource intensive, so consider doing quality checks on just a sample of the services through direct observation, supportive supervision, or client exit interviews.
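For instance, a simple random sample of CHWs can be drawn for each quarter’s observation visits; the sketch below assumes a hypothetical roster and sample size.

```python
import random

# Hypothetical roster of CHWs in a catchment area.
chw_roster = ["Abeba", "Chaltu", "Desta", "Genet", "Hana", "Kebede", "Lemlem", "Marta"]

# Draw a simple random sample for this quarter's direct-observation quality checks.
random.seed(2017)  # fixed seed so the selection can be reproduced and documented
sample_size = 3    # hypothetical; set according to available supervision time and budget
observation_sample = random.sample(chw_roster, sample_size)

print("CHWs selected for direct observation this quarter:", observation_sample)
```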

Where are services being provided (community versus facility)?  How are the community and facility services linked?  How can improvements in those linkages be measured and documented?

A sampling of data collection methods used for CHW programs:

This list of methodologies for CHW programs is by no means exhaustive.  It is meant to provide a variety of (reasonably) easy-to-implement options for CHW program implementers.

  • Client records: wherever possible, linked to existing HMIS, or at least coordinated with other implementing partners and in collaboration with local officials
  • Supportive supervision: wherever possible, in collaboration with facility staff and/or local officials
  • Client exit interviews or satisfaction surveys: short surveys administered to a sample of clients about their level of knowledge after a visit and/or their satisfaction with the services provided
  • Direct observation of services provided: even systematically observing a small sample of CHWs or services can provide valuable information and help improve quality
  • Community/facility level quality improvement cycles that include ongoing data collection and use
  • Qualitative methods such as focus group discussions or interviews with key stakeholders
  • Mapping of services available or provided in a community (and using the map data for decision making, resource allocation, and tracking change over time)

Given the disconnect in some countries between the work conducted by CHWs and the data captured in the national HMIS, M&E can be particularly challenging for CHW programs.  Starting from an end-use perspective (“What questions do we want to answer with our data?”) can help focus M&E system development on what is most critical: designing programs that address real needs, monitoring the implementation and quality of programs, making adaptations based on the monitoring data, and evaluating the effectiveness of CHW programs.

References:

[1] USAID. USAID Evaluation Policy. January 2011. Available at: https://www.usaid.gov/evaluation/policy

[2] Lant Pritchett, Salimah Samji, and Jeffrey Hammer. It’s All About MeE: Using Structured Experiential Learning (“e”) to Crawl the Design Space. Available at: http://www.cgdev.org/sites/default/files/its-all-about-mee_1.pdf

[3] CRS and ARC. Shortcuts in Monitoring and Evaluation. September 2008. Available at: http://pdf.usaid.gov/pdf_docs/Pnadq477.pdf
 

Dr. Jen McCutcheon (MSc(PT), MPH, DrPH) is a public health professional with more than 15 years of experience. She specializes in capacity development, monitoring and evaluation (M&E), and quality improvement. For the past eight years, she has supported the management and implementation of projects in Ethiopia, Nigeria, South Sudan, Tanzania, Uganda, and Zimbabwe. Jen’s capacity development work spans the individual, organization, and government levels, and she has helped to develop tools to measure the capacity development of both organizations (local and international) and governments (national and sub-national). She is a strong facilitator and has led M&E and sustainability trainings for staff in the US, the Caribbean, and throughout sub-Saharan Africa. She also guest lectures at Harvard, Boston University, and Brandeis University. Her doctoral research focused on a community and facility quality improvement approach designed to improve the quality of maternal and newborn care in rural Ethiopia.

 

 

