This step was written by Scott Tytheridge, Assistant Trial Manager, ARCTEC, London School of Hygiene & Tropical Medicine.
Evaluation of programmatic outcomes is a vital feedback mechanism for periodically documenting whether programme activities have led to the expected improvement in human health. The significance of evaluation is reflected in the third pillar of the Global Vector Control Response (GVCR) 2017-2030, which states that to support ‘effective locally-adapted sustainable vector control’ we must ‘enhance vector surveillance and monitoring and evaluation of interventions’ 1. This article focuses on evaluation in the context of integrated vector management (IVM).
Evaluation almost always goes hand in hand with monitoring, which entails the continuous ‘tracking of programme performance’ and ‘checking progress against pre-determined objectives’ and/or indicators 1. Monitoring and evaluation have separate functions; their combination, however, allows an understanding of the cause-and-effect relationship between implementation and impact 2.
Monitoring and evaluation indicators
One of the challenges of monitoring and evaluation is identifying measurable indicators. The most valuable indicators are quantitative or logical 2, 3. Some processes, however, cannot be measured quantitatively and require a more descriptive, qualitative approach. Consequently, a combination of strong quantitative and ‘softer’ qualitative indicators is desirable 2, 3. Systematic data collection is essential for measuring progress and for making cross-comparisons with other studies and with country or regional control programmes.
The improvement in the structure or implementation of vector control that IVM seeks can be viewed as the ‘outcome’. Improvement is needed in all components of the IVM strategy; thus, outcome indicators are required for each. In the IVM Handbook 4, these components are: policy; institutional arrangements; organisation/management; planning and implementation; advocacy, communication and social mobilisation; and capacity-building. Each outcome indicator should be a simplification of reality, making it easier to objectively determine whether progress has been made and to compare between country or regional programmes 2. Reality, however, is rarely so simple and cannot always be captured by a single indicator. For example, a country may not have a fully implemented national policy for IVM but may have put measures and steps in place to begin implementation. A simple outcome indicator for ‘national IVM policy in place’ would show no improvement, whereas including associated input and process indicators 2 would provide evidence of some improvement.
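To make the distinction concrete, the policy example above could be recorded as a small indicator set that combines a binary outcome indicator with associated input and process indicators. This is an illustrative sketch only; the specific indicators are hypothetical and not taken from the IVM Handbook:

```python
# Hypothetical indicator set for the 'policy' component of IVM.
# The binary outcome indicator alone would show no progress, while
# the input and process indicators capture intermediate steps.
policy_indicators = {
    "outcome": {"national IVM policy in place": False},
    "input": {"budget line allocated for IVM": True},
    "process": {"draft policy under ministerial review": True},
}

def progress_summary(indicators):
    """Count how many indicators are met at each level,
    revealing partial progress that the outcome indicator hides."""
    return {
        level: sum(items.values())  # True counts as 1, False as 0
        for level, items in indicators.items()
    }

summary = progress_summary(policy_indicators)
# The outcome level shows 0, but input and process each show 1,
# providing evidence of some improvement.
```

A structure like this also makes cross-programme comparison straightforward, since each level can be tallied in the same way for every country or regional programme.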
Evaluation entails an assessment of the impact that can be attributed to an intervention or strategy 2. Evaluating the impact of IVM, however, is more difficult than measuring outcome indicators. The main challenge of impact evaluation is attributing the observed effect to the intervention. For example, IVM may be expected to affect vector population density or composition, but these are influenced not only by the IVM interventions themselves but also by external factors such as climate and seasonality.
It is therefore recommended to incorporate an experimental design into the evaluation plan, comparing the impact in the intervention area with that in a control setting without IVM. This helps to control for confounding factors, so that any observed impacts can be attributed to the intervention more reliably 2. However, this raises an ethical consideration: in many settings, deliberately withholding access to proven vector control tools from a population so that it can serve as the control group is, rightly, considered unethical.
Therefore, in certain plans, ‘stepped-wedge’ designs have been used as a compromise between the demands of systematic evaluation and operational requirements 2, 5, 6, 7 (Figure 1). This design provides two distinct benefits: 1) in resource-poor settings, the researcher can implement the intervention in a smaller number of clusters at each time point; 2) the crossover to implementation is unidirectional: all clusters eventually receive the intervention, which can reduce ethical concerns and improve community acceptability 3 (Figure 1).
Figure 1. Stepped-wedge design. In these studies, the intervention (e.g. insecticide-treated nets or larviciding) is rolled out randomly to clusters in a staged fashion so that, by the end of the study, all clusters have received the intervention. Adapted from 6.
Data for evaluation
Data collection procedures should be adapted to the needs of the evaluation. For example, evaluation can be based on routine surveillance data to indicate the impact of an intervention on disease incidence or vector density. Alternatively, separate surveys may be required. If routine surveillance data are used, additional ethical approval is most often not needed 2, 3.
In summary, evaluation encompasses the periodic assessment of any changes or improvements in a target outcome that may be attributed to the programme or project intervention. It aims to link a particular outcome directly to that intervention or programme after a certain period of time. A functioning system of monitoring and evaluation, with indicators of outcome and impact, is vital for the success of the IVM strategy.
© London School of Hygiene and Tropical Medicine 2020