
How do we know if an intervention is effective?

In this step, we’ll learn about impact evaluations. Impact evaluations are an increasingly common part of evidence-based programming for persons with and without disabilities. Designing and conducting impact evaluations is an entire specialism, but at their heart there are just a few key ideas, which we’ll cover in this session. The first question you might have is: what is an impact evaluation? An impact evaluation uses data and research design to work out what effect a programme, policy, or other intervention has had on outcomes we are interested in. For example, an impact evaluation could be used to answer the question: did our programme reduce symptoms of anxiety among our target population?
In practice, impact evaluations and monitoring and evaluation (M&E) systems will have similarities in terms of the data collected. However, M&E systems are best thought of as being part of the programme, while the impact evaluation takes a view from outside the programme. Think about how much a good M&E system can improve a programme. A strong M&E system is the intervention, or at least part of it: M&E allows progress to be checked and alterations to the programme to be made accordingly. The impact evaluation would ask: what effect does this programme, including its M&E system, have on outcomes?
You might now be wondering why we would want to do an impact evaluation to work out the effects of an intervention. There are at least three reasons why impact evaluations are useful. The first is that they can tell us what is working and what is not. If our impact evaluation found that our programme had not reduced the outcome of interest, such as anxiety levels, then we might want to stop the programme or change it to make it better. The second is that programmes may need to be held to account, whether to the government, the funder, or the local community.
Since programmes have many costs, in time, money, and people’s energy, it is important to know whether a programme is having some benefit. The third is that it can be useful for future decision-making to know what benefits can be achieved for the cost of the programme. The results of an impact evaluation can be used along with detailed costing data to work out the cost-benefit of the programme. Policymakers can use this information to make efficient choices about scarce resources.
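To make that last point concrete, here is a minimal Python sketch of a simple cost-per-outcome calculation of the kind that feeds into fuller cost-benefit analyses. All of the figures and variable names are hypothetical and are not taken from the course:

    # Hypothetical figures for illustration only
    programme_cost = 50_000.0   # total cost of delivering the programme
    people_reached = 1_000      # number of people offered the programme
    effect = 0.20               # estimated 20-percentage-point reduction in anxiety prevalence

    # A simple cost-per-case-averted figure that could sit alongside a fuller cost-benefit analysis
    cases_averted = people_reached * effect
    cost_per_case_averted = programme_cost / cases_averted
    print(f"Cost per case of anxiety averted: {cost_per_case_averted:.2f}")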
So now we know that impact evaluations are about the effects of programmes, and why they are needed. How are they done? The core of all impact evaluations is measuring the outcomes for people who were offered the programme and then trying to work out what the outcomes would have been had they not been offered it. The difference between the outcomes they have and the outcomes they would have had without being offered the programme is the effect of the programme.
For example, if 20% of people in our target population have moderate or severe anxiety symptoms after taking part in the programme, and it would have been 40% without the programme, then the effect of the programme was to reduce anxiety levels by 20 percentage points.
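As a simple illustration of that calculation, here is a short Python sketch using the hypothetical numbers from the example above (these figures are illustrative, not course data):

    # Outcome: share of the target population with moderate or severe anxiety symptoms
    with_programme = 0.20      # observed after taking part in the programme
    without_programme = 0.40   # estimate of what it would have been without the programme

    # The programme effect is the difference between what happened and the counterfactual
    effect = with_programme - without_programme
    print(f"Effect: {effect * 100:+.0f} percentage points")  # -20 percentage points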
You may be wondering: how can we know what would have happened without the programme if, in reality, people were offered it? If you are wondering that, then you have identified the central challenge for all impact evaluations. The problem is that we cannot observe people both with and without the programme at the same time. So what can we do? Well, we can measure people who did not take part in the programme and use their outcomes as an estimate of what would have happened had our target group not taken part. To make sure that this is a fair comparison, researchers will often create two groups of people at random and only offer the programme to one group.
Then, after the programme is complete, they can compare the groups. Since the groups were determined at random, there should be very few differences between them other than that one had the programme while the other did not. This is what is done in a Randomised Controlled Trial, or RCT.
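To make the idea of a randomised comparison concrete, here is a small, self-contained Python sketch. It randomly assigns hypothetical participants to a programme group or a comparison group, simulates an outcome, and compares the two groups; the sample size and effect are invented for illustration and are not from the course:

    import random

    random.seed(1)
    n = 2_000  # hypothetical number of participants

    # Randomly assign each person to be offered the programme (True) or not (False)
    offered = [random.random() < 0.5 for _ in range(n)]

    # Simulate whether each person has moderate or severe anxiety afterwards,
    # assuming a 40% risk without the programme and 20% with it (hypothetical)
    anxious = [random.random() < (0.20 if o else 0.40) for o in offered]

    # Because assignment was random, the difference between the groups estimates the effect
    programme_group = [a for a, o in zip(anxious, offered) if o]
    comparison_group = [a for a, o in zip(anxious, offered) if not o]
    effect = sum(programme_group) / len(programme_group) - sum(comparison_group) / len(comparison_group)
    print(f"Estimated effect: {effect * 100:+.1f} percentage points")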
There are sometimes problems with using randomisation, in which case alternative methods can be used. These all require more complicated statistical approaches and data, but they are essentially all trying to estimate what would have happened in the group who were offered the programme if the programme had not been offered to them. We have mentioned that to do an impact evaluation, we need to measure the outcomes that we’re interested in, usually both from the people who were offered the programme and from some people who were not. These data are usually collected with a questionnaire. We might have multiple outcomes that we’re interested in, and the questionnaire would need to ask about all of them.
Since we want our comparison group to be as similar as possible to the people who were offered the programme, we need to ensure that the data are collected in exactly the same way for all participants. We said at the start that impact evaluations are about working out the effects of programmes. Unfortunately, not all effects of programmes are good ones. If we can anticipate the problems that the programme might cause for some people, then we can include relevant questions in the questionnaire. However, sometimes there are unforeseen consequences that would be missed if we only collected data in this way.
That’s why a good impact evaluation will also collect qualitative data to allow for open-ended discussion of the benefits and the harms of the programme, as seen from the perspective of the programme participants themselves. In the next step, you’ll learn about process evaluation, which complements impact evaluation by exploring how a programme worked or didn’t work. In summary, you should now understand that impact evaluations tell us the effect of programmes; that we do impact evaluations for many reasons, but comparing what happened with what would have happened is the core idea; that doing so is difficult and needs careful comparisons with people who were not offered the programme; and that qualitative data can help identify any unanticipated effects.

In this step, Dr Calum Davey introduces impact evaluations. Impact evaluations are an increasingly common part of evidence-based programming for persons with and without disabilities. An impact evaluation uses data and research design to work out what effect a programme, policy, or other intervention has had on outcomes we are interested in.

Dr Davey is an Assistant Professor at the International Centre for Evidence in Disability.

As ever, please share your thoughts on this step below.

This article is from the free online course Global Disability: Research and Evidence.
