
How do we measure the impact of development?

How and why do we define and measure impact?
Over the last few weeks you’ve been introduced to the Sustainable Development Goals, and you’ve heard many case studies and examples of investments that are being made globally to achieve these goals. The combined total investment in development programmes from international donors and local governments is enormous: development aid alone amounted to almost US$143 billion in 2016. So how do we evaluate the success of these investments? Despite the best of intentions, target beneficiaries do not always experience the desired impacts. Sometimes this is because of a lack of implementation capacity, or because of corruption. But it may also be that we do not know whether something really worked.
This can be due to weak evaluation systems, which can lead to ineffective programmes continuing without scrutiny, wasting valuable resources. Trinity College Dublin is home to the Trinity Impact Evaluation Unit, known as TIME. We are a team of eight development economists, some of whom you will hear from in a little while. Our research focuses on understanding which development programmes work, which do not, and why. TIME’s vision is to “provide strong evidence of what works, so that better investments with a real impact on people’s lives can be made.” We use frontier economic techniques to understand why a particular programme did or did not work, and what aspects of a programme would be relevant in a different context.
This is not only valuable for the organisations we collaborate with, who are seeking to understand the impact of a specific project; it also provides important learning for the wider development and academic community. But this is challenging. The key challenge in evaluating the impact of a programme lies in identifying causality: how can we truly attribute observed changes in outcomes to the programme? Ideally, we would like to observe the same person both with and without exposure to a programme, so that we have a good counterfactual. But a person is either exposed to a programme or is not; there is no counterfactual. Nor can we simply compare outcomes for the same individual before and after the programme is implemented.
There are many other factors that could have changed and also affected outcomes. To isolate the impact of the programme, and to know whether it worked, we need to rule out all of these other factors that might also be correlated with the policy. Let’s consider a simple example. Suppose that you are leading an NGO or a government agency in a developing country, and you are looking for simple and effective ways of decreasing neonatal mortality – that is, mortality of a child in the first 28 days of life. In particular, suppose that you want to know whether and by how much just having access to prenatal care services can decrease neonatal mortality. How do you estimate that?
As a first guess, you might think that you could compare neonatal mortality rates of women that attended prenatal care services with the neonatal mortality rates for women that did not attend prenatal care services. But is this really going to tell you the impact of prenatal care? Think about it. Women that attended prenatal care might be very different from women that did not attend it. They might be, for instance, more likely to live close to a health facility, relatively wealthier, or more educated. These characteristics might themselves lead to lower neonatal mortality, even if the women did not attend the prenatal care.
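This selection problem can be made concrete with a small simulation. The sketch below uses entirely made-up numbers: a hypothetical unobserved "health-conscious" trait that makes a woman both more likely to attend prenatal care and lower-risk to begin with. The naive treated-versus-untreated comparison then badly overstates the true effect of the programme.

```python
import random
import statistics

# Illustrative simulation of the selection problem. All numbers (attendance
# rates, mortality risks, the "health-conscious" trait) are assumptions for
# exposition, not real epidemiological figures.
random.seed(42)

TRUE_EFFECT = -0.02  # assume prenatal care cuts mortality risk by 2 percentage points

def simulate_naive_comparison(n=100_000):
    attended, did_not_attend = [], []
    for _ in range(n):
        # Unobserved trait: drives BOTH attendance AND baseline risk.
        health_conscious = random.random() < 0.5
        attends = random.random() < (0.8 if health_conscious else 0.2)
        baseline_risk = 0.03 if health_conscious else 0.07
        risk = baseline_risk + (TRUE_EFFECT if attends else 0.0)
        death = 1 if random.random() < risk else 0
        (attended if attends else did_not_attend).append(death)
    # Naive estimate: difference in mortality rates between the two groups.
    return statistics.mean(attended) - statistics.mean(did_not_attend)

naive_gap = simulate_naive_comparison()
print(f"naive estimate: {naive_gap:+.3f}, true effect: {TRUE_EFFECT:+.3f}")
```

With these made-up parameters the naive gap comes out roughly twice as large as the true effect, because attenders are disproportionately the low-risk, health-conscious women.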
So, if you want to isolate the impact of prenatal care, you need to take these additional factors into account in some way. Otherwise, they confound your comparison. You could, for instance, decide to compare only women that live at a similar distance from a health facility and that have similar wealth or education. Still, controlling for these most obvious confounding factors is unlikely to solve all your problems. It might still be the case that women attending prenatal care services are more likely to have a relative working in the health sector, or might naturally care more about health, or they might be less afraid of doctors and hospitals.
These characteristics, which are much harder to observe than wealth or education, might lead to lower neonatal mortality, even in the absence of prenatal care. In sum, you face an identification issue. By comparing the outcomes for women that attended prenatal care and women that did not attend prenatal care, you cannot be sure that the difference in neonatal mortality that you observe is genuinely driven only by prenatal care services. So what can you do? There are a number of different quantitative methods that can be used to evaluate impact, and they can be divided into two main groups: experimental methods, such as randomised control trials, and non-experimental methods, which rely on observational data.
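The point that controlling for observed characteristics is not enough can also be sketched in code. In this illustrative simulation (all parameters are made up), wealth is an observed confounder we can stratify on, while "cautiousness" is an unobserved one we cannot. Even the within-wealth comparison remains biased.

```python
import random
import statistics

# Illustrative: stratifying on an OBSERVED confounder (wealth) still leaves
# bias from an UNOBSERVED one (cautiousness). All parameters are assumptions.
random.seed(1)

TRUE_EFFECT = -0.02

def stratified_estimate(n=200_000):
    cells = {(w, a): [] for w in (0, 1) for a in (0, 1)}  # (wealthy, attends) -> deaths
    for _ in range(n):
        wealthy = int(random.random() < 0.5)    # observed: we can stratify on this
        cautious = int(random.random() < 0.5)   # unobserved: we cannot
        attends = int(random.random() < 0.2 + 0.3 * wealthy + 0.3 * cautious)
        risk = 0.08 - 0.02 * wealthy - 0.03 * cautious + TRUE_EFFECT * attends
        cells[(wealthy, attends)].append(1 if random.random() < risk else 0)
    # Attender-vs-non-attender mortality gap WITHIN each wealth stratum,
    # averaged across strata.
    gaps = [statistics.mean(cells[(w, 1)]) - statistics.mean(cells[(w, 0)]) for w in (0, 1)]
    return statistics.mean(gaps)

est = stratified_estimate()
print(f"within-wealth estimate: {est:+.3f}, true effect: {TRUE_EFFECT:+.3f}")
```

Because cautious women are still over-represented among attenders within every wealth stratum, the stratified estimate continues to overstate the true effect.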
The key objective is to find two groups that are comparable in every aspect except for the intervention or the policy we want to evaluate. Randomised control trials are an effective method for impact evaluation and allow us to understand whether a specific programme works. In particular, in randomised control trials the intervention can be tailored to the specific question at hand. The main idea behind a randomised control trial is to randomly assign individuals into treatment and control groups. Individuals in the treatment group will be exposed to the specific intervention we want to evaluate, while individuals in the control group will not.
We can then obtain the average impact of the specific intervention by comparing outcomes between individuals in the treatment group and individuals in the control group, who were not exposed to the programme but are the same in every other way. Allocating individuals at random to treatment and control groups provides us with an appropriate counterfactual. This is the key feature of randomised control trials. In a randomised control trial, by design, the treatment and comparison groups have on average the same characteristics, both observed and unobserved. With a large enough sample, all characteristics average out and we can obtain an unbiased estimate of the average impact of a programme or policy.
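Returning to the illustrative simulation: the same unobserved trait is still there, but coin-flip assignment balances it across the two arms, so the simple difference in means recovers the true effect. As before, every number is a made-up assumption.

```python
import random
import statistics

# Illustrative RCT: random assignment is independent of the unobserved trait,
# so treatment and control groups are comparable by design. All parameters
# are assumptions for exposition.
random.seed(0)

TRUE_EFFECT = -0.02

def simulate_rct(n=200_000):
    treated, control = [], []
    for _ in range(n):
        health_conscious = random.random() < 0.5   # unobserved, but balanced by design
        baseline_risk = 0.03 if health_conscious else 0.07
        assigned = random.random() < 0.5           # coin-flip assignment, independent of traits
        risk = baseline_risk + (TRUE_EFFECT if assigned else 0.0)
        (treated if assigned else control).append(1 if random.random() < risk else 0)
    # Simple difference in means between the two arms.
    return statistics.mean(treated) - statistics.mean(control)

estimate = simulate_rct()
print(f"RCT estimate: {estimate:+.3f}, true effect: {TRUE_EFFECT:+.3f}")
```

With a sample this large, the estimate lands very close to the true two-percentage-point reduction, in contrast to the biased observational comparisons above.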
The aim is that the only difference between the treatment and the control group is the exposure to the specific intervention. This does not mean that randomised control trials are the magic solution to understanding the effect of a programme. For example, despite the randomisation, the samples selected for the treatment and control groups might still, by chance, differ along certain characteristics. These might be hard to measure, but important for the outcomes of a programme. Ultimately, for a policymaker it is crucial to know how generalisable the findings are. This requires careful consideration of a number of issues. The study population might be different from the population of interest for which we want to implement a programme.
A randomised control trial might be carried out in a small geographic area of a country, while the question relevant to policymakers is often: is this a good policy at a national scale? We need to understand whether a programme that was implemented locally by an NGO could be scaled up so that government agencies could successfully roll it out country-wide. And does it matter for outcomes who is implementing the programme? We also need to think about whether the effect is likely to be similar when we implement the same programme with a different population in a different region, country, or continent altogether. To answer these questions, we need to know much more than whether a policy had the intended effect on target individuals.
We need to know how it worked and why it worked. We need a detailed understanding of social interactions in the population, cultural factors, as well as the political and economic environment, and how all these are relevant for potential outcomes. The most appropriate method for understanding the effect of a programme therefore depends on the specific question. Randomised control trials are one method among many; the article following this video will show you some of the other methods that can be used.

In this video, Carol, Andrea, Martina, and Gaia have talked about the importance of measuring the impact of sustainable development projects. As Carol explains, the combined total investment in development programmes from international donors and local governments was almost $143bn in 2016.

Evaluating whether and how these investments were successful can be difficult. Often, the people who need the intervention the most do not experience the benefits. This can be due to a lack of implementation capacity or corruption.

But it can also be that we do not know whether something really worked, because of weak evaluation systems. This highlights the importance of strong evaluation techniques for sustainable development projects.

This article is from the free online course Achieving Sustainable Development, created by FutureLearn.
