[0:12] JYOTSNA PURI: Impact evaluations are evaluations that identify the change that is caused by a programme or an intervention, and that measure that change in statistically confident ways. So they are especially important in the humanitarian context because of unmet need. Let me explain. In 2014, the World Humanitarian Assistance Report stated that 37% of humanitarian needs were still unmet. In 2016, the UN Secretary-General's report put unmet need at 47%. Put another way, humanitarian assistance essentially has to double to meet the current needs of at-risk and vulnerable populations. This means that every dollar invested in humanitarian assistance has to matter.
[1:08] It has to go to the right people. It has to go for the right things. And it has to be done in the right way. Which means we, in turn, need evidence that all of this is working. Impact evaluations are that instrument: they can help us understand what's working, what's not, for whom, and how. So 3ie is supporting, and has supported, 16 impact evaluations in this space, and they span the world. They are located in Liberia, Chad, DRC, Somalia, Ethiopia, Afghanistan, Kyrgyzstan, and elsewhere. And I'm going to recount a few learnings from these programmes as they are being implemented.
[1:47] The first is that we can't discount the complexity of the context in designing and engaging with impact evaluations. And I don't mean complexity only in the usual sense of the word: multiple actors, multiple programmes, changes in temporal designs, et cetera. That is definitely there, but there are additional facets. One, there is very little capacity on the ground to absorb and understand impact evaluations. Two, there is very little data, not just at baseline but over time as well. And there is high covariance, which means that large swaths of population and landscape are affected at the same time, so it becomes very difficult to design impact evaluations.
[2:33] And the ethics of impact evaluations are usually called into question. This is all compounded by the fact that there is urgent need and there are a whole lot of other priorities, which means that impact evaluations get discounted. The second is engagement with implementation, and implementation fidelity is key. What do I mean by that? Knowing how well the programme is rolling out is really important, and not many agencies know that. But knowing how to be flexible with impact evaluation designs that are being implemented on the ground is almost as important.
[3:14] We've had to deal with this in all of the impact evaluations we are supporting on the ground, whether these are WFP programmes, UNICEF programmes, or ACF programmes: you have to be able to change the designs very quickly so that they are nimble and flexible and answering relevant questions. The third is that learning about implementation fidelity is key, especially because we want to understand not just what's working at the end, but also how things are working: what's the frequency, what's the dosage of a programme, and what evidence do we have about how these programmes are working?
[4:01] So knowing all of that is key, and not very many agencies are currently structured in a way to answer those questions. Which then brings me to one of the most critical aspects of our learning: engagement right from the beginning is really, really important. Engagement between the impact evaluation team and the agency that's rolling out these programmes matters, not just because we need to design impact evaluations around questions that are relevant, but also so that the implementing agency can be engaged in the rollout and can own the question as well as the results coming out of these impact evaluations, which adds to the credibility of the evaluations as well.
[4:50] Of the last two learnings, the first is engaging with other data. What we are finding is that there is a huge swath of data that exists in relatively unknown pockets. For example, the Pakistan earthquake response did not use, at that time, the Pakistan Living Standards Measurement Survey to understand baseline attributes of the populations that were affected. There's a whole lot of big data available on the conditions of earthquakes, on which areas are most vulnerable, and on where we are most likely to see increasing fragility and conflict. Those data sets can and should be exploited in our impact evaluations. In Africa, for example, we are working very closely with UNICEF and WFP to use data sets they've already collected.
[5:42] Last but not least, we're learning that quasi-experimental methods can and should be used for impact evaluations. They've been discounted until now, primarily because a lot of people think that randomisation is the only way to go. But I'd really like to underscore the use of quasi-experimental methods, because they can be flexible and they can be very rigorous as well. So for example, in DRC our teams are using regression discontinuity to understand the effectiveness of UNICEF programmes. So I'm going to give you two quick examples, of which that is the first.
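To make the regression discontinuity idea concrete, here is a minimal sketch in Python. This is purely illustrative and not the actual 3ie/UNICEF analysis: it assumes a hypothetical "sharp" design in which units at or above a cutoff on an eligibility score receive the programme, and it estimates the effect as the jump in outcomes at the cutoff using a simple local linear fit on each side.

```python
# Illustrative sharp regression discontinuity estimate (assumed design,
# synthetic data -- not drawn from any real evaluation).
import numpy as np

def rdd_estimate(running, outcome, cutoff, bandwidth):
    """Fit a line on each side of the cutoff within the bandwidth and
    return the difference in fitted values at the cutoff itself."""
    x = np.asarray(running, dtype=float) - cutoff
    y = np.asarray(outcome, dtype=float)
    in_window = np.abs(x) <= bandwidth
    left = in_window & (x < 0)    # just-ineligible comparison units
    right = in_window & (x >= 0)  # just-eligible treated units
    # np.polyfit(deg=1) returns [slope, intercept]; intercept = value at x = 0
    _, left_at_cutoff = np.polyfit(x[left], y[left], 1)
    _, right_at_cutoff = np.polyfit(x[right], y[right], 1)
    return right_at_cutoff - left_at_cutoff

# Synthetic data with a true programme effect of 5.0 at a cutoff of 50
rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 2000)              # hypothetical eligibility score
outcome = 0.2 * score + 5.0 * (score >= 50) + rng.normal(0, 1, 2000)
effect = rdd_estimate(score, outcome, cutoff=50, bandwidth=15)
print(effect)  # recovers roughly 5.0
```

The point of the sketch is the one made in the talk: because units just below and just above the cutoff are comparable, the design can be rigorous without randomisation, which is why it suits settings where a lottery is infeasible or unethical.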
[6:20] The second example is a systematic review done by Shannon Doocy and Hannah Tappis at Johns Hopkins University. The review compares unconditional cash transfers with food transfers and vouchers, and it finds that unconditional cash transfers lead to higher levels of calorific intake and higher levels of assets and savings. The review collected 108 studies, though the impact evaluations it draws on cover only five countries, and it uses 10 studies to inform cost-effectiveness. It also finds that unconditional cash transfers are more cost-effective than either of the other two mechanisms
[7:11] by a ratio of 2:1. As a consequence of this systematic review, and of many other similar reviews and evidence collected on unconditional cash transfers, you'd be interested to know that the WFP executive director announced at the World Humanitarian Summit that WFP would spend 25% of its humanitarian assistance in the form of unconditional cash transfers. So this is just one example of how evidence can be used in very critical ways to influence decision making.
Evaluating impact and using evidence for decision making
In recent times we have witnessed an unprecedented number of humanitarian crises globally, but few resources have been mobilised to enable systematic learning from humanitarian interventions. Robust, rigorous, and theory-based impact evaluations are needed in order to inform policy makers and promote evidence-based action.
Here, Jyotsna Puri provides a brief overview of the ways in which impact evaluations are conducted in humanitarian settings. She also outlines some of the difficulties involved in performing these types of evaluations, and provides specific examples of work that 3ie has conducted in the field.
© London School of Hygiene & Tropical Medicine