This was a randomized experiment in Sierra Leone, studying a local institutional reform that tried to improve local governance in rural Sierra Leone. This was a program implemented just a couple of years after the end of the civil war in Sierra Leone, which was a very brutal civil war.
And the idea was that rural institutions in Sierra Leone were weak and quite undemocratic before the war. They were dominated by traditional chiefs. There was very little scope for meaningful participation in local politics in rural areas. But the hope was, look, maybe we can try out some new models at the village level. And if they work, extend them nationwide to bring much more grassroots democracy into villages in Sierra Leone. To make sure we don’t have the dissatisfaction and the anger that contributed to the civil war. Okay. So what is the program doing? Let’s get into the nitty gritty about what this program does. Three elements. Number one, big grants. Huge transfers. This is very typical for community driven development.
Big cash transfers come in as part of the program. It’s not just institutional reform, it’s also resources for these new institutions to use. It’s the equivalent of about 100 dollars per household in a transfer. In a place where per capita income, in exchange-rate terms, is maybe 300 or 400 dollars, this is a lot of money. It’s a big transfer.
And it was delivered in three tranches so there could kind of be multiple projects funded by these communities.
The second piece was a training piece to make government more efficient. There was training in bookkeeping and financial practices. There was a lot of contact. One day a week, for three and a half years, there was a representative of the project who spent a day in the village training local government officials. Holding meetings, trying to elicit preferences for local projects. One day a week for three and a half years. It was a huge investment actually. It was very expensive. And as part of that, an elected village development committee was formed. So, an elected government.
Finally, there were some mandates to increase representation by marginalized groups. There had to be some female representation on the village development committee. And a certain share of money was directed to projects run by women’s groups and youth groups. The youth groups could be male or female, but the idea was to get some of the money out of the hands of the male elders.
Okay. This is just a map of Sierra Leone. You can see Freetown over here. There is a political party that dominates the north and a political party that dominates the south. So there was one study area in each region, for political reasons. And then treatment and control were randomized within these areas. So what did people spend the money on? When they got all these cash transfers and these committees were set up and they had all these meetings and took votes to figure out what to spend money on, they spent it on a bunch of different things. But there were really two main categories. One was just infrastructure. They built latrines. They built school classrooms. They built wells.
They built bridges. So if you’re living here and there is a river between the two sides of your village, or between your village and the next one, it’s actually really important to have a bridge of some kind.
So that was about half the money. They also built – this was a pretty common one – drying floors. Rice drying floors. A flat, concrete surface where you can dry your rice before selling it. It turns out that if you don’t have that, you have to dry your rice on the dirt. A lot of rocks get in there and dirt gets in there. So it’s actually very important to have a communal drying floor. That was another common investment. Livelihood activities. A lot in agriculture, seeds, livestock, things to invest in agriculture. And a chunk went outside of agriculture. Training. There was some that went into training to become a carpenter or, you know, to have a skill.
Some went into buying tools for soap making. So a little bit of small business activity. Especially some of the money that went to the women’s groups and the youth groups here.
So what do we find? This is a good program. This was well implemented, well monitored. There was a lot of money and it had some positive benefits. So, first of all, the village development committees were set up. The local institutions the project said it would set up really were set up during the four years. And we can trace that 87 percent of the money that was sent from the capital actually was spent locally. The other 13 percent, we can’t exactly track. Some of it may have been leaked or stolen. Some of it could just be bad bookkeeping. So that 13 percent is the combination of those two things. This is a very low leakage level.
So this is a good program. Money was disbursed and things got built. Contractors built things. On every measure there were increases in the stocks of all the relevant public goods. The latrines, the drying floors, the schools. Stuff really got built. That was great! But when we look at the other class of outcomes, this is where things get controversial. And this is where the research design becomes really critical in some ways. We don’t find impacts. Again, we had hundreds of measures of gender inclusion, of democratic decision making, of social capital, of community participation and governance. All these positive attributes of active local governance and democratic governance. We don’t find impacts on them.
We don’t find increased collective action capacity in terms of fundraising after the grants run out.
Okay, so nothing sort of changed. So, we’re collecting this data. We’re getting these results in. And at this point we’re really happy we had a pre-analysis plan. And why is that? Basically, right at the point we were getting all of our data in and registering our pre-analysis plan, a qualitative study – a very good qualitative study – was done by some researchers funded by the World Bank. They went to, I think, 12 communities – maybe it was 15 – out of the 236, and really spoke in depth to residents in these different communities. And they wrote an incredibly strong report making the case that there was greater gender inclusion. There was greater youth inclusion.
There were all kinds of benefits in terms of local democracy and governance.
So maybe it was just the sample of villages they looked at, but we don’t get those results.
And, you know, our concern a little bit – and this is true of experimental research as much as qualitative – is that you have a lot of leeway in choosing the outcomes you focus on. There could be cherry picking. Or there could be selective reporting. So we decided to use a pre-analysis plan. So how do we do that? Right at the beginning of the program, we were already interested in this notion of a pre-analysis plan. And already a little bit concerned that there were a lot of outcomes people were talking about. There were a lot of different dimensions. A lot of things that could be affected. So, at the beginning of the program, in 2005, we got our partners to agree to a set of outcomes and hypotheses. So, ultimately, we have 12 families of hypotheses – 12 sets of hypotheses. We already had 10 of them in 2005.
Social capital, gender inclusion, participation, trust – all that stuff that you would think of. We wrote those in and we have a hypotheses document. And about half the outcomes were already in the baseline survey.
And, you know, I think again, the intellectual point is that tying our hands a little bit and forcing ourselves to report these outcomes prevents us from just telling a great story.
A pre-analysis plan example: A Sierra Leone study on GoBifo local institutional reforms
A few years after the end of a violent civil war in Sierra Leone, three researchers from the Abdul Latif Jameel Poverty Action Lab (J-PAL) and UC Berkeley, myself included (Prof. Miguel, here!), carried out a Randomized Controlled Trial (RCT) to evaluate the impacts of an intervention that sought to strengthen public participation in governance institutions. The intervention, called GoBifo, involved cash transfers, trainings, and outreach to historically marginalized groups. Our pre-registered study demonstrated some positive impacts of the project, but not necessarily the intended benefits.
What makes our use of a pre-analysis plan more interesting is that a simultaneous, qualitative study carried out by the World Bank found much more positive impacts. The World Bank study, however, was not registered, calling into question whether its results were selectively reported. Which do you think is more valid?
This article is a good example of when a pre-analysis plan was useful. Katherine Casey, Rachel Glennerster, and I exploited the random assignment and participation of communities in a governance program to evaluate its impact. Our use of a pre-analysis plan turned out to be very valuable in improving the validity of our findings.
We set out to assess whether a “community-driven development” project, implemented in post-conflict Sierra Leone, improved local economic outcomes and institutional capacity in the form of democratized action, decision-making, and inclusivity. We found short-run positive effects on local economic outcomes, but no evidence of strengthened institutions.
In the paper, we discuss the general benefits of using a pre-analysis plan, or PAP:
“While the experimental framework naturally imposes some narrowing of econometric specifications, there is still considerable flexibility for researchers to define the outcome measures of interest, group outcome variables into different hypothesis ‘families’ or domains, identify population subgroups to test for heterogeneous effects, and include or exclude covariates. PAPs are arguably particularly valuable, therefore, when there are a large number of plausible outcome measures of interest and when researchers plan to undertake subgroup analysis.”
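One standard way to handle this kind of many-outcome problem is to group related outcomes into a hypothesis family and aggregate them into a single standardized index (our paper uses family-level mean effects indices in this spirit). Here is a minimal Python sketch of that idea; the function and the data are invented for illustration and are not the paper’s actual code:

```python
import numpy as np

def mean_effects_index(outcomes, control_mask):
    """Standardize each outcome against the control group's mean and
    standard deviation, then average the z-scores into one index.

    outcomes:     (n_units, n_outcomes) array, one column per outcome
    control_mask: boolean array, True for control-group units
    """
    control = outcomes[control_mask]
    z = (outcomes - control.mean(axis=0)) / control.std(axis=0, ddof=1)
    return z.mean(axis=1)  # one index value per unit

# Illustrative data: 200 villages, 5 outcomes in one hypothesis family
rng = np.random.default_rng(0)
treated = np.arange(200) < 100
y = rng.normal(size=(200, 5))
y[treated] += 0.25  # an invented treatment effect on every outcome

index = mean_effects_index(y, ~treated)
effect = index[treated].mean() - index[~treated].mean()
```

Because the family index is pre-specified, a single test on `effect` replaces five separate tests, which reduces the scope for picking out whichever individual outcome happens to look best.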
Moreover, “[t]he process of writing a PAP may have the side benefit of forcing the researchers to more carefully think through their hypotheses beforehand, which in some cases could improve the quality of the research design and data collection approach.”
We were careful to include some of the risks and trade-offs involved in using a PAP, including the concern “that important hypotheses will be omitted from the initial plan,” and “that the exact econometric specification laid out in advance does not describe the data as well as one that would have been chosen ex post if the authors had first ‘let the data speak,’ potentially leading to less precise estimates.”
We offer suggestions of ways to mitigate these risks and “advocate a compromise position that allows some researcher flexibility accompanied by the ‘price tag’ of full transparency — including a paper trail of exactly what in the analysis was prespecified and when, and public release of data so that other scholars can replicate the analysis — with the hope that this approach will foster the greatest research progress.”
Overall, the value of using a PAP is clear, especially when the results of this registered study are compared with those of an unregistered evaluation of the same program. As explained in the previous video, an unregistered study carried out by the World Bank found more positive impacts than we did. It is difficult to tell, however, whether selective reporting or specification searching was involved in that analysis.
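The worry about selective reporting can be made concrete with a small simulation; the numbers below are invented and describe neither study. When a program truly has no effect, a researcher who measures enough outcomes and reports only the "significant" ones will still have something to show:

```python
import numpy as np

# Hypothetical setup: 200 villages (100 treated, 100 control) and 100
# outcome measures, where the program truly affects NOTHING.
rng = np.random.default_rng(42)
n_villages, n_outcomes = 200, 100
treated = np.arange(n_villages) < n_villages // 2
y = rng.normal(size=(n_villages, n_outcomes))  # pure noise

# Simple difference-in-means t-statistic for each outcome
diff = y[treated].mean(axis=0) - y[~treated].mean(axis=0)
se = np.sqrt(y[treated].var(axis=0, ddof=1) / treated.sum()
             + y[~treated].var(axis=0, ddof=1) / (~treated).sum())
false_positives = int((np.abs(diff / se) > 1.96).sum())

# At the 5% level, roughly 5 of the 100 null outcomes will look
# "significant" by chance alone.
print(false_positives)
```

A pre-analysis plan blocks exactly this move: the outcomes and families are fixed before the data come in, so chance "hits" cannot be promoted after the fact.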
You can read the whole paper, the pre-analysis plan, and a summary of our evaluation for J-PAL by clicking on the links in the SEE ALSO section at the bottom of this page.
Casey, Katherine, Rachel Glennerster, and Edward Miguel. 2012. “Reshaping Institutions: Evidence on Aid Impacts Using a Preanalysis Plan.” The Quarterly Journal of Economics 127 (4): 1755–1812.
© Center for Effective Global Action