[0:02] So now we're going to combine stuff. Again, there are doubts. Maybe the standardization is good or bad. There may be doubts about the specification. But let's say we've done the best we can; there's always going to be some controversy on the margin. Now let's try to figure out what the bottom-line estimate is. Say we have J studies, and the estimated treatment effect is beta hat sub j for each one, with a sample size of N sub j. The simplest case is where all the studies were generated by the same process: there's the same underlying beta and the same sigma, but the different studies just have different sample sizes.

[0:43] So this would be the case, for instance, if you had a lab. Think about a lab experiment; it's kind of an easier case. Imagine you ran the same lab experiment 10 times, getting random draws from the population of whatever university sample you were using. Sometimes you did it with 50 subjects and sometimes with 100, and whatnot. At the end of the day, what you really want to do is pool all your data, right? And that's effectively what you would do if all the studies were generated by the same process.

[1:14] You'd effectively pool your data: the average effect you'd get would just be the mean of your estimated effects, weighting by sample size. But that's the same as pooling all the data. So that's a kind of idealized case.
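As a quick sketch of this idealized case (with made-up numbers, not the paper's data), the sample-size-weighted mean of the study estimates matches what you would get by pooling all the raw data:

```python
import numpy as np

# Hypothetical estimates and sample sizes for three studies drawn from
# the same process (illustrative numbers only).
beta_hat = np.array([0.12, 0.08, 0.15])
n = np.array([50, 100, 75])

# Weighting each estimate by its sample size recovers what you would
# get by pooling all the underlying data and estimating once.
pooled = np.sum(n * beta_hat) / np.sum(n)
```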

[1:45] A more general and more realistic case is where there is a different degree of variability in the estimates for some inherent reason. Maybe the measures differ from one data series to another. Maybe the populations are different; there's just more variability in outcomes. Or maybe you included different covariates, and that's going to affect how precise your estimates are; you have access to different covariates. So now we're going to maintain the assumption that beta is the same, but the data is being generated in a slightly different way. We're again going to be combining estimates in the meta-analysis, these different beta hats across studies, with some weights omega.

[2:27] And it's straightforward to show that the choice of omega which minimizes the variance of the resulting estimator, this pooled beta tilde, is precision weights, that is, inverse-variance weights. The more precise the estimate is, the more weight it's going to get; the less precise the estimate is, the less weight it's going to get. And this is just the analog of the simple case where, when you pool data, studies with lots of observations get more weight than studies with few observations. This generalizes to precision weighting.
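A minimal sketch of precision weighting, with hypothetical estimates and standard errors; the weight on each study is the inverse of its sampling variance, which is the choice that minimizes the variance of the pooled estimator:

```python
import numpy as np

# Hypothetical study estimates and standard errors (illustrative only).
beta_hat = np.array([0.10, 0.25, 0.05])
se = np.array([0.02, 0.10, 0.04])

# Inverse-variance (precision) weights, normalized to sum to one.
omega = (1.0 / se**2) / np.sum(1.0 / se**2)

beta_tilde = np.sum(omega * beta_hat)          # pooled estimate
se_tilde = np.sqrt(1.0 / np.sum(1.0 / se**2))  # its standard error
```

When all studies share the same sigma, 1/se² is proportional to the sample size, so this reduces to the sample-size weighting of the idealized case.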

[3:02] Another approach to meta-analysis goes beyond the mean, and this is actually what a lot of meta-analyses do. So imagine that instead of just having a beta hat, there's some range of possible estimates. My best guess, my estimate from this individual study, is here, but it's a pretty imprecisely estimated effect, so there's also some possibility that the true effect lies way out over here.

Skip to 3 minutes and 35 secondsSo each of these actually plots are distributions of predicted effects from different studies.

[3:47] One thing that folks do in meta-analysis is combine these effects and see where the mass lies across the different studies. In this case, this is the data that goes into the interpersonal violence analysis. There are a lot of studies with small positive estimates; these are really precisely estimated studies, so the range of estimates for them is very narrow. There are others with larger effects that are quite imprecisely estimated, out here. Just like you can weight and then sum up the beta hats, you can weight and sum up these distributions to come up with a pooled distribution of effects. An even more general case allows the betas to differ.
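One way to sketch this pooled-distribution idea (a simplified illustration, not the paper's exact procedure): treat each study's estimate as a normal density centered at its beta hat with spread equal to its standard error, and average the densities with precision weights:

```python
import numpy as np

def pooled_density(x, beta_hat, se, omega):
    """Weighted mixture of each study's normal sampling density,
    evaluated at the points in x."""
    dens = np.exp(-0.5 * ((x[:, None] - beta_hat) / se) ** 2) \
           / (se * np.sqrt(2.0 * np.pi))
    return dens @ (omega / np.sum(omega))

# Made-up studies: one precise with a small effect, one imprecise
# with a larger effect.
beta_hat = np.array([0.05, 0.40])
se = np.array([0.02, 0.20])
omega = 1.0 / se**2  # precision weights

x = np.linspace(-1.0, 2.0, 2001)
f = pooled_density(x, beta_hat, se, omega)
```

Because the precise study gets nearly all the weight, the pooled distribution concentrates its mass near that study's estimate, with a thin tail from the imprecise one.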

[4:37] So thinking about what I've shown you before: if there's the same beta, why is there variation across studies? If they have the same beta, it must be due to sampling variation. You have a limited sample, you have a limited sample, I have a limited sample, and our betas differ just because of sampling variation, random variation. If the betas were the same, then as all three of our studies grew in size, we should all be getting closer and closer to the true beta; all of our estimates should be converging.

[5:10] So the fact that there are literatures where we have really precise estimates of effects that differ from each other suggests it isn't just sampling variation. How much of the difference between our studies is due to sampling variation, pure noise, like the disturbance term? And how much is inherent differences in what we're estimating? Because you did rainfall, I did temperature; you looked in India, I looked in Africa. Maybe these effects are just different. So imagine that instead of assuming there's one beta, the beta sub j's are distributed with some mean mu. We still care about mu; this is like the average effect.

[5:44] As long as we buy that there is some commonality here, mu is the average effect across these studies, even if they have some differences. And tau is what governs the variability across the beta sub j's. So this is, you know, how much variance there is in your random effect, how meaningful the random effect is. In this kind of approach, which has a hierarchical structure, with this mean mu and some variability tau across studies, these are called hyperparameters. And it's kind of neat to carry out this kind of approach, to think of things in this light.

[6:27] Because if we can quantify tau, we get a sense of how much variability there is; we can quantify variability across studies. We can quantify heterogeneity. So instead of just saying, "Oh, there's too much heterogeneity to pool things," I can say, "Well, this is exactly how much heterogeneity there is across studies." And in the estimation, I think the intuition is that as studies become more and more precise, you're less and less able to explain differences through sampling variation, and more and more of those differences get loaded onto tau.

[6:57] Whereas if you have lots of really imprecise estimates and small samples and whatnot, then you can account for a lot of the differences with sampling variation, and there's really not much role for this sort of random effect, this study effect.
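That intuition can be sketched with a standard moment-based random-effects estimator of tau squared, the DerSimonian-Laird estimator; this is an illustration with made-up numbers, not the Bayesian hierarchical model the paper actually uses:

```python
import numpy as np

# Hypothetical study estimates and standard errors.
beta_hat = np.array([0.10, 0.25, 0.05, 0.18])
se = np.array([0.02, 0.10, 0.04, 0.06])

w = 1.0 / se**2
beta_fe = np.sum(w * beta_hat) / np.sum(w)  # fixed-effect pooled mean

# Cochran's Q measures how much the studies disagree relative to their
# sampling variation; disagreement beyond its expectation (k - 1) gets
# attributed to between-study variance tau^2.
Q = np.sum(w * (beta_hat - beta_fe) ** 2)
k = len(beta_hat)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects weights flatten toward equality as tau^2 grows;
# mu_hat estimates the mean effect mu across studies.
w_re = 1.0 / (se**2 + tau2)
mu_hat = np.sum(w_re * beta_hat) / np.sum(w_re)
```

Very precise studies that still disagree push Q well above k − 1, loading the differences onto tau², exactly the intuition in the transcript.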

[7:13] Here are some of the meta-analysis results. These are the intergroup conflict results: civil war, rioting, insurgency, civil conflict, all those kinds of things. Red is temperature; these are the temperature studies. The blue ones are rainfall shortfalls, and the green ones are rainfall deviations. We came up with this amazing color coding; this was all Sol, I think. It's incredible. So let me just explain what's going on. Each of these is the estimate from a different study.

[7:46] The Y axis is the change in conflict, in percent, per one standard deviation change in climate; these are normalized effects. We have the zero line here.

[7:56] You can see almost all of these estimates are above zero; all the red ones are above zero. That was the temperature result from before, and the whiskers here are the 95 percent confidence intervals. Some of these estimates are really precise. This Hidalgo et al. study on land invasions in Brazil has tons of local data, lots of variability in weather, and really precise estimates. Some of these are really imprecise: the one all the way on the left is a conflict study with confidence intervals of 40 percent or something like that. So there's a lot of variability there.

[8:38] And we present a couple of different ways to understand these results. The first thing we present is the median effect, which is this dashed line here: across all these studies, what is the percent change for a one standard deviation shock in climate? That's about 13 percent. That, my friends, is a very big effect. A two standard deviation climate shock happens once every few years, once every five or eight years, whatever it is. So this is a big increase in conflict risk. We also have the mean estimate here. The mean estimate comes from the meta-analysis, the precision-weighted (inverse-variance) meta-analysis.

[9:20] Like the straight-up standard meta-analysis. And those results are presented over here in solid lines. For the intergroup studies we don't have the bimodality we had in the interpersonal violence studies. This white circle here is the average effect from the meta-analysis with the resulting 95 percent confidence interval. So you can see, taking these couple dozen studies together, there's an average effect of 11 percent per one standard deviation shock, and it's very significant; this is far away from zero. The dashed line here is the Bayesian hierarchical approach allowing for heterogeneity across studies, and it yields a very similar relationship.

[10:09] Then we just looked at the temperature studies. One of the potential critiques of this approach is, you know, that you're mixing temperature, rainfall, all kinds of things. Temperature is really easy to measure, and a lot of the studies have temperature. From the point of view of future climate change, we actually understand the future consequences for temperature better than for many other climatic variables. Almost all the climate models predict temperatures will increase two or three degrees Celsius at least, whereas the effects on, say, rainfall patterns are much less certain: in some places rainfall will increase, in some it will decrease, in some there will be more variability. So in thinking about the future, temperature is really natural to focus on.

[10:47] We did the same thing for the temperature studies, and here we get slightly larger effects, also significant and significantly different from zero. These are the interpersonal violence studies, and you can see immediately why there is this bimodality. There are a few studies here that are U.S. crime studies using decades' worth of data at the county level, for 3,000 counties in the U.S. They have these incredibly precisely estimated effects, and the effects are somewhat different from each other. So the average effect in the simple meta-analysis lies in between these two modes. These are again the distribution-weighted estimates, and this is the simple average from the meta-analysis.

[11:46] Then there's a bunch of studies from low-income countries that have much less precisely estimated effects. Let me just get to publication bias. We're going to regress the log of the t-stat on the log of the degrees of freedom and try to get a sense of what that slope looks like; that slope should be one. If the slope is zero or negative, then we're really concerned. The overall slope here, if we run the regression of log t-stat on log degrees of freedom, is 0.32. It's positive, and it's significant, but it's not one. And it's not one, in part, because again there is some clustering of studies; you can kind of visually see some clustering of studies at a t-stat of two.

[12:29] So there's probably some publication bias in this literature.
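To see why the slope works as a diagnostic, here is a simulation with no publication bias (a made-up data-generating process, not the paper's data). Because the t-statistic scales with the square root of the sample size, regressing log |t| on the log of the square root of the degrees of freedom should give a slope near one; selective publication of significant results flattens that slope:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 studies of a common true effect, published regardless
# of statistical significance.
beta, sigma = 0.5, 1.0
df = rng.integers(30, 3000, size=200)
se = sigma / np.sqrt(df)
beta_hat = beta + rng.normal(0.0, se)
t = np.abs(beta_hat / se)

# Regress log |t| on log sqrt(df); with no selection on significance,
# the slope should be close to one because t grows like sqrt(n).
x = np.log(np.sqrt(df))
slope, intercept = np.polyfit(x, np.log(t), 1)
```

If instead only studies with t above 2 made it into the sample, small studies would need lucky draws to appear, and the fitted slope would fall well below one, which is the pattern the transcript flags.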

[12:34] Now, figuring out how close we need to get to one to convince ourselves there's no publication bias, I don't know. But the other thing that comes out is that in some of the rainfall studies there's an even weaker slope. I think the triangles are the temperature studies and the circles are the non-temperature studies. In these rainfall studies, the slope is lower, and there's a lot of clustering around two.

[13:02] In the temperature studies, less so. So the slope is steeper for the temperature studies. And you just see lots of well-powered studies with big samples and big t-statistics over here.

# A meta-analysis example: "Quantifying the Influence of Climate on Human Conflict" Part II

Professor Miguel’s discussion of the meta-analysis continues in this video. Here, we’ll dive deeper into the authors’ methods, focusing on how they weighted different effects based on their precision and how they quantified heterogeneity across the studies. We’ll also learn more about the analysis’s results, as well as how the authors determined if there was any publication bias influencing the studies’ effects.

NOTE: This video includes some advanced material regarding the paper’s statistical analysis, so feel free to skip from 1:30 to 7:10 if you’re less comfortable with this kind of material or more interested in the bigger picture implications of the analysis.

Also NOTE: The chart at 7:09 may be too detailed to be read clearly on some devices. If this is your case, you can find a PDF version of the chart in the SEE ALSO section at the bottom of this page.

Read our full paper here: http://www.bitss.org/wp-content/uploads/2014/06/quantifying-the-inefac82uence-of-climate-on-human-conefac82ict1.pdf on BITSS.org.

**Reference**

Hsiang, Solomon M., Marshall Burke, and Edward Miguel. 2013. “Quantifying the Influence of Climate on Human Conflict.” Science 341 (6151): 1235367. doi:10.1126/science.1235367.

© Center for Effective Global Action