
I want to get into the critique; we've got seven or eight minutes left. There's been this critique, and as I mentioned, a number of the authors on the critique are actually folks whose studies we end up critiquing and reanalyzing. And then there are others who I think just joined in because they had a different perspective. They claim maybe there's lots of publication bias. I'll just show you that maybe there's some, but it's not like the slope is zero; there's clearly some upward-sloping relationship. They were also critical of the fact that we didn't consistently focus on lagged effects. Basically, in our meta-analysis we focused on whatever specification the original authors did. Sometimes they focused on contemporaneous effects, sometimes lagged.

Sometimes they focused on lagged rainfall because the study covered an area with a strong agricultural cycle, so they thought lagged rainfall was most important. There's also a bit of an economist-versus-political-scientist divide here, where I think some of the political scientists leading this effort, including Buhaug himself, feel like we're combining things that are too different to be combined. That's a fundamental point for them, and they keep coming back to it: no, you just can't combine civil war in Africa with communal riots in India. So we actually wrote up a response, and I just want to go through it quickly.

One thing that we did, and we actually did this before I was even fully familiar with Rosenthal and the file drawer problem, was take the most negative estimate among all the existing estimates and ask: if you add one copy of it, what does it do to our estimated effect? If you add two, three, four? So suppose we missed half the literature: instead of there being, say, 25 or 30 studies, there are really 50 or 60, we missed half of them, and the missing ones are all as negative as the most negative study in the whole literature. That would take us to about here on the graph, so the resulting meta-analysis estimate would still be really positive and significant.

Using the most negative study, you would need something like 80 missing studies, basically three or four missing studies for every existing study, before the effect becomes non-significant. And to knock it all the way down to zero, of course, you'd need a ton. So our idea here wasn't anything more than to say, “Even if there are a dozen missing studies, we'll be okay.” Now, we don't know how many missing studies there are, so this can't really be definitive evidence. Maybe there really are 50 missing studies; maybe 50 people did analyses they never published. We don't know.
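The logic of that stress test is easy to reproduce. Below is a minimal sketch, assuming a simple precision-weighted (fixed-effect) pooled estimate; all effect sizes and standard errors are simulated for illustration and are not the values from the actual meta-analysis.

```python
# A minimal sketch of the "file drawer" stress test described above, using a
# simple precision-weighted (fixed-effect) pooled estimate. All effect sizes
# and standard errors are simulated for illustration; they are NOT the values
# from the actual meta-analysis.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical published literature: 25 standardized effects with their SEs.
betas = rng.normal(loc=0.10, scale=0.05, size=25)
ses = rng.uniform(0.02, 0.08, size=25)

# Worst case: clone the most negative published estimate as the "missing" study.
worst = betas.min()
worst_se = ses[betas.argmin()]

def pooled(b, s):
    """Precision-weighted mean and its standard error."""
    w = 1.0 / s**2
    return np.sum(w * b) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# Append 0, 10, ..., 100 copies of the worst study and watch the z-statistic.
for n_missing in range(0, 101, 10):
    b = np.concatenate([betas, np.full(n_missing, worst)])
    s = np.concatenate([ses, np.full(n_missing, worst_se)])
    mu, se = pooled(b, s)
    print(f"{n_missing:3d} hypothetical missing studies: mu = {mu:+.3f}, z = {mu / se:+.2f}")
```

The pooled estimate only drifts toward the worst-case value as copies are appended, which makes it easy to read off how many hypothetical missing studies it takes to lose significance.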

They do another thing, and this is also where meta-analysis can be tricky. They said, “Okay, now we're going to redo your meta-analysis, but we're going to ignore all the current-year and current-month estimates. We're only going to look at lags.” We said, “Look, we can look at lags.” They also said, “You know, maybe certain authors are cherry-picking temperature versus precipitation.” So we said, “Look, we're going to do what we did before for specifications, and we're going to do it for explanatory variables.” Everywhere we can, we're going to look at temperature and precipitation, current and lagged, and create the combined meta-analysis result. So we do that, and we say, “Look, the contemporaneous temperature effect really is much bigger than the lagged effect.”

So there's a reason for doing it that way. But there's also a combined effect; maybe there is some persistence. Same thing for precipitation. This is intergroup conflict on the left and interpersonal conflict on the right. There really is a contemporaneous effect for these things, but if you want to look at the combined effect, you can, and we do that. So what we present here is a variety of specifications that different people could prefer. We've gone down this road in the meta-analysis of running lots of meta-analyses under different conditions. Sometimes it's the specification the original author preferred; this version has the largest sample. Sometimes we break it down by temperature and precipitation where we can.

And we produce the relevant mu's and tau's for all these cases, so if for whatever reason someone really cares about one of these estimates, they can go and find it. That was the goal. So again, maybe Buhaug et al. have a point on one dimension in saying these studies are kind of different; there is a lot of variability in estimates across studies. But instead of rejecting a whole literature, or the whole approach, our answer to them is to say, “Well, let's study that heterogeneity.” Let's try to understand where these differences in estimates are coming from.
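For readers who want the mechanics behind those mu's and tau's: the paper uses a Bayesian random-effects model, but the same two quantities, the average effect mu and the between-study spread tau, can be estimated with the simpler DerSimonian-Laird method-of-moments approach sketched below. The effect sizes are made up for illustration, not the published estimates.

```python
# A minimal sketch of estimating mu (average effect) and tau (between-study
# spread). The paper uses a Bayesian random-effects model; this uses the
# simpler DerSimonian-Laird method-of-moments estimator, which targets the
# same two quantities. Effect sizes below are made up for illustration.
import numpy as np

def dersimonian_laird(betas, ses):
    """Return (mu_hat, se_of_mu, tau_hat) for a random-effects meta-analysis."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    k = len(betas)
    w = 1.0 / ses**2                          # fixed-effect weights
    mu_fe = np.sum(w * betas) / np.sum(w)     # fixed-effect pooled mean
    Q = np.sum(w * (betas - mu_fe) ** 2)      # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)        # between-study variance, floored at 0
    w_re = 1.0 / (ses**2 + tau2)              # random-effects weights
    mu = np.sum(w_re * betas) / np.sum(w_re)
    return mu, np.sqrt(1.0 / np.sum(w_re)), np.sqrt(tau2)

# Illustrative study estimates only, not the published numbers.
mu, se_mu, tau = dersimonian_laird(
    betas=[0.12, 0.30, -0.05, 0.22, 0.08, 0.15],
    ses=[0.05, 0.10, 0.07, 0.06, 0.04, 0.09],
)
print(f"mu = {mu:.3f} (se = {se_mu:.3f}), tau = {tau:.3f}")
```

Running this once per specification (author-preferred, lags only, temperature versus precipitation, and so on) yields exactly the kind of table of mu's and tau's described above.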

That's a really interesting social science question, and one that wouldn't have been clarified if we hadn't pooled the literature to begin with. There's another argument they make, a statistical argument, and this one made us scratch our heads. Again, when you're working in an interdisciplinary area with lots of folks from lots of different backgrounds making lots of different arguments, it can get confusing. Their claim was that the meta-analysis approach should be rejected outright, because they don't think the studies can be compared.

Their position was, “What we think is appropriate is to look across the full distribution of the beta hats, the 95 percent distribution of the beta hats.” They would say these results are so different, there's so much heterogeneity in the estimates, that if you really want to get a sense of whether there's a meaningful effect, you should look at the distribution of the beta hats and not pool the data at all. It's a complete rejection of there being a mu. Remember there was a mu and a tau? They're all about tau; all they care about is the spread across estimates. But they're basically imposing no common effect: ignore the meta-analysis.

What matters, in their view, is the distribution of the beta hats. Now this is a very problematic approach, because it's data invariant. If you get more and more studies drawn from the same distribution, say with an average effect over here, you could have 20 times more studies, but with the same broad spread of estimates their confidence interval would stay exactly the same. This goes against what we usually think of as the standard statistical perspective, where more data should mean more precision.
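A small simulation makes the data-invariance problem concrete. Here the study estimates are drawn from one fixed distribution and only the number of studies grows: the percentile band of the raw beta hats stays about the same width, while the uncertainty about the mean effect shrinks, as standard statistics says it should. All numbers are simulated, not taken from the literature.

```python
# A minimal simulation of the data-invariance problem. Study estimates are
# drawn from one fixed distribution; only the number of studies k changes.
# The 95% percentile band of the raw beta hats (their proposed interval)
# stays about the same width, while the standard error of the mean shrinks
# like 1/sqrt(k). All numbers are simulated, not taken from the literature.
import numpy as np

rng = np.random.default_rng(1)

for k in [25, 100, 500, 10_000]:
    betas = rng.normal(loc=0.10, scale=0.20, size=k)  # same spread, more studies
    lo, hi = np.percentile(betas, [2.5, 97.5])        # band of raw estimates
    se_mean = betas.std(ddof=1) / np.sqrt(k)          # uncertainty about the mean
    print(f"k = {k:6d}: beta-hat band [{lo:+.2f}, {hi:+.2f}], se of mean = {se_mean:.4f}")
```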

A meta-analysis example: "Quantifying the Influence of Climate on Human Conflict" Part III

Our published meta-analysis was met with critique, both from authors of the studies included in the meta-analysis and from other interested researchers. Their critiques pointed to publication bias and a lack of focus on lagged effects. They also took issue with the meta-analysis combining effects from what they considered to be vastly different studies in different contexts. In this video, I discuss our response to the critique.


Though Sol Hsiang, Marshall Burke, and I found remarkable convergence in findings from studies of climate change’s impact on human conflict, agreement was not universal. In “One effect to rule them all? A comment on climate and conflict,” Halvard Buhaug and 23 of his colleagues criticized our meta-analysis, arguing that it “suffer[ed] from shortcomings with respect to sample selection and analytical coherence.” Buhaug et al. pointed out three limitations to our meta-analysis:

  1. Cross-study independence – Buhaug et al. believe that our assumption of fully independent samples is problematic because there is actually considerable overlap in the data underlying the studies. They claim that the uncertainties in the climate effect are therefore much larger than we reported.
  2. Causal homogeneity – They state that our sample of studies covers a wide range of climate and conflict phenomena, yet we “assume the same underlying climate effect across heterogeneous studies” in order for the “meta-analysis to be meaningful.” They find this assumption to be unreasonable.
  3. Sample representativeness – They don’t think we chose a sample that is representative and claim that we use selection criteria to support our hypothesis.

Attempting to address some of these issues, Buhaug and his colleagues replicated the analysis and concluded that there was “no evidence of a convergence of findings on climate variability and civil conflict” and that any relationship was “statistically indistinguishable from zero.”

Hsiang, Burke, and I wrote a reply to this titled “Reconciling climate-conflict meta-analyses: reply to Buhaug et al.” In it, we asserted that Buhaug and his colleagues’ claim was false. It “misrepresent[ed] findings in the literature, ma[de] statistical errors, misclassifie[d] multiple studies, ma[de] coding errors, and suppresse[d] the display of results…consistent with our original analysis.” We responded to each of Buhaug et al.’s claims:

  1. If the results of related studies were actually correlated rather than independent, the “statistical uncertainty of [our] result would be understated, theoretically causing [our] statistically significant finding to be rendered insignificant.” This issue was already addressed in our original article, and taking it into consideration, even with a higher correlation coefficient, our result remains “statistically significant with 95% confidence.” (A sketch of the variance inflation involved appears after this list.)
  2. The claims of causal homogeneity are false. The technique we used – the Bayesian random effects approach – “explicitly assumes that effects across studies are not homogeneous even within the same class of conflict.” In fact, the approach allows “different conflicts in different regions to respond differently to climate variables.”
  3. Instead of a disregard for “previously investigated climate-conflict associations,” we omitted only exact replications, to avoid double counting. But studies that “revisited prior relationships were included in the review and were used to interpret findings in the prior study.”
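As a rough illustration of the dependence issue in point 1: if k equally precise study estimates share a pairwise correlation rho, the variance of their equal-weighted mean is inflated by a factor of 1 + (k - 1) * rho relative to the independent case. The sketch below uses illustrative placeholder values for k, the pooled effect, and the per-study standard error, not the values from either paper.

```python
# A rough illustration of cross-study dependence: with k equally precise study
# estimates sharing a pairwise correlation rho, the variance of their
# equal-weighted mean is inflated by the factor 1 + (k - 1) * rho relative to
# the independent case. k, mu_hat, and sigma are illustrative placeholders.
import numpy as np

k = 25         # number of studies (illustrative)
mu_hat = 0.10  # pooled effect estimate (illustrative)
sigma = 0.10   # typical per-study standard error (illustrative)

for rho in [0.0, 0.1, 0.3, 0.5]:
    var_mean = (sigma**2 / k) * (1 + (k - 1) * rho)
    z = mu_hat / np.sqrt(var_mean)
    print(f"rho = {rho:.1f}: se of pooled mean = {np.sqrt(var_mean):.3f}, z = {z:+.2f}")
```

The point is only the mechanics: allowing for correlation widens the pooled confidence interval, and the reply reports that the finding stays statistically significant even after this correction.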

Regarding the accusation that our study selection process was biased, we implemented a “stress test,” simulating the inappropriate omission of results. We found that we would have needed to miss roughly 80 studies (three to four unpublished studies for every published one), all as negative as the most negative published estimate, before the finding became non-significant: an implausible scenario.

Finally, we point out the key errors in Buhaug et al.’s meta-analysis:

  1. They incorrectly used the range of raw data as a measure of uncertainty for the estimated mean of the data.
  2. They altered the code to analyze only the lagged effect of climate on conflict rather than the contemporaneous effect, which is the focus of both the existing literature and our original analysis.
  3. They changed the original code such that studies focusing on the effects of drought, or on variables that explicitly include information about temperature, were no longer included in the temperature meta-analysis, causing the average effect of temperature to appear smaller. This introduced inconsistency into our otherwise internally consistent approach.
  4. A coding error introduced by Buhaug et al. causes them to systematically drop the large temperature effect reported in one study by O’Loughlin et al. in the temperature meta-analysis. This error reduces the estimated average effect of temperature on conflict.
  5. They altered the original meta-analysis code in our paper in a way that prevented the display of the mean effect and its confidence interval in one of their figures (see the gray panels showing the estimated distribution of raw data on the right-hand side of Fig. 1).

Fig. 1: “Modern empirical estimates for the effect of climate variability on civil conflict,” from Buhaug et al. (2014).

We conclude that all of these errors cause their meta-analysis estimates to appear closer to zero and less statistically significant than the results of our original analysis.

Regardless of any of the comment authors' mistakes or misconceptions, this kind of comment-and-reply process is a critical part of social science research. Researchers should feel comfortable attempting replications and confronting published authors about seemingly inappropriate assumptions or statistical errors. It is just one way the scientific community can work toward making our research and findings more credible.

What do you think of Buhaug’s comments? What about our response?

If you want to dive deeper into the material, you can read the entirety of each paper by clicking on the links in the SEE ALSO section at the bottom of this page.


References

Buhaug, H., J. Nordkvelle, T. Bernauer, T. Böhmelt, M. Brzoska, J. W. Busby, A. Ciccone, et al. 2014. “One Effect to Rule Them All? A Comment on Climate and Conflict.” Climatic Change 127 (3–4): 391–97. doi:10.1007/s10584-014-1266-1.

Hsiang, S. M., M. Burke, and E. Miguel. 2014. “Reconciling Climate-Conflict Meta-Analyses: Reply to Buhaug et al.” Climatic Change 127 (3–4): 399–405. doi:10.1007/s10584-014-1276-z.


This video is from the free online course:

Transparent and Open Social Science Research

University of California, Berkeley
