A comment on our meta-analysis by Buhaug and colleagues (and our reply)
Though Sol Hsiang, Marshall Burke, and I found remarkable convergence in findings from studies of climate change’s impact on human conflict, agreement was not universal. In “One effect to rule them all? A comment on climate and conflict,” Halvard Buhaug and 23 of his colleagues criticized our meta-analysis, arguing that it “suffer[ed] from shortcomings with respect to sample selection and analytical coherence.” Buhaug et al. pointed out three limitations in our meta-analysis:
- Cross-study independence – Buhaug et al. believe that our assumption of fully independent samples is problematic because there is actually considerable overlap. They claim that uncertainties in the climate effect are much larger than we reported.
- Causal homogeneity – They state that our sample studies cover a wide range of phenomena for climate and conflict, yet we “assume the same underlying climate effect across heterogeneous studies” in order for the “meta-analysis to be meaningful.” They find this assumption to be unreasonable.
- Sample representativeness – They argue that our sample is not representative and that our selection criteria were chosen to support our hypothesis.
Attempting to address some of these issues, Buhaug and his colleagues replicated our analysis and concluded that there was “no evidence of a convergence of findings on climate variability and civil conflict” and that any relationship was “statistically indistinguishable from zero.”
Hsiang, Burke, and I wrote a reply to this titled “Reconciling climate-conflict meta-analyses: reply to Buhaug et al.” In it, we asserted that Buhaug and his colleagues’ claim was false. It “misrepresent[ed] findings in the literature, ma[de] statistical errors, misclassifie[d] multiple studies, ma[de] coding errors, and suppresse[d] the display of results…consistent with our original analysis.” We responded to each of Buhaug et al.’s claims:
- If the results of related studies were actually correlated and not independent as we claimed, the “statistical uncertainty of [our] result would be understated, theoretically causing [our] statistically significant finding to be rendered insignificant.” This issue was already addressed in our original article. And taking it into consideration, as well as with an even higher correlation coefficient, our result remains “statistically significant with 95% confidence.”
- The claims of causal homogeneity are false. The technique we used – the Bayesian random effects approach – “explicitly assumes that effects across studies are not homogeneous even within the same class of conflict.” In fact, the approach allows “different conflicts in different regions to respond differently to climate variables.”
- Instead of a disregard for “previously investigated climate-conflict associations,” we omitted only exact replications, to avoid double counting. But studies that “revisited prior relationships were included in the review and were used to interpret findings in the prior study.”
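The independence point above can be made concrete with a toy calculation: if study estimates share a common correlation, the standard error of their pooled mean widens, but a strong result can remain significant. The sketch below uses invented effect sizes and standard errors, not the values from either paper, purely to illustrate the mechanism.

```python
import numpy as np

# Hypothetical study estimates and standard errors (invented for illustration).
effects = np.array([0.10, 0.15, 0.08, 0.12, 0.20, 0.09])
ses = np.array([0.05, 0.06, 0.04, 0.05, 0.08, 0.05])
mean = effects.mean()

def pooled_se(ses, rho):
    """SE of the simple mean when every pair of estimates has correlation rho."""
    cov = np.outer(ses, ses) * rho   # off-diagonal covariances
    np.fill_diagonal(cov, ses**2)    # variances on the diagonal
    return np.sqrt(cov.sum()) / len(ses)

for rho in (0.0, 0.3, 0.6):
    se = pooled_se(ses, rho)
    print(f"rho={rho:.1f}  mean={mean:.3f}  se={se:.3f}  z={mean / se:.2f}")
```

In this toy example the standard error grows as the assumed correlation rises, yet the z-score stays above the conventional 1.96 threshold, which is the shape of the argument in our reply: accounting for cross-study correlation widens uncertainty without erasing significance.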
Regarding the accusation that our study selection process was biased, we implemented a “stress test” that simulated the inappropriate omission of results. We found that, to eliminate our result, we would have needed to omit 80% of the studies in the literature, an implausible degree of bias.
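The logic of such a stress test can be sketched with simulated data: build a hypothetical literature with a genuine effect, then let a maximally biased reviewer discard the most favorable studies one at a time and record how many omissions it takes before the pooled result loses significance. All numbers below are invented; this is not the actual procedure or data from our reply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical literature of 50 studies around a true effect of 0.1
# (invented numbers, for illustration only).
n_studies = 50
ses = rng.uniform(0.03, 0.10, n_studies)
effects = rng.normal(0.1, ses)

def pooled_z(effects, ses):
    """z-score of a precision-weighted (fixed-effect) pooled estimate."""
    w = 1.0 / ses**2
    mean = np.sum(w * effects) / np.sum(w)
    return mean / np.sqrt(1.0 / np.sum(w))

# Simulate a maximally biased reviewer: repeatedly omit the study with the
# largest effect and record when the pooled result stops being significant.
order = np.argsort(effects)[::-1]   # indices, largest effect first
k_lost = None
for k in range(1, n_studies):
    keep = order[k:]                # omit the k largest effects
    if pooled_z(effects[keep], ses[keep]) < 1.96:
        k_lost = k
        break

print(f"significance lost only after omitting {k_lost} of {n_studies} studies")
```

When the underlying effect is real, a large share of the most favorable studies must be discarded before the pooled estimate becomes insignificant, which is why requiring 80% omission makes deliberate bias an implausible explanation.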
Finally, we point out the key errors in Buhaug et al.’s meta-analysis:
- They incorrectly used the range of the raw data as a measure of uncertainty for the estimated mean of the data, rather than the standard error of that mean.
- They altered the code to analyze only the lagged effect of climate on conflict rather than the contemporaneous effect, which is the focus of both the existing literature and our original analysis.
- They changed the original code such that studies focusing on the effects of drought, or on variables that explicitly include information about temperature, were no longer included in the temperature meta-analysis, causing the average effect of temperature to appear smaller. This introduced inconsistency into our otherwise internally consistent approach.
- A coding error introduced by Buhaug et al. caused them to systematically drop the large temperature effect reported in one study by O’Loughlin et al. from the temperature meta-analysis, reducing the estimated average effect of temperature on conflict.
- They altered the original meta-analysis code in our paper in a way that prevented the display of the mean effect and its confidence interval in one of their figures (see the gray panels showing the estimated distribution of raw data on the right-hand side of their Fig. 1).
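The first error in the list above is worth illustrating, since it conflates two very different quantities. The range describes how individual observations scatter, while the standard error describes how precisely the mean is estimated; the sketch below uses made-up data to show how far apart the two can be.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "raw data": 100 observations around a true mean of 0.1
# (invented numbers, for illustration only).
x = rng.normal(0.1, 0.5, 100)

data_range = x.max() - x.min()             # spread of individual observations
se_mean = x.std(ddof=1) / np.sqrt(len(x))  # uncertainty of the estimated mean

print(f"range of raw data:      {data_range:.3f}")
print(f"std. error of the mean: {se_mean:.3f}")
# The standard error shrinks with sample size (dividing by sqrt(n)),
# while the range only grows; using the range as the uncertainty of the
# mean makes a precisely estimated effect look wildly uncertain.
```

Treating the range as the uncertainty of the mean therefore makes any pooled estimate look statistically indistinguishable from zero, regardless of how precisely it is actually estimated.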
We conclude that all of these errors caused Buhaug et al.’s meta-analytic estimates to appear closer to zero and less statistically significant than our original results.
Regardless of any of the comment’s authors’ mistakes or misconceptions, this kind of comment and reply process is a critical part of social science research. Researchers should feel comfortable attempting replications and confronting published authors about seemingly inappropriate assumptions or statistical errors. It is just one way the scientific community can work toward making our research and findings more credible.
What do you think of Buhaug’s comments? What can you say about our response?
If you want to dive deeper into the material, you can read the entirety of each paper by clicking on the links in the SEE ALSO section at the bottom of this page.
Buhaug, H., J. Nordkvelle, T. Bernauer, T. Böhmelt, M. Brzoska, J. W. Busby, A. Ciccone, et al. 2014. “One Effect to Rule Them All? A Comment on Climate and Conflict.” Climatic Change 127 (3–4): 391–97. doi:10.1007/s10584-014-1266-1.
Hsiang, Solomon M., Marshall Burke, and Edward Miguel. 2014. “Reconciling Climate-Conflict Meta-Analyses: Reply to Buhaug et al.” Climatic Change 127 (3–4): 399–405. doi:10.1007/s10584-014-1276-z.
© Center for Effective Global Action