A meta-analysis example: "Quantifying the Influence of Climate on Human Conflict" Part III

The published meta-analysis was met with some critique, both from authors of the studies included in the meta-analysis and from other interested researchers. The critique pointed to possible publication bias and to a lack of focus on lagged effects. The critics also took issue with the meta-analysis combining effects from what they considered to be vastly different studies with different contexts. In this video, Professor Miguel discusses the authors' response to this critique.

[0:01] I want to get into the critique; we've got seven or eight minutes left. There's been this critique, and as I mentioned, a number of the authors on the critique are actually folks whose studies we end up critiquing and reanalyzing. And then there are others who, I think, just joined in because they had a different perspective. They claim there may be lots of publication bias. I'll show you that maybe there's some, but it's not as if the slope is zero; there's clearly some upward-sloping relationship. They were also critical of the fact that we didn't consistently focus on lagged effects. Basically, in our meta-analysis we focused on whatever specification the original authors used: sometimes they focused on contemporaneous effects, sometimes on lagged effects.

[0:37] Sometimes they focused on lagged rainfall because the setting had a strong agricultural cycle, so they thought lagged rainfall was most important. There's also a bit of an economist versus political scientist divide here. I think some of the political scientists leading this effort, including Buhaug himself, feel that we're combining things that are too different to be combined. That's their fundamental point, and they keep coming back to it: no, you just can't combine civil war in Africa with communal riots in India. So we actually wrote up a response, and I want to go through it quickly.

[1:12] One thing we did, and we actually did this before I was even fully familiar with Rosenthal and the file drawer problem, was take the most negative estimate among all the existing estimates and ask: if you add one copy of it, what does it do to our estimated effect? If you add two, three, four? So suppose we missed half the literature: instead of there being, say, 25 or 30 studies, there are really 50 or 60, and every missing study is as negative as the most negative study in the whole literature. That would take us to about here on the graph, and the resulting meta-analysis estimate would still be clearly positive and significant.

[1:52] Using the most negative study, you would need something like 80 missing studies, roughly three or four missing studies for every existing study, before the effect becomes non-significant. And to knock it all the way down to zero, of course, you'd need far more. Our idea here wasn't anything more than to say, "Even if there are a dozen missing studies, we'll be okay." Now, we don't know how many missing studies there are, so this can't really be definitive evidence. Maybe there really are 50 missing studies; maybe 50 people did analyses they never published. We don't know.
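To make that "most negative study" exercise concrete, here is a minimal sketch of the kind of calculation involved. It is a sketch under assumed numbers: the effect sizes, standard errors, and study count below are hypothetical placeholders, not the paper's actual estimates, and simple precision-weighted (fixed-effect) pooling is just one way the pooled estimate could be computed.

```python
# Minimal sketch of the "most negative study" robustness check described above.
# All numbers are hypothetical placeholders, not the paper's actual estimates.
import numpy as np

rng = np.random.default_rng(0)
n_studies = 27                               # hypothetical size of the literature
betas = rng.normal(0.10, 0.08, n_studies)    # hypothetical effect estimates
ses = rng.uniform(0.02, 0.06, n_studies)     # hypothetical standard errors

def pooled(betas, ses):
    """Precision-weighted (fixed-effect) pooled estimate and its standard error."""
    betas, ses = np.asarray(betas), np.asarray(ses)
    w = 1.0 / ses**2
    return np.sum(w * betas) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# Repeatedly append copies of the most negative observed estimate (imagined
# "missing" studies) until the pooled effect is no longer significantly positive.
worst = int(np.argmin(betas))
b, s = betas.tolist(), ses.tolist()
mu, se = pooled(b, s)
added = 0
while mu - 1.96 * se > 0 and added < 1000:   # 1000 is just a safety cap
    b.append(betas[worst])
    s.append(ses[worst])
    added += 1
    mu, se = pooled(b, s)

print(f"significance lost after adding {added} copies of the most negative "
      f"study (pooled mu = {mu:.3f}, se = {se:.3f})")
```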

Skip to 2 minutes and 33 secondsmeta-analysis can be tricky. They said, “Okay, now we're going to redo your meta-analysis, but we're going to ignore all the current year and current month estimates. We're only going to look at lags.” We said, “Look, we can look at lags.” They also said, “You know, maybe certain authors are cherry picking temperature versus precipitation.” So we said, “Look, we're going to kind of do what we did before for specification, and we're going to do it for explanatory variables.” Everywhere we can, we're going to look at temperature and precipitation, current and lagged. And create the sort of combined meta-analysis result. So we do that and we say, “Look, the contemporaneous temperature effect really is much bigger than the lagged effect.”

Skip to 3 minutes and 10 secondslike there’s a reason for doing it. But there's also a combined effect, like maybe there is some persistence. Same thing for precipitation. This is the Intergroup Conflict on the left. Same thing for Interpersonal on the right. You know there really is a contemporaneous effect to these things. But if you want to look at the combined effect, you can, and we do that. So what we present here is a variety of specifications that different people could prefer. So we've kind of gone down this road in the Meta-analysis of doing like lots of meta-analyses using different conditions. Sometimes it's what the author preferred. This has the largest sample. Sometimes we break it down for temperature precipitation where we can.

Skip to 3 minutes and 48 secondsAnd we produce the relevant mu’s and tau’s for all these cases. So if for whatever reason someone really cares about one of these estimates, they can go and find it basically. That was the goal. So again this says, maybe Buhaug et al. have a point on one dimension in saying like these studies are kind of different. There is a lot of variability in estimates across studies. But instead of us rejecting a whole literature, or the whole approach, our answer to them is to say, “Well, let's study that heterogeneity.” Let's try to understand where these differences in estimates are coming from.

Skip to 4 minutes and 16 secondsThat's a really interesting social science question, that wouldn't have been clarified if we hadn’t pooled the literature to begin with. There's another argument they make, and this is a statistical argument. This made us scratch our heads. And I think it's just, again, when you're working in an area that's an interdisciplinary area with lots of folks, with lots of different backgrounds floating around making lots of different arguments, it's actually kind of confusing. So their claim was, they reject the meta-analysis approach basically, because they don't think the studies can be compared. So they said, we're going to reject the meta-analysis approach.

Skip to 4 minutes and 51 secondsWhat we think is appropriate is you should look across the full distribution of the beta hats and look at the 95 percent distribution of the beta hats. They would say these results are so different. There's so much heterogeneity in the estimates, that if you really want to get a sense of whether there's a meaningful effect, look at the distribution of the beta hats. Don't pool the data at all. So it's like a complete rejection of there being a mu. Remember there was a mu and tau? Like they're all about tau. All they care about is that spread across estimates. But they're basically imposing no common effect. So, like, we're going to ignore the meta-analysis.

Skip to 5 minutes and 26 secondsWhat we think matters is the sort of distribution of the beta hats. Now this is a very problematic approach, because it's data invariant. So if you get more data and more and more studies with the same distribution, but again, more and more studies that have like an average effect over here. Like you could have 20 times more studies. But with the same broad distribution, they would still have the same confidence interval. So this kind of approach is, just goes against what we usually think of as standard statistical perspective.


This video is from the free online course "Transparent and Open Social Science Research" by the University of California, Berkeley.