This content is taken from the University of California, Berkeley, Center for Effective Global Action (CEGA) & Berkeley Initiative for Transparency in the Social Sciences (BITSS)'s online course, Transparent and Open Social Science Research.

Skip to 0 minutes and 0 seconds Why is there so little replication research? Let's go through Hamermesh's points, plus some others. The main thing he talks about is that replication is not seen as a high-profile, high-status activity in most professions. This is his quote: "First and most obviously, the profession puts a premium on the creativity and the generality of the idea, not on verifying the breadth of its applicability." So if you come up with a new theory, a new model, new data, and you find a new result, you're the hero. And the replicator is sort of living off the dregs of that idea. That's the very negative view in a lot of professions. Some people say this very openly.

Skip to 0 minutes and 37 seconds There are some critics of the push towards replication in psychology, some very prominent scholars who basically say, "The replicators are all kind of lowbrow. They're just shooting down the real scholars," and things like that. That's a terrible attitude, because if there are errors in the original work, they have to be corrected. It's not enough to have a brilliant idea: if it doesn't hold in the data, it's not a real result. There is also something very democratizing, egalitarian, and empowering about anybody, regardless of status in the profession, being able to go and analyze data and find mistakes. No one is above scrutiny.

Skip to 1 minute and 16 seconds It creates accountability in the profession, which is something we should regard as good.

Skip to 1 minute and 22 seconds Okay, what are some other explanations for why there is so little? Again, we've alluded to this a number of times: a lot of these disputes become bitter. Replication is viewed as an attack. "Why are you attacking me personally?" That is not a very productive way to approach it; it doesn't lead to enlightenment. You're not closer to Nirvana if you get really angry at people trying to replicate your work. A lot of people indulge this kind of combative pose on both sides, and I think either way it's wrong. The ideal is to approach things in a scientific way.

Skip to 1 minute and 58 seconds Of course, if you feel like the person you are replicating, or the person replicating you, is being dishonest or misinterpreting things, then people start getting really upset. There is also a real fear of retaliation when younger scholars take on senior scholars with replications: a fear that somehow their papers won't get published, that the senior scholar will later serve as an editor or referee on their work and crush it. Whether that retaliation actually happens, I don't know; the fear may be bigger than the problem. But people are concerned about it, and that, again, stifles discussion.

Skip to 2 minutes and 38 seconds I’ve heard many times people say, “I looked at that data and those results don’t hold,” or “I grabbed that person’s data and that result isn’t robust.” I’ve heard this from faculty. I’ve heard it from grad students. And they’re very reluctant to publish it. They’re like, “What’s the return to me going to be? I’m going to end up in a blood feud with some senior scholar for ten years. Or I can do my own stuff and make my own name in something else.” So even if, from the point of view of science, it would be good to shoot down a result, there is a collective action problem.

Skip to 3 minutes and 8 seconds You are providing a public good by shooting down flawed science, but you don’t capture much of the return. In fact, it could be a negative for you. If you do your own thing instead, you get the returns to doing your own thing. Okay, so why so little replication research? It’s also really tedious and time-consuming to get into the head of another researcher, and to understand their data and all of their analysis. It’s hard enough to do your own original research, where you think through how to do it and you understand your own thought process. And very often, what replication authors think of as errors are just aspects that weren’t well documented initially.

Skip to 3 minutes and 44 seconds Very often when we go back and forth with potential replicating authors, a tremendous amount of time is taken by saying, “Oh, God, my documentation wasn’t good, but no, this is why I’m doing it this way.” And they go, “Okay, that makes sense.” So how can we promote replication? This was another interesting part of the Hamermesh article. He says, and this is the classic economics response to the problem, “We need more incentives for people to do replication. We should commission leading scholars who have credibility in the field to write replication pieces.”

Skip to 4 minutes and 14 seconds So if we think there is an issue with the paper or whatnot, and we don’t want it to be one top scholar against a much less prominent scholar with all the dynamics that that provides – Not that that should be discouraged. That’s fine too. It wouldn’t be a bad thing to commission pieces. Maybe if there is a dispute amongst various scholars or there is a controversial article. What are some other ways? The other thing he says is, “Well, in general we need more editor demand for replication. We need editors to want to publish this stuff.” Because if editors are really eager to publish this stuff, they said that interest will create its own supply of replications. Which is totally true.

Skip to 4 minutes and 48 seconds If you knew a really good replication would fly in the AER, you guys would leave right now, mid-lecture, and be off running something. Right? But how is that going to work? Editors are maximizing journal citations; that’s their objective function. They want the most-cited papers. If the field doesn’t like replications, doesn’t respect them, and doesn’t cite them, editors aren’t going to publish them. So I’ll talk next about how we may need a broader change in cultural norms among researchers to get there. Another approach is payment: just pay people to do replication. This is what 3ie has done. 3ie stands for the International Initiative for Impact Evaluation.

Skip to 5 minutes and 31 seconds They were founded about seven or eight years ago with a lot of big foundation money,

Skip to 5 minutes and 38 seconds among foundations working in international development. And a couple of years ago they started a replication program and they basically pay scholars. I think it’s $5,000, $10,000, small to medium amounts of money to do a replication of a list of prominent papers. That’s basically the approach. I think, in theory this is a good idea because it would sort of get people to do what they wouldn’t do otherwise. I think what you have to ask yourself is who is going to respond to a $5,000 incentive to do a replication? I think that’s the question kind of lurking in the background. And I think it may not motivate some of the leading scholars in the field. It’s not enough money.

Skip to 6 minutes and 20 seconds They’re going to be motivated by publication. Given the choice between spending a few months on a replication for $5,000 and a few months on a new paper that has a chance of getting into the AER, they’re going to work on the paper that may get into the AER. So there’s a kind of selection being induced here that is worth thinking through more carefully. The other thing, and this is the soft side but I think very important, is that in addition to direct incentives, we have to change how this kind of research is perceived. We have to change norms and attitudes.

Skip to 6 minutes and 47 seconds It has to be the case that if someone replicates my work, I don’t get pissed off; I welcome it. And that editors say, “Hey, it’s valuable to show something was confirmed.” Right now they will only publish a replication if it contradicts a major result. How do we achieve this? Maybe courses like this are part of it: disseminating a different perspective on the scientific method and on what our field’s practice should look like. That’s my hope. Maybe journals can change their practices. I do feel that these changes in norms and attitudes will reinforce all of the demand-side changes; people will be more responsive to incentives if they see the activity as fundamentally useful.

Why is there so little replication and how can we promote more?

In the last video, we discussed how rare replications are in economics journals. In this video, we turn to possible reasons for this scarcity.
