
[0:00] Let's talk about open data and detecting fraudulent research. This is a real morality tale. We're going to talk about cases of fraud discovered in just the last few years in psychology; there have been a bunch of them, and we touched on some in lecture one. Basically, Uri Simonsohn detected fraud in a whole set of psychology papers using pretty simple statistics. We're going to talk about those statistics, and you could actually use them yourself to look for funny patterns in other papers as well.

[0:35] The promise of open data is not just that we'll discover existing fraud but that we'll deter future fraud: if people know there are researchers out there using these techniques, they're going to be hesitant about committing fraud. What was kind of neat about this paper is that there are a lot of really famous cases of famous scientists who made up their data, and it's really quite scandalous. I remember being in grade school, in middle school, and learning about Gregor Mendel. Everybody knows Gregor Mendel, the famous geneticist. He came up with these ideas about dominant and recessive genes and tested them with pea plants in his little monastery.

[1:12] He came up with theories that have actually been proven correct: if you go and test what he said he was testing, you'll find support for his theories. He was totally right about the smooth peas and the wrinkly peas, the green peas and the yellow peas. And in fact, if you cross heterozygous pea plants, a dominant-recessive mix, you're going to get a ratio of dominant to recessive phenotypes, the visible characteristics, of three to one in the next generation. That turned out to be true in the data, which is kind of amazing. He was a great genetic theorist. He also, it seems, made up all of his data.
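
As a quick illustration of where the three-to-one ratio comes from, here is a minimal Python sketch (the sample size and seed are arbitrary choices, not from the lecture): simulate a heterozygous Aa x Aa cross and count dominant versus recessive phenotypes.

    import random

    def cross_heterozygotes(n_offspring, seed=1):
        """Simulate an Aa x Aa cross. Each parent passes 'A' or 'a' with
        probability 1/2; any offspring carrying at least one 'A' shows the
        dominant phenotype, so the expected dominant:recessive ratio is 3:1."""
        rng = random.Random(seed)
        dominant = 0
        for _ in range(n_offspring):
            alleles = (rng.choice("Aa"), rng.choice("Aa"))
            if "A" in alleles:
                dominant += 1
        return dominant, n_offspring - dominant

    dom, rec = cross_heterozygotes(1000)
    print(f"dominant: {dom}, recessive: {rec}, ratio about {dom / rec:.2f} to 1")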

[1:58] If you actually look at his data, the three-to-one ratio in his samples wasn't exactly three to one. He knew he couldn't literally make it 75/25, so he'd make it 76/24 or 74/26. You get this data and think, "Oh yeah, there's a little bit of sampling variation." It turns out there was just way too little sampling variation. So as far back as 1936, the famous statistician R. A. Fisher took up the question. Fisher was also a very important biologist: he helped synthesize the ideas of Mendel, Darwin, and emerging biostatistics into a single field. He was an amazingly important statistician.

[2:41] He went back and looked at Mendel's old data and concluded there was way too little sampling variation, and he wrote a piece pointing this out. It has been a big controversy ever since. Basically, he carried out some standard tests and found that the odds you'd get as little sampling variation as Mendel did are about 7 in 100,000, so a p-value of about .00007. You can reject the hypothesis that it's real data with incredible confidence. "Bias seems to pervade the whole of the data," he wrote in a private letter to a colleague. And he was traumatized by this, because he was a big follower of Mendelian theory and he was formalizing a lot of its statistics.
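
To make Fisher's "too good to be true" check concrete, here is a minimal Python sketch. This is an illustration of the logic, not Fisher's actual computation, and the counts are invented: test each experiment against the expected 3:1 split, sum the chi-square statistics, and then look at the lower tail, the probability of a fit this good or better arising from genuine sampling.

    from scipy.stats import chi2, chisquare

    # Invented (dominant, recessive) counts for several experiments, each
    # suspiciously close to a perfect 3:1 split -- not Mendel's real data.
    experiments = [(751, 249), (299, 101), (1502, 498), (452, 148)]

    total_stat, total_df = 0.0, 0
    for dom, rec in experiments:
        n = dom + rec
        stat, _ = chisquare([dom, rec], f_exp=[0.75 * n, 0.25 * n])
        total_stat += stat
        total_df += 1  # one degree of freedom per two-category experiment

    # An ordinary goodness-of-fit test asks whether the fit is too BAD
    # (upper tail); Fisher asked whether it was too GOOD (lower tail).
    p_too_good = chi2.cdf(total_stat, df=total_df)
    print(f"combined chi-square = {total_stat:.3f} on {total_df} df")
    print(f"P(fit this good or better by chance) = {p_too_good:.5f}")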

[3:22] So when he wrote up his piece, he was very clear that he thought it was fictitious data. He has this great line about how "fictitious data can seldom survive scrutiny," because most people underestimate the frequency of large deviations in real data; fabricated numbers tend toward the expectation. I put this up first because, in the multiple cases of fraud we're going to talk about today, the fraud always looks like this in the cases that have been discovered: researchers making up data that looks too perfect, too clean, with too little sampling variation. That's just an interesting thing Fisher observed a long time ago.

[4:03] Basically, genuine data looks messier. Another interesting side point in this written article: Fisher refuses to outright say that Mendel was a fraud. Those of you who are, or have been, graduate students or research assistants will be upset by his alternative. He says there must have been an unscrupulous research assistant: a young monk out in the fields, collecting the peas, who knew what Mendel really wanted and doctored the data. There's no evidence such a person existed; Fisher just couldn't bring himself to accuse Mendel. The psychology of that is pretty interesting.

[4:48] And of course Mendel was a giant of genetic theory, an incredibly important intellectual, and he was right, it turns out, because people replicated his experiments. But his original data, it looks like, was made up. There are other famous cases, mentioned very briefly in Uri's piece, so I just want to touch on them. There was a very famous British psychologist of the last century, Sir Cyril Burt, who developed a lot of modern IQ testing, who, following Galton, carried out some early psychometric assessments of the UK population, and who was also one of the people who came up with twin studies.

[5:28] These are studies of identical twins that try to separate nature from nurture, and as we all know there have been tons of follow-on studies; it really is a useful research design. He came up with really important measures; he was a very important intellectual. But in a bunch of his studies correlating IQ scores across identical versus non-identical twins, there were patterns in the data that just didn't seem to make any sense. As he kept accumulating data, and again no one knows if it was even real data, the correlation coefficient on the IQ scores of these identical twins remained exactly 0.771 as the sample went from 20 to 40 to 60. That's possible. It's just really, really unlikely.
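
To see just how unlikely that is, here is a small Monte Carlo sketch of my own; it assumes bivariate-normal IQ scores and uses the sample sizes quoted above, and none of it comes from Burt's papers. Even when the true correlation is exactly 0.771, the sample correlation rarely rounds to 0.771, and essentially never does so three times in a row.

    import numpy as np

    rng = np.random.default_rng(42)
    true_r, trials = 0.771, 50_000
    cov = [[1.0, true_r], [true_r, 1.0]]  # bivariate normal, true correlation 0.771

    probs = []
    for n in (20, 40, 60):  # sample sizes quoted in the lecture
        hits = 0
        for _ in range(trials):
            xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
            if round(np.corrcoef(xy[:, 0], xy[:, 1])[0, 1], 3) == true_r:
                hits += 1
        probs.append(hits / trials)
        print(f"n={n}: P(sample r rounds to 0.771) is about {probs[-1]:.4f}")

    # Treating the samples as independent is a simplification (Burt's were
    # cumulative), but it gives the order of magnitude of the joint probability.
    print(f"P(all three land on exactly 0.771): about {np.prod(probs):.1e}")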

[6:11] Shortly after his death, critics pounced on this. It turns out there had been orders to burn all of his papers right after his death, so no one was able to check the original data. There were all kinds of weird things: co-authors on some of the projects who were never identified. He claimed all these people were helping him collect the data, but no one had ever met them. Some of them served as reviewers of his books, though; so maybe he was reviewing his own books. It's kind of sad that eminent intellectuals have done this.

[6:46] Okay, so that brings us back to Uri Simonsohn and this tradition of looking for data that looks too good to be true. The motivation is a feeling that was pervasive in empirical social psychology a few years ago: that a lot of empirical work was dodgy. We've talked about this, about p-hacking, publication bias, selective presentation of results, dropping observations, and all those other things. But even beyond that, there was a subset of social psychologists literally just making up data. In the first lecture of the term we mentioned the case of Professor Stapel in the Netherlands, who made up all this data.

[7:29] He was a really famous social psychologist, and he got busted in 2011. But even after that case, Uri suspected work by a couple of other tenured, prominent scholars: Lawrence Sanna at the University of Michigan and Dirk Smeesters, also in the Netherlands. Basically, Uri was looking at a volume of a leading journal in social psychology, the Journal of Experimental Social Psychology, and there were a couple of papers from 2011 by these authors where the summary statistics looked strange. They looked strange in a very particular way: there was too little variation across treatment arms in important statistics. And he asked, "What are the odds there'd be so little variance?"

[8:18] That's what started this analysis. He does basically what Fisher's critique of Mendel did: he computes the likelihood that you'd get certain patterns in the data by natural variation, by random sampling alone, and he can reject random variation. That's the idea. So we're going to go through the statistics and talk about these particular studies, and in the second half of the lecture we'll get into some of the other benefits of open data.
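
Here is a hedged Python sketch of that kind of test. The summary statistics below are invented, and this captures only the general flavor of Simonsohn's approach, not his exact procedure: given several treatment arms whose reported standard deviations are nearly identical, simulate honest arms of the same size under a common normal model and ask how often the simulated SDs end up that similar by chance.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented summary statistics: five treatment arms of n=15 whose reported
    # SDs are suspiciously close to one another.
    n_per_arm = 15
    reported_sds = np.array([25.11, 25.09, 25.14, 25.08, 25.12])
    observed_spread = reported_sds.std()  # how similar the arm SDs are

    # Simulate honest data: draw each arm from a normal with the pooled SD,
    # then measure how much the arm SDs vary under random sampling alone.
    sims = 100_000
    draws = rng.normal(0.0, reported_sds.mean(),
                       size=(sims, len(reported_sds), n_per_arm))
    arm_sds = draws.std(axis=2, ddof=1)   # SD within each simulated arm
    spreads = arm_sds.std(axis=1)         # spread of the five arm SDs

    p_value = (spreads <= observed_spread).mean()
    print(f"P(arm SDs this similar under random sampling): about {p_value:.5f}")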

Introduction to open data

Documented high-profile cases of fraud in scientific research go back as far as the 19th century, to Gregor Mendel, widely considered the father of modern genetics, and more recently to Sir Cyril Burt, an educational psychologist whose studies of IQ in twins have greatly influenced the continuing debate over nature versus nurture. In this video, Professor Miguel discusses how these frauds were discovered, as well as more recent detections of fraud in the psychology literature. The next video will delve deeper into one specific case of fraud discovered by Dr. Uri Simonsohn.


This video is from the free online course:

Transparent and Open Social Science Research

University of California, Berkeley