

Publication Bias and P-hacking

14.2
So let me demonstrate how you can use these negative controls to evaluate a specific study design.
20.8
So going back to that paper that I showed you earlier, studying isotretinoin and inflammatory bowel disease. The odds ratio that they reported
30.3
in that paper is shown here: an odds ratio of 4.36. I'm plotting it in this graph, and I'm going to come back to this graph a bit more later on, so I need to explain it very carefully. On the x-axis is the odds ratio, the effect size, and on the y-axis is the standard error, which is basically the width of the confidence interval. And so the higher up you are, the wider the confidence interval, which usually means you have less data. More data, you go down; less data, you go up. A nice thing about plotting it like that is that you can draw this dashed line.
64.9
And that dashed line represents the boundary where p equals 0.05, and so everything below the dashed line is statistically significant, with a p-value smaller than 0.05. Of course, the odds ratio that this paper reported is statistically significant, because it's below the dashed line. Now we wanted to evaluate this design. But, of course, we didn't have their analysis code, and we didn't have their data. So we tried to replicate it as best as we could. We also have a U.S. health insurance database that we can use. So I went through the paper line by line and implemented a design that I thought was exactly what they did. And I came up with this odds ratio. Our database was bigger than theirs.
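As an aside (not part of the lecture itself), the dashed line has a simple form: an estimate with log odds ratio theta and standard error SE has a two-sided p-value below 0.05 exactly when |theta| / SE > 1.96, so the boundary in this kind of plot is SE = |ln(OR)| / 1.96. The sketch below, assuming Python with numpy and matplotlib and using made-up example estimates, shows one way to draw such a plot; it is an illustration of the idea, not the code used in the talk.

```python
# Minimal sketch of an effect-size vs. standard-error plot with a p = 0.05
# boundary, as described above. The estimates below are made up.
import numpy as np
import matplotlib.pyplot as plt

estimates = [(4.36, 0.45), (1.8, 0.20), (0.9, 0.15)]  # (odds ratio, SE of log OR)

fig, ax = plt.subplots()
log_ors = np.log([e[0] for e in estimates])
ses = np.array([e[1] for e in estimates])
ax.scatter(log_ors, ses)

# An estimate is significant at p < 0.05 when |log(OR)| / SE > 1.96,
# i.e. when SE < |log(OR)| / 1.96: everything below the dashed line.
x = np.linspace(np.log(0.25), np.log(10), 200)
ax.plot(x, np.abs(x) / 1.96, linestyle="--", color="gray")

ticks = [0.25, 0.5, 1, 2, 4, 8]
ax.set_xticks(np.log(ticks))
ax.set_xticklabels([str(t) for t in ticks])
ax.set_xlabel("Odds ratio (log scale)")
ax.set_ylabel("Standard error")
plt.show()
```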
110.1
So that's why the standard error is smaller. But as you can see, it's actually pretty spot-on, and our confidence interval falls exactly within theirs. So I think I did a pretty good job of replicating that. The reason why I did that, of course, was that I wanted to add negative controls. So we came up with roughly 50 negative controls. These are drugs where anybody who looks at them would be pretty confident that, no, they do not cause inflammatory bowel disease. And we ran the same design on the same data for those fifty negative controls.
149.2
So each one of those blue dots is an odds ratio estimate for one of these negative controls, and remember, we all believe that the true odds ratio should be one. So why is this method, or this specific design, saying that there are lots of odds ratios greater than one? And actually, almost all of them are below the dashed line, meaning almost all of them are statistically significant. The reason is things like confounding: we're comparing cases to controls, people who have inflammatory bowel disease to people who don't have inflammatory bowel disease. Well, there are lots of differences between those people, not just the fact that they have the disease or not.
190.7
And so this method just does a bad job of adjusting for that component.
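To make the confounding point concrete, here is a stylized simulation (an addition to this article, not from the lecture): a made-up confounder, say overall healthcare utilisation, drives both the chance of receiving a drug and the chance of being diagnosed with the disease. Even though the drug has no effect at all, the crude case-control odds ratio comes out well above one.

```python
# Stylized simulation: confounding alone pushes the crude odds ratio of a
# truly null drug above 1. All quantities are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
utilisation = rng.normal(size=n)                       # hypothetical confounder

# The drug has NO effect on the disease, but both depend on utilisation.
p_drug = 1 / (1 + np.exp(-(-3 + utilisation)))
p_disease = 1 / (1 + np.exp(-(-4 + utilisation)))
drug = rng.random(n) < p_drug
disease = rng.random(n) < p_disease

# Crude 2x2 odds ratio, ignoring the confounder (as a naive design would).
a = np.sum(drug & disease)
b = np.sum(drug & ~disease)
c = np.sum(~drug & disease)
d = np.sum(~drug & ~disease)
print("Crude odds ratio:", (a * d) / (b * c))          # well above 1
```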
198.8
So that's pretty damning, especially as you can see that the estimate we had for isotretinoin is well within this cloud of negative controls. If you think about a p-value, it tells you whether you can reject the null hypothesis, but in this case the estimate sits right in the middle of estimates where we know the null hypothesis is true. So it really can't tell you that. And so we came up with a way of formalizing that, and computing a calibrated p-value which takes into account the information learned from the negative controls. And we can actually draw this orange area, which is where p is smaller than 0.05 after empirical calibration.
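For readers who want to see the mechanics, here is a deliberately simplified sketch of p-value calibration in the spirit of Schuemie et al. (2014): fit an "empirical null" distribution to the negative-control log odds ratios, then judge the estimate of interest against that distribution instead of against a null centred exactly on one. The published method uses a maximum-likelihood fit that also accounts for the sampling error of each negative control; this version, and all of the numbers in it, are illustrative only.

```python
# Simplified sketch of empirical p-value calibration using negative controls.
# Not the published implementation; the numbers below are made up.
import numpy as np
from scipy import stats

def calibrated_p(log_or, se, nc_log_ors):
    """Two-sided p-value judged against an empirical null learned from
    negative controls (drugs whose true odds ratio is assumed to be 1)."""
    mu = np.mean(nc_log_ors)          # systematic error: mean shift...
    tau = np.std(nc_log_ors, ddof=1)  # ...and spread across negative controls
    # Under the empirical null, the observed log OR ~ Normal(mu, tau^2 + se^2).
    z = (log_or - mu) / np.sqrt(tau**2 + se**2)
    return 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(0)
nc_log_ors = rng.normal(1.2, 0.5, size=50)    # 50 hypothetical negative controls
print(calibrated_p(np.log(4.36), 0.11, nc_log_ors))  # well above 0.05 here
```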
238.3
So you see, as you would expect, that the estimate we had is no longer statistically significant after empirical calibration. Probably a lot of you are already thinking now, "hey, this guy is just making a bad effect go away, right?" So just to be clear, I'm not saying that isotretinoin doesn't cause inflammatory bowel disease. Actually, we don't know. All I'm saying is that this specific design, as used on this data, cannot tell you one way or the other. We can't reject the null, but that doesn't mean the null is true. It's just that we can't learn the answer from this specific study.
281.4
I wrote a paper about this process a while back. So this is about one study, but there is actually another problem that I want to highlight. If we think about how a study happens: well, we start with an idea for a study. We then perform the study, and I just showed you that that can be problematic and how bad that can be. But then we submit the paper for publication and hopefully get it published, and we hopefully end up in PubMed.
315.1
But how well does that whole process work? That's a little bit more tricky. We can't really use a gold standard there. But we can just look at the output: what ends up in PubMed?
330.1
So here's what I'm showing in this plot. What I did is, I went through PubMed and extracted all the papers reporting observational studies that used a database, like the Taiwanese database, but of course also CPRD from the UK or other observational databases. And from those PubMed abstracts, I automatically extracted all of the effect size estimates that were reported, including the p-values and confidence intervals. So every one of these dots is one of those estimates that we extracted from the literature. Of course, this covers all sorts of different questions, different exposures, different outcomes, and all sorts of different designs that were used on the data.
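The lecture does not detail how the estimates were pulled out of the abstracts, but a rough idea of this kind of automatic extraction can be given with a regular expression; the pattern and the example sentence below are hypothetical and would need considerable refinement for real abstracts.

```python
# Rough, hypothetical sketch of extracting effect-size estimates with 95%
# confidence intervals from abstract text. Real extraction needs far more care.
import re

PATTERN = re.compile(
    r"(?:odds ratio|OR|relative risk|RR|hazard ratio|HR)[^\d]{0,20}"
    r"(\d+\.\d+)"                                   # point estimate
    r".{0,20}95%\s*(?:CI|confidence interval)[^\d]{0,10}"
    r"(\d+\.\d+)\s*(?:-|to|,)\s*(\d+\.\d+)",        # lower and upper bound
    re.IGNORECASE,
)

abstract = ("Exposure was associated with the outcome "
            "(odds ratio 2.10, 95% CI 1.30 to 3.40).")  # made-up numbers
for estimate, low, high in PATTERN.findall(abstract):
    print(estimate, low, high)
```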
373.4
But despite the fact that this is really a lot of different things in one plot, we can see a very clear pattern, I would argue. The pattern, pretty obviously, is that there's this big gap in the middle. We do not see many articles that talk about an effect size estimate that's not statistically significant. Now, this could be that researchers are just really good at picking a research question, and they only pick the ones that they know beforehand will be statistically significant. But even then, I would argue, I would like to know about all those things that are not statistically significant. Those might be important to me as well, right?
415
I mean, I want to know when something is safe. But there's also this very suspicious sharp boundary right at that point of 0.05. And that actually tells me, you know, it can't just be that they're good at picking their questions. There's also a lot of publication bias,
435
meaning that people only publish things that are statistically significant, maybe because reviewers reject everything else. But there's also something called P-hacking, where you do a study, you find the results are not statistically significant, and you then go back and basically rethink the design of your study until you actually get a statistically significant result. And both of those are bad: because of the multiple testing that you're doing but not actually reporting, you get a lot of false positives. And so, in summary, looking at how well things work, I would say the performance of observational research at an individual study level is pretty bad.
478.7
Because of the bias that we have, due to confounding, selection bias, and measurement error. But as a whole, it's even worse, because of publication bias and P-hacking.

How do you evaluate a study design? Dr. Martijn Schuemie shows the result of this study, isotretinoin and a specific outcome, inflammatory bowel disease, in a graph. He explains that he wants to evaluate this design, so he replicated the study as closely as possible to examine the result.

He also explains the concepts of P-hacking and publication bias. P-hacking is usually done by manipulating analysis choices until a desired result appears. The manipulation could include when to stop collecting data, whether the collected data will be transformed or not, which statistical tests and parameters will be used, and so on.
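A small simulation can show why this matters. Treating each re-analysis as an independent test of a true null (a simplification; real P-hacking re-uses the same data), trying ten variants and keeping the best p-value yields a "significant" finding roughly 40% of the time instead of the nominal 5%. The sketch below, assuming Python with numpy and scipy, is an illustration added to this article, not material from the lecture.

```python
# Stylized p-hacking simulation: run many null studies, try several analysis
# variants per study, and keep only the smallest p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_variants, n = 1000, 10, 100
false_positives = 0
for _ in range(n_studies):
    best_p = 1.0
    for _ in range(n_variants):            # re-analyse until something "works"
        a = rng.normal(size=n)              # two groups with no true difference
        b = rng.normal(size=n)
        best_p = min(best_p, stats.ttest_ind(a, b).pvalue)
    false_positives += best_p < 0.05
print(f"False positive rate: {false_positives / n_studies:.2f}")  # roughly 0.40
```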

If you want to learn more about P-hacking, click here.

You can click on the related links to see the research results in the final slide, from the paper "Interpreting observational studies: why empirical calibration is needed to correct p-values." We will continue to hear from Dr. Martijn Schuemie as he explains step 2 of 21st-century development.

This article is from the free online course AI and Big Data in Global Health Improvement.
