Skip to 0 minutes and 14 seconds So that’s for a single study. But now, to solve that problem of publication bias and P-hacking, the solution we came up with is to stop doing one study at a time and instead answer many questions in a single study. As a proof of concept we picked depression: we identified 17 treatments that are frequently used to treat depression, and it’s actually interesting that the guidelines are not very specific on which one you should use. A doctor should just pick one depending on patient preferences, doctor preferences, and so on. So there’s really a need for evidence in this area. And we made all possible comparisons of these seventeen treatments.
Skip to 0 minutes and 58 seconds So 17 times 17, minus the diagonal, is 272 comparisons. We identified 22 outcomes of interest, including stroke, insomnia, suicidal ideation, and some others. And within every comparison we looked at all 22 outcomes, meaning that in total we had almost six thousand research questions that we wanted answers to.
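A quick sanity check of the arithmetic above, as a minimal Python sketch (the counts of 17 treatments and 22 outcomes are taken directly from the transcript):

```python
# Ordered pairwise comparisons among 17 treatments:
# the full 17 x 17 grid minus the 17 self-comparisons on the diagonal.
n_treatments = 17
n_comparisons = n_treatments * n_treatments - n_treatments  # 272

# Each comparison is evaluated for 22 outcomes of interest.
n_outcomes = 22
n_questions = n_comparisons * n_outcomes  # 5,984 -- "almost six thousand"

print(n_comparisons, n_questions)  # → 272 5984
```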
Skip to 1 minute and 26 seconds As I just showed you, you need different positive controls. So we identified negative controls, and we automatically generated positive controls from them. In total we had 56,000 control questions.
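One common way to go from negative controls to positive controls in this kind of work is to pair each negative control (true relative risk of 1) with several known target effect sizes. The sketch below is only a schematic of that idea; the control names and effect sizes are hypothetical placeholders, not the actual controls used in the study:

```python
# Hypothetical sketch: derive positive-control questions from negative
# controls by pairing each negative control (true relative risk = 1.0)
# with several known target effect sizes.
negative_controls = [f"outcome_{i}" for i in range(1, 5)]  # placeholder names
target_effect_sizes = [1.5, 2.0, 4.0]  # illustrative target relative risks

positive_controls = [
    (outcome, rr)
    for outcome in negative_controls
    for rr in target_effect_sizes
]
print(len(positive_controls))  # → 12 (4 negative controls x 3 effect sizes)
```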
Skip to 1 minute and 40 seconds And so we ran our fancy machine, with the large-scale regularized regression, on all of these research questions. That was quite a lot of computation. And we actually did this for four databases. So in the end we had estimates for all of those, and we were able to do the empirical calibration, and we came up with 6,000 calibrated estimates for every database that we used. So the end result looks like this. Every one of these dots is an empirically calibrated estimate for a comparison between two depression treatments, for one of those 22 outcomes of interest.
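The empirical calibration step can be illustrated with a minimal sketch. This is an assumed simplification of the method: fit a normal systematic-error distribution to the log effect estimates of the negative controls (whose true log relative risk is 0), then compute a calibrated p-value for a new estimate against that empirical null rather than against an assumption of zero systematic error. The negative-control estimates below are made-up illustrative numbers, not study results:

```python
import math
import statistics

# Hypothetical log relative-risk estimates for negative controls
# (true log RR = 0, so any spread here reflects systematic error).
negative_control_log_rr = [0.10, -0.05, 0.20, 0.15, 0.02, 0.08]

mu = statistics.mean(negative_control_log_rr)      # mean systematic error
sigma = statistics.stdev(negative_control_log_rr)  # spread of systematic error

def calibrated_p_value(log_rr, se):
    """Two-sided p-value against the empirical null N(mu, sigma^2),
    combined with the estimate's own standard error."""
    total_sd = math.sqrt(sigma ** 2 + se ** 2)
    z = (log_rr - mu) / total_sd
    # Standard normal survival function via erf.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A new estimate with log RR = 0.7 and standard error 0.2:
print(round(calibrated_p_value(0.7, 0.2), 4))
```

Because `total_sd` includes the systematic-error spread, the calibrated p-value is more conservative than a naive p-value computed from the standard error alone.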
Skip to 2 minutes and 24 seconds Of course, this graph looks very different from the one that we saw in the literature, and the reason is obvious: there is no publication bias here. Nobody is saying you can’t show this dot because it’s not statistically significant, and there’s no P-hacking, so nobody’s moving the dots over the line just to make them statistically significant. So I would say that not only do we have information on small and null effect sizes, we also don’t have P-hacking and publication bias. And just last week we actually published the paper where we explain this process.
Skip to 3 minutes and 7 seconds I really would invite you to go to this URL; I’m going to leave it up for a couple of seconds. It has all the evidence that I just showed you, and you can actually browse through it, click on every one of those points, and see that each and every one of the estimates has the full information that you would expect, such as the covariate balance information, the information on the negative and positive controls, etc. But just figuring that out isn’t enough. We need a third step, now that we believe that we have a solution.
Skip to 3 minutes and 38 seconds We need to actually do it. And I think a lot of the time, methodological researchers like me fall into this trap: we have a solution, we publish our paper, and then that’s it. But no! We need to actually start doing this, and that’s where the large-scale evidence generation and evaluation in a network of databases, or LEGEND for short, project comes in.
Methodology of Systematic Observational Research Process
Dr. Martijn Schuemie explains the methodology for solving the publication bias and P-hacking problem. His team made all possible comparisons of those seventeen depression treatments.
You can check the URL he mentioned here: http://data.ohdsi.org/SystematicEvidence/