A paper on the "file drawer problem" by Franco, Malhotra, and Simonovits
In this article, authors Annie Franco, Neil Malhotra, and Gabor Simonovits leverage Time-sharing Experiments for the Social Sciences (TESS) to find evidence of publication bias and to identify at which stage of the research process it occurs.
The authors state that publication bias occurs when “publication of study results is based on the direction or significance of the findings.” In general, a study has a greater chance of being published if its results are statistically significant. This selective reporting produces what is known as the “file drawer” problem: statistically insignificant results tend to be stored away in file drawers rather than published. Franco, Malhotra, and Simonovits write that “failure to publish appears to be most strongly related to the authors’ perceptions that negative or null results are uninteresting and not worthy of further analysis or publication.”
Researchers have tried to address publication bias in the past by “replicat[ing] a meta-analysis with and without unpublished literature,” and by “solely examin[ing] published literature and rely[ing] on assumptions about the distribution of unpublished research.” Each of these methods has its limits, so the authors chose instead to “examine the publication outcomes of a cohort of studies.” In this case, they examined the outcomes of TESS, a research program that fields survey-based experiments and “submits proposals to peer review and distributes grants on a competitive basis.”
Franco, Malhotra, and Simonovits compared the statistical results of TESS experiments that were published to those that were not. The advantages of this strategy are:
- a known population of studies,
- a full accounting of what is and is not published,
- rigorous peer review of proposals, with a quality threshold that must be met, and
- the same high-quality survey research firm conducting all experiments.
[One concern is that TESS may not be completely representative of social science research.]
The analysis distinguished between two types of unpublished experiments: (1) those that were prepared for submission to a journal, and (2) those that were never written up in the first place.
The authors also considered “whether the results of each experiment are described as statistically significant by their authors,” since it is difficult to know each author’s exact intentions. This was important because authors’ perceptions influence how they present their data to readers.
Studies were classified into three categories: (1) strong – all or most hypotheses were supported; (2) null – all or most hypotheses were not supported; or (3) mixed – the remainder.
They found that null studies were far less likely to be published. This is problematic for two reasons:
1. Researchers may waste effort and resources conducting studies that have already been run, but in which the treatments did not produce the desired result.
2. If future studies on the same questions happen to obtain statistically significant results and are published, the literature will falsely suggest stronger effects.
To promote transparency, Franco, Malhotra, and Simonovits call for a better understanding of the motivations of researchers who choose projects based on their expected results. They also propose institutional reforms: “two-stage review (the first stage for the design and the second for the results), pre-analysis plans, and requirements to preregister studies should be complemented by incentives not to bury statistically insignificant results in file drawers. Creating high-status publication outlets for these studies could provide such incentives. The movement toward open-access journals may provide space for such articles. Further, the pre-analysis plans and registries themselves will increase researcher access to null results. Alternatively, funding agencies could impose costs on investigators who do not write up the results of funded studies. Last, resources should be deployed for replications of published studies if they are unrepresentative of conducted studies and more likely to report large effects.”
What do you think? Which of these proposed actions or incentives would be easiest to implement or be the most effective?
You can read the whole paper by clicking on the link in the SEE ALSO section at the bottom of this page.
Franco, Annie, Neil Malhotra, and Gabor Simonovits. 2014. “Publication Bias in the Social Sciences: Unlocking the File Drawer.” Science 345 (6203): 1502–5. doi:10.1126/science.1255484.
© Center for Effective Global Action