A paper on publication bias in political science journals by Gerber and Malhotra
This article examines two prestigious journals for publication bias caused by a “reliance on the 0.05 significance level.” Authors Alan Gerber and Neil Malhotra define publication bias as “the outcome that occurs when, for whatever reason, publication practices lead to bias in the published parameter estimates.”
The authors list four ways that bias can occur:
Editors and reviewers may prefer significant results and reject methodologically sound articles that do not achieve certain statistical significance thresholds.
Scholars may only submit studies with statistically significant results to journals and place the rest in “file drawers.”
Investigators may adjust sample sizes after observing that results narrowly fail tests of significance.
Researchers may engage in data mining to find model specifications and sub-samples that achieve significance thresholds, or may keep collecting data until the results cross the conventional significance threshold (p < 0.05).
To detect publication bias, Gerber and Malhotra conducted a “caliper test” in two leading political science journals, comparing the number of published test statistics falling just above and just below a critical value. Because test statistics follow continuous sampling distributions, roughly equal numbers of results should fall just above and just below any arbitrary cut-off.
Their results showed a sharp spike in published results whose test statistics fell just above the critical value. As they raised the significance threshold, the number of findings falling just above and just below it decreased. They concluded that many of the findings in these journals could be false due to publication bias.
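The intuition behind the caliper test can be sketched in a few lines of code. The function below is a hypothetical illustration, not the authors’ exact procedure (their paper uses specific caliper widths and data from the APSR and AJPS): it counts z-statistics landing just over versus just under the 1.96 critical value and computes an exact one-sided binomial p-value for the null hypothesis that a result in the caliper is equally likely to fall on either side.

```python
import math

def caliper_test(z_stats, critical=1.96, width=0.10):
    """Count z-statistics just over vs. just under the critical value
    (within +/- `width`) and return an exact one-sided binomial p-value
    for the null that both sides are equally likely (p = 0.5)."""
    over = sum(1 for z in z_stats if critical < abs(z) <= critical + width)
    under = sum(1 for z in z_stats if critical - width <= abs(z) <= critical)
    n = over + under
    # P(X >= over) under Binomial(n, 0.5): the chance of seeing this many
    # or more "just significant" results if there were no publication bias.
    p_value = sum(math.comb(n, k) for k in range(over, n + 1)) / 2 ** n
    return over, under, p_value

# Hypothetical data: 15 results just over the bar, 5 just under.
zs = [1.97] * 15 + [1.90] * 5
over, under, p = caliper_test(zs)  # p ≈ 0.021: unlikely under "no bias"
```

A large excess of “just significant” results yields a tiny p-value, which is how the authors can reject the no-bias hypothesis at such extreme levels.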
Gerber and Malhotra discuss the consequences of publication bias:
“First, publication bias may result in a significant understatement of the chance of a Type I error, which lends false confidence and may misdirect subsequent research. Second, anticipation of journal practices may distort how studies are conducted, encouraging data mining, specification searches, and post hoc sample size adjustments. Third, and perhaps most important, holding work to the arbitrary standard of p < 0.05 may discourage the pursuit and publication of work that is well designed and on important topics but unlikely to produce precisely measured estimates.”
There is value in well-designed, robust, innovative studies, even if their statistical power is low. Gerber and Malhotra propose that, along with greater attention to research design, study registries should be implemented to reduce publication bias. We’ll learn more about these registries next week.
Read the full article here.
“We examine the APSR and the AJPS for the presence of publication bias due to reliance on the 0.05 significance level. Our analysis employs a broad interpretation of publication bias, which we define as the outcome that occurs when, for whatever reason, publication practices lead to bias in the published parameter estimates. We examine the effect of the 0.05 significance level on the pattern of published findings using a ‘caliper’ test, a novel method for comparing studies with heterogeneous effects, and find that we can reject the hypothesis of no publication bias at the 1 in 32 billion level. Our findings therefore raise the possibility that the results reported in the leading political science journals may be misleading due to publication bias. We also discuss some of the reasons for publication bias and propose reforms to reduce its impact on research.”
Gerber, Alan, and Neil Malhotra. 2008. “Do Statistical Reporting Standards Affect What Is Published? Publication Bias in Two Leading Political Science Journals.” Quarterly Journal of Political Science 3 (3): 313–26. doi:10.1561/100.00008024.
© Center for Effective Global Action