This content is taken from the University of California, Berkeley, Center for Effective Global Action (CEGA) & Berkeley Initiative for Transparency in the Social Sciences (BITSS)'s online course, Transparent and Open Social Science Research. Join the course to learn more.

"Why most published research findings are false" by John Ioannidis

In this article, Dr. John Ioannidis lays out a framework for understanding:

  • the probability that research findings are false,
  • the number of findings in a given research field that are valid,
  • how different biases affect the outcomes of research, and
  • what can be done to reduce error and bias.

Ioannidis first defines bias as “the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced,” and goes on to say that “bias can entail manipulation in the analysis or reporting of findings. Selective or distorted reporting is a typical form of such bias”. As bias increases, the chances of findings being accurate decrease. Measurement error and inefficient use of data, by contrast, are often raised as counter-arguments but are less likely now that more advanced technology is available.

Ioannidis goes on to list corollaries about the probability that a research finding is indeed true:

Corollary 1: “The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.” Research findings are more likely to be true with larger studies such as randomized controlled trials.

Corollary 2: “The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.” An example of a large effect that is useful and likely true is the impact of smoking on cancer or cardiovascular disease. This is more reliable than small postulated effects like genetic risk factors. Very small effect sizes can lead to false positive claims.

Corollary 3: “The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.” Ioannidis explains that the post-study probability that a finding is true depends heavily on the pre-study odds.
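Ioannidis formalizes this point with the positive predictive value (PPV): given pre-study odds R that a tested relationship is true, significance threshold α, and statistical power 1 − β, his paper gives PPV = (1 − β)R / (R − βR + α) in the absence of bias. The sketch below implements that formula in Python; the example odds values are purely illustrative.

```python
def ppv(pre_study_odds, alpha=0.05, power=0.8):
    """Probability that a statistically significant finding is true.

    Implements the bias-free formula from Ioannidis (2005):
        PPV = (1 - beta) * R / (R - beta * R + alpha)
    where R is the pre-study odds of a true relationship,
    alpha is the type I error rate, and 1 - beta is the power.
    """
    beta = 1 - power
    r = pre_study_odds
    return (power * r) / (r - beta * r + alpha)

# With favorable pre-study odds (1 true relationship per 4 tested),
# a significant result is quite likely to be true:
print(ppv(0.25))  # -> 0.8

# In an exploratory field where only 1 in 100 tested relationships
# is true, most "significant" findings are false:
print(ppv(0.01))  # -> roughly 0.14
```

Note how, holding α and power fixed, the PPV is driven almost entirely by the pre-study odds, which is exactly the dependence Corollary 3 describes.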

Corollary 4: “The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.” “Flexibility”, Ioannidis tells us, “increases the potential for transforming what would be ‘negative’ results into ‘positive’ results”. To combat this, efforts have been made to standardize the conduct and reporting of research, with the belief that adherence to such standards increases the proportion of true findings. True findings may also be more common when outcomes and definitions are universally agreed upon across a field, whereas experimental analytical methods may be subject to bias and selective outcome reporting.

Corollary 5: “The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.” Conflicts of interest may be inadequately reported and may increase bias. Prejudice may arise from a scientist’s belief in, or commitment to, a theory or their own prior work. Additionally, some research is conducted out of self-interest, to earn qualifications for promotion or tenure. All of these can distort results and make them unreliable.

Corollary 6: “The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.” When many players are involved, getting ahead of the competition may become the priority, which can lead to rushed experiments. Additionally, when teams focus on publishing “positive” results, others may respond by seeking “negative” results to disprove them. The result is the Proteus phenomenon: rapidly alternating extreme research claims and equally extreme opposite refutations.

While the extent of false and biased research findings may seem a harsh reality, the situation can be improved with larger studies of research questions whose pre-study probability is already high. Large-scale evidence is most helpful when it tests major concepts rather than narrow, specific questions. It is also beneficial to emphasize the totality of evidence, enhance research standards, and curtail prejudice.

Finally, Ioannidis suggests that, instead of only chasing statistical significance, researchers should focus on pre-study odds and understanding the implications of true versus non-true relationships.

After reading this, what are your reactions? Are you surprised? How, if at all, does this change your perception about research in general?

Read the full essay here: http://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.0020124&type=printable/ on PLOS.org.


Ioannidis, John P. A. 2005. “Why Most Published Research Findings Are False.” PLoS Med 2 (8): e124. doi:10.1371/journal.pmed.0020124.
