# Why Most Published Research Findings Are False

"Why Most Published Research Findings Are False"[1] is a 2005 research paper written by John Ioannidis, a professor at the Stanford School of Medicine, and published in PLOS Medicine. In the paper, Ioannidis argues that a large number, if not the majority, of published medical research papers contain results that cannot be replicated. The paper is considered foundational to the field of metascience.

## Argument

Suppose that in a given scientific field there is a known baseline probability that a result is true, denoted by $\displaystyle{ \mathbb{P}(\text{True}) }$. When a study is conducted, the probability that a positive result is obtained is $\displaystyle{ \mathbb{P}(+) }$. Given these two factors, we want to compute the conditional probability $\displaystyle{ \mathbb{P}(\text{True}\mid +) }$, which is known as the positive predictive value (PPV). Bayes' theorem gives the PPV as

$\displaystyle{ \mathbb{P}(\text{True} \mid +) = \frac{(1-\beta)\,\mathbb{P}(\text{True})}{(1-\beta)\,\mathbb{P}(\text{True}) + \alpha\left[1-\mathbb{P}(\text{True})\right]} }$

where $\displaystyle{ \alpha }$ is the type I error rate (false positive rate) and $\displaystyle{ \beta }$ is the type II error rate (false negative rate); the statistical power is $\displaystyle{ 1-\beta }$. It is customary in most scientific research to aim for $\displaystyle{ \alpha = 0.05 }$ and $\displaystyle{ \beta = 0.2 }$. If we assume $\displaystyle{ \mathbb{P}(\text{True}) = 0.1 }$ for a given scientific field, then we may compute the PPV for different values of $\displaystyle{ \alpha }$ and $\displaystyle{ \beta }$:
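The Bayes formula above is easy to evaluate numerically. A minimal sketch (the function name `ppv` is my own) checks the benchmark case $\displaystyle{ \alpha = 0.05 }$, $\displaystyle{ \beta = 0.2 }$, $\displaystyle{ \mathbb{P}(\text{True}) = 0.1 }$:

```python
def ppv(alpha: float, beta: float, p_true: float) -> float:
    """Positive predictive value via Bayes' theorem.

    alpha: type I error rate; beta: type II error rate (power = 1 - beta);
    p_true: baseline probability that a tested hypothesis is true.
    """
    true_positives = (1.0 - beta) * p_true        # P(+ | True) * P(True)
    false_positives = alpha * (1.0 - p_true)      # P(+ | False) * P(False)
    return true_positives / (true_positives + false_positives)

# Customary benchmarks alpha = 0.05, beta = 0.2, with P(True) = 0.1:
print(round(ppv(0.05, 0.2, 0.1), 2))  # → 0.64
```

Even at the customary error-rate benchmarks, fewer than two-thirds of positive results are true under this prior.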

| $\displaystyle{ \alpha }$ \ $\displaystyle{ \beta }$ | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
|---|---|---|---|---|---|---|---|---|---|
| 0.01 | 0.91 | 0.90 | 0.89 | 0.87 | 0.85 | 0.82 | 0.77 | 0.69 | 0.53 |
| 0.02 | 0.83 | 0.82 | 0.80 | 0.77 | 0.74 | 0.69 | 0.63 | 0.53 | 0.36 |
| 0.03 | 0.77 | 0.75 | 0.72 | 0.69 | 0.65 | 0.60 | 0.53 | 0.43 | 0.27 |
| 0.04 | 0.71 | 0.69 | 0.66 | 0.63 | 0.58 | 0.53 | 0.45 | 0.36 | 0.22 |
| 0.05 | 0.67 | 0.64 | 0.61 | 0.57 | 0.53 | 0.47 | 0.40 | 0.31 | 0.18 |

However, the simple formula for PPV derived from Bayes' theorem does not account for bias in study design or reporting. In the presence of bias $\displaystyle{ u\in[0,1] }$, the PPV is given by the more general expression

$\displaystyle{ \mathbb{P}(\text{True}\mid +) = \frac{\left[1-(1-u)\beta\right]\mathbb{P}(\text{True})}{\left[1-(1-u)\beta\right]\mathbb{P}(\text{True}) + \left[(1-u)\alpha + u\right]\left[1-\mathbb{P}(\text{True})\right]} }$

The introduction of bias tends to depress the PPV; in the extreme case of maximal bias ($\displaystyle{ u = 1 }$), a positive result carries no information and $\displaystyle{ \mathbb{P}(\text{True}\mid +) = \mathbb{P}(\text{True}) }$. Even a study that meets the benchmark requirements $\displaystyle{ \alpha = 0.05 }$ and $\displaystyle{ \beta = 0.2 }$ and is free of bias achieves a PPV of only $\displaystyle{ 0.8 \times 0.1 / (0.8 \times 0.1 + 0.05 \times 0.9) = 0.64 }$, so there is still a 36% probability that a paper reporting a positive result will be incorrect; if the base probability of a true result is lower, this pushes the PPV lower still. Furthermore, there is strong evidence that the average statistical power of a study in many scientific fields is well below the benchmark level of 0.8.[2][3][4]
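The effect of the bias term can likewise be checked numerically. A sketch (the function name `ppv_biased` is my own) shows that $\displaystyle{ u = 0 }$ recovers the simple Bayes formula, moderate bias drags the PPV down sharply, and $\displaystyle{ u = 1 }$ collapses the PPV to the baseline $\displaystyle{ \mathbb{P}(\text{True}) }$:

```python
def ppv_biased(alpha: float, beta: float, p_true: float, u: float) -> float:
    """Bias-adjusted positive predictive value; u in [0, 1] is the bias."""
    num = (1.0 - (1.0 - u) * beta) * p_true
    den = num + ((1.0 - u) * alpha + u) * (1.0 - p_true)
    return num / den

# No bias (u = 0): reduces to the simple Bayes formula.
print(round(ppv_biased(0.05, 0.2, 0.1, 0.0), 2))  # → 0.64
# Moderate bias (u = 0.2): most positive results are now false.
print(round(ppv_biased(0.05, 0.2, 0.1, 0.2), 2))  # → 0.28
# Maximal bias (u = 1): PPV equals the baseline P(True).
print(round(ppv_biased(0.05, 0.2, 0.1, 1.0), 2))  # → 0.1
```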

Given the realities of bias, low statistical power, and a small number of true hypotheses, Ioannidis concludes that the majority of studies in a variety of scientific fields are likely to report results that are false.

### Corollaries

In addition to the main result, Ioannidis lists six corollaries for factors that can influence the reliability of published research:

1. The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
2. The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
3. The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
4. The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
5. The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
6. The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.

## Influence

Despite initial skepticism about the claims made in the paper, Ioannidis's argument has been accepted by a large number of researchers.[5] The growth of metascience and the recognition of a scientific replication crisis have bolstered the paper's credibility, and led to calls for methodological reforms in scientific research.[6][7]