# p-value


In null-hypothesis significance testing, the p-value[note 1] is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct.[2][3] A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Reporting p-values of statistical tests is common practice in academic publications of many quantitative fields. Since the precise meaning of p-value is hard to grasp, misuse is widespread and has been a major topic in metascience.[4][5]

## Basic concepts

In statistics, every conjecture concerning the unknown probability distribution of a collection of random variables representing the observed data $\displaystyle{ X }$ in some study is called a statistical hypothesis. If we state one hypothesis only and the aim of the statistical test is to see whether this hypothesis is tenable, but not to investigate other specific hypotheses, then such a test is called a null hypothesis test.

As our statistical hypothesis will, by definition, state some property of the distribution, the null hypothesis is the default hypothesis under which that property does not exist. The null hypothesis is typically that some parameter (such as a correlation or a difference between means) in the populations of interest is zero. Note that our hypothesis might specify the probability distribution of $\displaystyle{ X }$ precisely, or it might only specify that it belongs to some class of distributions. Often, we reduce the data to a single numerical statistic, e.g., $\displaystyle{ T }$, whose marginal probability distribution is closely connected to a main question of interest in the study.

The p-value is used in the context of null hypothesis testing in order to quantify the statistical significance of a result, the result being the observed value of the chosen statistic $\displaystyle{ T }$.[note 2] The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. A result is said to be statistically significant if it allows us to reject the null hypothesis. All other things being equal, smaller p-values are taken as stronger evidence against the null hypothesis.

Loosely speaking, rejection of the null hypothesis implies that there is sufficient evidence against it.

As a particular example, if a null hypothesis states that a certain summary statistic $\displaystyle{ T }$ follows the standard normal distribution N(0,1), then the rejection of this null hypothesis could mean that (i) the mean of $\displaystyle{ T }$ is not 0, or (ii) the variance of $\displaystyle{ T }$ is not 1, or (iii) $\displaystyle{ T }$ is not normally distributed. Different tests of the same null hypothesis would be more or less sensitive to different alternatives. However, even if we do manage to reject the null hypothesis for all 3 alternatives, and even if we know the distribution is normal and variance is 1, the null hypothesis test does not tell us which non-zero values of the mean are now most plausible. The more independent observations from the same probability distribution one has, the more accurate the test will be, and the higher the precision with which one will be able to determine the mean value and show that it is not equal to zero; but this will also increase the importance of evaluating the real-world or scientific relevance of this deviation.

## Definitions and interpretations

### Definitions

The definition of the p-value is simple and unambiguous when there is a natural test statistic with a unimodal distribution on the real line. However, there is more than one way to generalize it to other situations, with varying degrees of abstraction, and there is no general consensus on the "best" generalization.

#### Probability of obtaining a real-valued test statistic at least as extreme as the one actually obtained

Consider an observed test statistic $\displaystyle{ t }$, a realization of a random variable $\displaystyle{ T }$ whose distribution is unknown. Then the p-value $\displaystyle{ p }$ is the probability of observing a test-statistic value at least as "extreme" as $\displaystyle{ t }$ if the null hypothesis $\displaystyle{ H_0 }$ were true. That is:

• $\displaystyle{ p = \Pr(T \geq t \mid H_0) }$ for a one-sided right-tail test,
• $\displaystyle{ p = \Pr(T \leq t \mid H_0) }$ for a one-sided left-tail test,
• $\displaystyle{ p = 2\min\{\Pr(T \geq t \mid H_0),\Pr(T \leq t \mid H_0)\} }$ for a two-sided test. If distribution $\displaystyle{ T }$ is symmetric about zero, then $\displaystyle{ p =\Pr(|T| \geq |t| \mid H_0) }$
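As a minimal sketch of these three definitions, assuming the null distribution of $\displaystyle{ T }$ is standard normal (as in a z-test), the tail probabilities can be computed with only the standard library:

```python
from math import erfc, sqrt

def normal_sf(t):
    """Survival function Pr(T >= t) for a standard normal T."""
    return 0.5 * erfc(t / sqrt(2))

def p_values(t):
    """One- and two-sided p-values for an observed statistic t
    under H0: T ~ N(0, 1)."""
    right = normal_sf(t)         # Pr(T >= t | H0), right-tail test
    left = 1.0 - right           # Pr(T <= t | H0), left-tail test
    two_sided = 2 * min(right, left)
    return right, left, two_sided

right, left, two = p_values(1.96)
print(f"right={right:.4f} left={left:.4f} two-sided={two:.4f}")
# For t = 1.96 the two-sided p-value is close to 0.05.
```

Because the standard normal distribution is symmetric about zero, `two_sided` here equals $\displaystyle{ \Pr(|T| \geq |t| \mid H_0) }$, matching the special case noted above.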

#### Probability of obtaining a test statistic at least as extreme as the one actually obtained

The above definition requires the test statistic $\displaystyle{ t }$ to be a real number, or at least totally ordered. When there is no natural real-valued test statistic to compute, such as when the observed data is categorical, a more abstract definition of the p-value is as follows:

Let $\displaystyle{ X }$, the observed random variable, take values in $\displaystyle{ \mathcal X }$. Any division of $\displaystyle{ \mathcal X }$ into nested "onion rings" $\displaystyle{ \mathcal X = \mathcal X_1 \supset \mathcal X_2 \supset \mathcal X_3 \supset \cdots }$ with $\displaystyle{ \cap_{n\in\N}\mathcal X_n = \emptyset }$ defines a corresponding p-value $\displaystyle{ p(x) }$ as follows: for any $\displaystyle{ x\in \mathcal X }$, $\displaystyle{ p(x) }$ is the probability $\displaystyle{ \Pr(X\in \mathcal X_t) }$, where $\displaystyle{ \mathcal X_t }$ is the smallest subset in the sequence that still contains $\displaystyle{ x }$.

This generalizes to the continuous case:

Divide $\displaystyle{ \mathcal X }$ into nested "onion rings" indexed by $\displaystyle{ \R }$, so that $\displaystyle{ \mathcal X_r \supset \mathcal X_s }$ whenever $\displaystyle{ r \leq s }$, with $\displaystyle{ \cap_{r\in \R}\mathcal X_r = \emptyset }$; then for any $\displaystyle{ x\in \mathcal X }$, the corresponding p-value $\displaystyle{ p(x) }$ is the probability $\displaystyle{ \Pr(X\in \mathcal X_t) }$, where $\displaystyle{ t = \sup \{r : x \in \mathcal X_{r}\} }$.

In other words, we impose a real-valued test statistic $\displaystyle{ T }$ by dividing the range of observations into a nested system of shrinking subsets, and the $\displaystyle{ T }$ value of a particular observation is the index of the smallest subset that still contains it. Thus, $\displaystyle{ T }$ can be understood as measuring the "extremeness" of an observation.

Any of the three p-values defined by a real-valued test statistic $\displaystyle{ T }$ is a special case of this more abstract p-value, or a slightly modified version of it:

• the one-sided right-tail p-value is the p-value corresponding to the partition $\displaystyle{ \mathcal X_t := \{x\in \mathcal X: T(x) \geq t\} }$
• the one-sided left-tail p-value is the one-sided right-tail p-value defined by $\displaystyle{ -T }$.
• If distribution $\displaystyle{ T }$ is symmetric about zero, then the two-sided p-value is the p-value corresponding to the partition $\displaystyle{ \mathcal X_t := \{x\in \mathcal X: |T(x)| \geq t\} }$; in general, it is twice the minimum of the two one-sided p-values.

#### Probability of obtaining a conditional probability at least as extreme as the one actually obtained

While the above definition appears arbitrary, as it allows any division of $\displaystyle{ \mathcal X }$ into "onion rings", if the probability distribution on $\displaystyle{ \mathcal X }$ defined by the null hypothesis is discrete, there is a particularly natural division: $\displaystyle{ \mathcal X_t := \{x\in \mathcal X: \Pr(X=x|H_0 ) \leq t\} }$. That is, we consider any observation with a lower conditional probability to be more extreme. Equivalently, we define the test statistic $\displaystyle{ T }$ to be the conditional probability $\displaystyle{ T(x):= \Pr(X=x | H_0) }$.
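As an illustrative sketch of this probability ordering, consider an exact binomial test: the p-value sums the probabilities of every outcome no more likely than the one observed (the hypothetical helper name `exact_pvalue` is ours):

```python
from math import comb

def binom_pmf(k, n, p):
    """Pr(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def exact_pvalue(x, n, p):
    """p-value under the probability ordering: sum the probabilities of
    all outcomes whose probability is no greater than the observed one."""
    px = binom_pmf(x, n, p)
    # small tolerance guards against floating-point ties
    return sum(binom_pmf(k, n, p) for k in range(n + 1)
               if binom_pmf(k, n, p) <= px + 1e-12)

# Observing 8 successes in 10 fair trials: both tails' unlikely outcomes count.
print(exact_pvalue(8, 10, 0.5))  # 112/1024 ~ 0.109
```

Note that for a symmetric null distribution this ordering reproduces the usual two-sided tail probability, but it also applies when no ordering of outcomes is given in advance.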

If the probability distribution $\displaystyle{ \mu_0 }$ on $\displaystyle{ \mathcal X }$ defined by the null hypothesis is not discrete, but is at least differentiable with respect to some natural underlying measure $\displaystyle{ \mu }$, then it has a natural probability density function $\displaystyle{ \rho(x) = \frac{d\mu_0}{d\mu}(x) }$, and we define $\displaystyle{ \mathcal X_t := \{x\in \mathcal X: \rho(x) \leq t\} }$. To see why "natural" is needed: if we do not require $\displaystyle{ \mu }$ to be natural, then we can just choose $\displaystyle{ \mu= \mu_0 }$, which would make $\displaystyle{ \rho = 1 }$, making the p-value always equal to 1, thus trivial.

#### Abstract definition

A yet more abstract definition does not require any ordered statistic, or concept of "extremeness".[6]

A p-value is any test statistic $\displaystyle{ p }$ taking values in $\displaystyle{ [0, 1] }$, such that for every $\displaystyle{ 0 \leq \alpha \leq 1 }$, $\displaystyle{ \Pr(p(X) \leq \alpha | H_0) \leq \alpha. }$

This more abstract definition includes the previous definitions as special cases, and it is designed exactly for the purpose of allowing significance tests.
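The defining inequality can be checked empirically. In the following sketch (assuming a standard normal null, our choice for illustration), one-sided p-values are simulated under $\displaystyle{ H_0 }$ and the fraction falling below each $\displaystyle{ \alpha }$ is compared to $\displaystyle{ \alpha }$ itself:

```python
import random
from math import erfc, sqrt

random.seed(0)

def p_right(t):
    """One-sided right-tail p-value: Pr(T >= t) for a standard normal T."""
    return 0.5 * erfc(t / sqrt(2))

# Draw test statistics under H0 (standard normal) and check the defining
# property Pr(p(X) <= alpha | H0) <= alpha at a few alpha levels.
ps = [p_right(random.gauss(0, 1)) for _ in range(100_000)]
for alpha in (0.01, 0.05, 0.10, 0.50):
    frac = sum(p <= alpha for p in ps) / len(ps)
    print(f"alpha={alpha:.2f}  Pr(p <= alpha) ~= {frac:.3f}")
```

Because this statistic is continuous, the inequality holds with equality (up to simulation noise): the p-value is exactly uniform under the null.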

### Interpretations

#### p-value as the statistic for performing significance tests

In a significance test, the null hypothesis $\displaystyle{ H_0 }$ is rejected if the p-value is less than or equal to a predefined threshold value $\displaystyle{ \alpha }$, which is referred to as the alpha level or significance level. $\displaystyle{ \alpha }$ is not derived from the data, but rather is set by the researcher before examining the data. $\displaystyle{ \alpha }$ is commonly set to 0.05, though lower alpha levels are sometimes used.

Further, the researcher does not need to provide the threshold value $\displaystyle{ \alpha }$. If the researcher reports the exact p-value, such as "p = 0.03", then a reader can supply their own $\displaystyle{ \alpha }$ and construct their own significance test. In this sense, an exactly reported p-value encapsulates the result of a whole family of significance tests.[6]

The p-value is a function of the chosen test statistic $\displaystyle{ T }$ and is therefore a random variable. If the null hypothesis fixes the probability distribution of $\displaystyle{ T }$ precisely, and if that distribution is continuous, then when the null hypothesis is true, the p-value is uniformly distributed between 0 and 1. Thus, the p-value is not fixed: if the same test is repeated independently with fresh data, one will obtain a different p-value in each iteration. If the null hypothesis is composite, or the distribution of the statistic is discrete, then when the null hypothesis is true the probability of obtaining a p-value less than or equal to any number between 0 and 1 is at most that number. It remains the case that very small values are relatively unlikely if the null hypothesis is true, and that a significance test at level $\displaystyle{ \alpha }$ is obtained by rejecting the null hypothesis if the p-value is less than or equal to $\displaystyle{ \alpha }$.
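The discrete case can be made concrete with a small sketch: for a binomial statistic, only a handful of p-values are attainable, so the rejection probability at a given level can fall strictly below that level (a conservative test):

```python
from math import comb

n, p0 = 10, 0.5
# Binomial(10, 0.5) pmf over all possible observations k
pmf = [comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(n + 1)]
# Right-tail p-value for each possible observation k: Pr(X >= k | H0)
pvals = [sum(pmf[k:]) for k in range(n + 1)]

# Under H0 the p-value is discrete: Pr(p <= alpha) can be well below alpha.
alpha = 0.05
attained = sum(pmf[k] for k in range(n + 1) if pvals[k] <= alpha)
print(f"Pr(p <= {alpha}) = {attained:.4f}")  # 11/1024, well below 0.05
```

Here only observations of 9 or 10 successes give a p-value at or below 0.05, so the true type I error rate is 11/1024, roughly 0.011, rather than 0.05.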

Different p-values based on independent sets of data can be combined, for instance using Fisher's combined probability test.

### Distribution

When the null hypothesis is true, if it takes the form $\displaystyle{ H_0: \theta = \theta_0 }$, and the underlying random variable is continuous, then the probability distribution of the p-value is uniform on the interval [0,1]. By contrast, if the alternative hypothesis is true, the distribution is dependent on sample size and the true value of the parameter being studied.[7][8]

The distribution of p-values for a group of studies is sometimes called a p-curve.[9] A p-curve can be used to assess the reliability of scientific literature, such as by detecting publication bias or p-hacking.[9][10]

### For composite hypothesis

In parametric hypothesis testing problems, a simple or point hypothesis refers to a hypothesis where the parameter's value is assumed to be a single number. In contrast, in a composite hypothesis the parameter's value is given by a set of numbers. For example, when testing the null hypothesis that a distribution is normal with a mean less than or equal to zero against the alternative that the mean is greater than zero (variance known), the null hypothesis does not specify the probability distribution of the appropriate test statistic. In the example just mentioned, that would be the Z-statistic of the one-sided one-sample Z-test. For each possible value of the theoretical mean, the Z-test statistic has a different probability distribution. In these circumstances (the case of a so-called composite null hypothesis) the p-value is defined by taking the least favourable null-hypothesis case, which is typically on the border between null and alternative.
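For the Z-test example just described, the least favourable case is the boundary value $\displaystyle{ \mu = 0 }$, since that choice maximizes the tail probability over the whole null set $\displaystyle{ \mu \leq 0 }$. A minimal sketch:

```python
from math import erfc, sqrt

def z_test_pvalue(xbar, sigma, n):
    """One-sided Z-test of H0: mu <= 0 vs H1: mu > 0, with known sigma.
    The composite null's p-value is evaluated at the least favourable
    boundary case mu = 0, which maximizes Pr(Z >= z | mu) over the null."""
    z = xbar / (sigma / sqrt(n))
    return 0.5 * erfc(z / sqrt(2))   # Pr(N(0, 1) >= z)

# Sample mean 0.5 from n = 50 observations with known sigma = 2.
print(z_test_pvalue(xbar=0.5, sigma=2.0, n=50))
```

Any $\displaystyle{ \mu \lt 0 }$ in the null set would give a smaller tail probability, so evaluating at the boundary is what guarantees the stated significance level.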

This definition ensures the complementarity of p-values and alpha-levels. If we set the significance level alpha to 0.05, and only reject the null hypothesis if the p-value is less than or equal to 0.05, then our hypothesis test will indeed have significance level (maximal type 1 error rate) 0.05. As Neyman wrote: “The error that a practising statistician would consider the more important to avoid (which is a subjective judgment) is called the error of the first kind. The first demand of the mathematical theory is to deduce such test criteria as would ensure that the probability of committing an error of the first kind would equal (or approximately equal, or not exceed) a preassigned number α, such as α = 0.05 or 0.01, etc. This number is called the level of significance” (Neyman 1976, p. 161, in "The Emergence of Mathematical Statistics: A Historical Sketch with Particular Reference to the United States", in "On the History of Statistics and Probability", ed. D. B. Owen, New York: Marcel Dekker, pp. 149–193). See also Raymond Hubbard and M. J. Bayarri, "Confusion Over Measures of Evidence (p's) Versus Errors (α's) in Classical Statistical Testing", The American Statistician, August 2003, Vol. 57, No. 3, pp. 171–182 (with discussion). For a concise modern statement, see Chapter 10 of Larry Wasserman, "All of Statistics: A Concise Course in Statistical Inference" (Springer, 2004).

## Usage

The p-value is widely used in statistical hypothesis testing, specifically in null hypothesis significance testing. In this method, before conducting the study, one first chooses a model (the null hypothesis) and the alpha level α (most commonly .05). After analyzing the data, if the p-value is less than α, that is taken to mean that the observed data is sufficiently inconsistent with the null hypothesis for the null hypothesis to be rejected. However, that does not prove that the null hypothesis is false. The p-value does not, in itself, establish probabilities of hypotheses. Rather, it is a tool for deciding whether to reject the null hypothesis.[11]

### Misuse

Main page: Misuse of p-values

According to the ASA, there is widespread agreement that p-values are often misused and misinterpreted.[3] One practice that has been particularly criticized is accepting the alternative hypothesis for any p-value nominally less than .05 without other supporting evidence. Although p-values are helpful in assessing how incompatible the data are with a specified statistical model, contextual factors must also be considered, such as "the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis".[3] Another concern is that the p-value is often misunderstood as being the probability that the null hypothesis is true.[3][12]

Some statisticians have proposed abandoning p-values and focusing more on other inferential statistics,[3] such as confidence intervals,[13][14] likelihood ratios,[15][16] or Bayes factors,[17][18][19] but there is heated debate on the feasibility of these alternatives.[20]

## History

An early question addressed with p-value reasoning was whether male and female births are equally likely. John Arbuthnot studied this question in 1710,[21][22][23][24] and examined birth records in London for each of the 82 years from 1629 to 1710. In every year, the number of males born in London exceeded the number of females. Considering more male or more female births as equally likely, the probability of the observed outcome is $\displaystyle{ 1/2^{82} }$, or about 1 in 4,836,000,000,000,000,000,000,000; in modern terms, the p-value. This is vanishingly small, leading Arbuthnot to conclude that this was not due to chance, but to divine providence: "From whence it follows, that it is Art, not Chance, that governs." In modern terms, he rejected the null hypothesis of equally likely male and female births at the $\displaystyle{ p = 1/2^{82} }$ significance level. This and other work by Arbuthnot is credited as "… the first use of significance tests …",[25] the first example of reasoning about statistical significance,[26] and "… perhaps the first published report of a nonparametric test …",[22] specifically the sign test; see details at Sign test § History.
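Arbuthnot's sign-test computation can be reproduced exactly with rational arithmetic:

```python
from fractions import Fraction

# Arbuthnot's sign test: under H0 (male and female births equally likely),
# the probability that male births exceed female births in all 82 years
# is (1/2)^82.
p = Fraction(1, 2) ** 82
print(p)          # 1/4835703278458516698824704
print(float(p))   # about 2.07e-25, i.e. roughly 1 in 4.8 * 10**24
```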

The same question was later addressed by Pierre-Simon Laplace, who instead used a parametric test, modeling the number of male births with a binomial distribution:[27]

In the 1770s Laplace considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls. He concluded by calculation of a p-value that the excess was a real, but unexplained, effect.

The p-value was first formally introduced by Karl Pearson, in his Pearson's chi-squared test,[28] using the chi-squared distribution and notated as capital P.[28] The p-values for the chi-squared distribution (for various values of χ2 and degrees of freedom), now notated as P, were calculated in (Elderton 1902), collected in (Pearson 1914).

The use of the p-value in statistics was popularized by Ronald Fisher,[29] and it plays a central role in his approach to the subject.[30] In his influential book Statistical Methods for Research Workers (1925), Fisher proposed the level p = 0.05, or a 1 in 20 chance of being exceeded by chance, as a limit for statistical significance, and applied this to a normal distribution (as a two-tailed test), thus yielding the rule of two standard deviations (on a normal distribution) for statistical significance (see 68–95–99.7 rule).[31][note 3][32]

He then computed a table of values, similar to Elderton but, importantly, reversed the roles of χ2 and p. That is, rather than computing p for different values of χ2 (and degrees of freedom n), he computed values of χ2 that yield specified p-values, specifically 0.99, 0.98, 0.95, 0.90, 0.80, 0.70, 0.50, 0.30, 0.20, 0.10, 0.05, 0.02, and 0.01.[33] That allowed computed values of χ2 to be compared against cutoffs and encouraged the use of p-values (especially 0.05, 0.02, and 0.01) as cutoffs, instead of computing and reporting p-values themselves. The same type of tables were then compiled in (Fisher & Yates), which cemented the approach.[32]

As an illustration of the application of p-values to the design and interpretation of experiments, in his following book The Design of Experiments (1935), Fisher presented the lady tasting tea experiment,[34] which is the archetypal example of the p-value.

To evaluate a lady's claim that she (Muriel Bristol) could distinguish by taste how tea is prepared (first adding the milk to the cup, then the tea, or first tea, then milk), she was sequentially presented with 8 cups: 4 prepared one way, 4 prepared the other, and asked to determine the preparation of each cup (knowing that there were 4 of each). In that case, the null hypothesis was that she had no special ability, the test was Fisher's exact test, and the p-value was $\displaystyle{ 1/\binom{8}{4} = 1/70 \approx 0.014, }$ so Fisher was willing to reject the null hypothesis (consider the outcome highly unlikely to be due to chance) if all were classified correctly. (In the actual experiment, Bristol correctly classified all 8 cups.)

Fisher reiterated the p = 0.05 threshold and explained its rationale, stating:[35]

It is usual and convenient for experimenters to take 5 per cent as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results.

He also applies this threshold to the design of experiments, noting that had only 6 cups been presented (3 of each), a perfect classification would have only yielded a p-value of $\displaystyle{ 1/\binom{6}{3} = 1/20 = 0.05, }$ which would not have met this level of significance.[35] Fisher also underlined the interpretation of p, as the long-run proportion of values at least as extreme as the data, assuming the null hypothesis is true.
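Both of Fisher's tea-tasting p-values follow from a single combinatorial count, the chance of a perfect classification (the helper name `tea_pvalue` is ours):

```python
from math import comb

def tea_pvalue(cups, of_each):
    """Probability of perfectly classifying all cups by chance, when
    `cups` cups contain `of_each` of one preparation: 1 / C(cups, of_each)."""
    return 1 / comb(cups, of_each)

print(tea_pvalue(8, 4))   # 1/70, about 0.014: below Fisher's 0.05 threshold
print(tea_pvalue(6, 3))   # 1/20 = 0.05: does not fall below the threshold
```

This is why Fisher's design used 8 cups rather than 6: only the larger design lets a perfect score count as significant at his chosen level.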

In later editions, Fisher explicitly contrasted the use of the p-value for statistical inference in science with the Neyman–Pearson method, which he terms "Acceptance Procedures".[36] Fisher emphasizes that while fixed levels such as 5%, 2%, and 1% are convenient, the exact p-value can be used, and the strength of evidence can and will be revised with further experimentation. In contrast, decision procedures require a clear-cut decision, yielding an irreversible action, and the procedure is based on costs of error, which, he argues, are inapplicable to scientific research.

## Related indices

The E-value corresponds to the expected number of times in multiple testing that one expects to obtain a test statistic at least as extreme as the one that was actually observed if one assumes that the null hypothesis is true.[37] The E-value is the product of the number of tests and the p-value.
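A sketch of that relationship (the E-value here is simply the product stated above, as in a Bonferroni-style correction):

```python
def e_value(p, n_tests):
    """Expected number of results at least this extreme among n_tests
    tests if every null hypothesis is true: n_tests * p."""
    return p * n_tests

print(e_value(0.0004, 1000))  # 0.4: unlikely to arise even once by chance
print(e_value(0.01, 1000))    # 10: expected about 10 times among 1000 null tests
```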

The q-value is the analog of the p-value with respect to the positive false discovery rate.[38] It is used in multiple hypothesis testing to maintain statistical power while minimizing the false positive rate.[39]

The Probability of Direction (pd) is the Bayesian numerical equivalent of the p-value.[40] It corresponds to the proportion of the posterior distribution that is of the median's sign, typically varying between 50% and 100%, and representing the certainty with which an effect is positive or negative.
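The pd is straightforward to compute from posterior draws. A minimal sketch, using hypothetical posterior samples drawn from a normal distribution for illustration:

```python
import random

random.seed(1)

def probability_of_direction(posterior_samples):
    """Proportion of the posterior sharing the sign of its median;
    varies between 0.5 and 1.0."""
    s = sorted(posterior_samples)
    n = len(s)
    median = (s[n // 2] + s[(n - 1) // 2]) / 2
    positive = median >= 0
    return sum((x >= 0) == positive for x in posterior_samples) / n

# Hypothetical posterior centered at 0.3 with sd 0.2: most mass is positive.
samples = [random.gauss(0.3, 0.2) for _ in range(50_000)]
print(f"pd = {probability_of_direction(samples):.3f}")
```

For this posterior the pd is about 0.93, i.e. roughly 93% certainty that the effect is positive.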

## Notes

1. Italicisation, capitalisation and hyphenation of the term varies. For example, AMA style uses "P value", APA style uses "p value", and the American Statistical Association uses "p-value".[1]
2. The statistical significance of a result does not imply that the result also has real-world relevance. For instance, a medicine might have a statistically significant effect that is too small to be interesting.
3. To be more specific, the p = 0.05 corresponds to about 1.96 standard deviations for a normal distribution (two-tailed test), and 2 standard deviations corresponds to about a 1 in 22 chance of being exceeded by chance, or p ≈ 0.045; Fisher notes these approximations.

## References

1. "ASA House Style". Amstat News. American Statistical Association.
2. "The ASA's Statement on p-Values: Context, Process, and Purpose". The American Statistician 70 (2): 129–133. 7 March 2016. doi:10.1080/00031305.2016.1154108.
3. "Why P Values Are Not a Useful Measure of Evidence in Statistical Significance Testing". Theory & Psychology 18 (1): 69–88. 2008. doi:10.1177/0959354307086923.
4. "A manifesto for reproducible science". Nature Human Behaviour 1: 0021. January 2017. doi:10.1038/s41562-016-0021. PMID 33954258.
5. Casella, George (2002). "Section 8.3.4: p-values". Statistical inference. Roger L. Berger (2 ed.). Australia: Thomson Learning. ISBN 0-534-24312-6. OCLC 46538638.
6. "Median of the p value under the alternative hypothesis". The American Statistician 56 (3): 202–6. 2002. doi:10.1198/000313002146.
7. "The behavior of the P-value when the alternative hypothesis is true". Biometrics 53 (1): 11–22. March 1997. doi:10.2307/2533093. PMID 9147587.
8. "The extent and consequences of p-hacking in science". PLoS Biology 13 (3): e1002106. March 2015. doi:10.1371/journal.pbio.1002106. PMID 25768323.
9. "p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results". Perspectives on Psychological Science 9 (6): 666–681. November 2014. doi:10.1177/1745691614553988. PMID 26186117.
10. "Scientific method: statistical errors". Nature 506 (7487): 150–152. February 2014. doi:10.1038/506150a. PMID 24522584. Bibcode2014Natur.506..150N.
11. "An investigation of the false discovery rate and the misinterpretation of p-values". Royal Society Open Science 1 (3): 140216. November 2014. doi:10.1098/rsos.140216. PMID 26064558. Bibcode2014RSOS....140216C.
12. "Alternatives to P value: confidence interval and effect size". Korean Journal of Anesthesiology 69 (6): 555–562. December 2016. doi:10.4097/kjae.2016.69.6.555. PMID 27924194.
13. "Why the P-value culture is bad and confidence intervals a better alternative". Osteoarthritis and Cartilage 20 (8): 805–808. August 2012. doi:10.1016/j.joca.2012.04.001. PMID 22503814.
14. "Sifting the evidence. Likelihood ratios are alternatives to P values". BMJ 322 (7295): 1184–1185. May 2001. doi:10.1136/bmj.322.7295.1184. PMID 11379590.
15. "The Likelihood Paradigm for Statistical Evidence" (in en). The Nature of Scientific Evidence. 2004. pp. 119–152. doi:10.7208/chicago/9780226789583.003.0005. ISBN 9780226789576.
16. "Hypothesis Testing: From p Values to Bayes Factors". Journal of the American Statistical Association 95 (452): 1316–1320. December 2000. doi:10.2307/2669779.
17. "A Test by Any Other Name: P Values, Bayes Factors, and Statistical Inference". Multivariate Behavioral Research 51 (1): 23–29. 16 February 2016. doi:10.1080/00273171.2015.1099032. PMID 26881954.
18. "In defense of P values". Ecology 95 (3): 611–617. March 2014. doi:10.1890/13-0590.1. PMID 24804441.
19. "An argument for Divine Providence, taken from the constant regularity observed in the births of both sexes". Philosophical Transactions of the Royal Society of London 27 (325–336): 186–190. 1710. doi:10.1098/rstl.1710.0011.
20. "Chapter 3.4: The Sign Test". Practical Nonparametric Statistics (Third ed.). Wiley. 1999. pp. 157–176. ISBN 978-0-471-16068-7.
21. Applied Nonparametric Statistical Methods (Second ed.). Chapman & Hall. 1989. ISBN 978-0-412-44980-2.
22. The History of Statistics: The Measurement of Uncertainty Before 1900. Harvard University Press. 1986. pp. 225–226. ISBN 978-0-67440341-3.
23. "John Arbuthnot". Statisticians of the Centuries. Springer. 2001. pp. 39–42. ISBN 978-0-387-95329-8.
24. "Chapter 4. Chance or Design: Tests of Significance". A History of Mathematical Statistics from 1750 to 1930. Wiley. 1998. pp. 65.
25. The History of Statistics: The Measurement of Uncertainty Before 1900. Harvard University Press. 1986. p. 134. ISBN 978-0-67440341-3.
26. "Confusion Over Measures of Evidence (p′s) Versus Errors (α′s) in Classical Statistical Testing", The American Statistician 57 (3): 171–178 [p. 171], 2003, doi:10.1198/0003130031856
27. Fisher 1925, p. 47, Chapter III. Distributions.
28. Dallal 2012, Note 31: Why P=0.05?.
29. Fisher 1971, II. The Principles of Experimentation, Illustrated by a Psycho-physical Experiment.
30. Fisher 1971, Section 7. The Test of Significance.
31. Fisher 1971, Section 12.1 Scientific Inference and Acceptance Procedures.
32. "Definition of E-value". National Institutes of Health.
33. "The positive false discovery rate: a Bayesian interpretation and the q-value". The Annals of Statistics 31 (6): 2013–2035. 2003. doi:10.1214/aos/1074290335.
34. "Statistical significance for genomewide studies". Proceedings of the National Academy of Sciences of the United States of America 100 (16): 9440–9445. August 2003. doi:10.1073/pnas.1530509100. PMID 12883005. Bibcode2003PNAS..100.9440S.
35. "Indices of Effect Existence and Significance in the Bayesian Framework". Frontiers in Psychology 10: 2767. 10 December 2019. doi:10.3389/fpsyg.2019.02767. PMID 31920819.