Lindley's paradox
Lindley's paradox is a counterintuitive situation in statistics in which the Bayesian and frequentist approaches to a hypothesis testing problem give different results for certain choices of the prior distribution. The problem of the disagreement between the two approaches was discussed in Harold Jeffreys' 1939 textbook;[1] it became known as Lindley's paradox after Dennis Lindley called the disagreement a paradox in a 1957 paper.[2]
Although referred to as a paradox, the differing results from the Bayesian and frequentist approaches can be explained as using them to answer fundamentally different questions, rather than actual disagreement between the two methods.
Nevertheless, for a large class of priors the differences between the frequentist and Bayesian approach are caused by keeping the significance level fixed: as even Lindley recognized, "the theory does not justify the practice of keeping the significance level fixed" and even "some computations by Prof. Pearson in the discussion to that paper emphasized how the significance level would have to change with the sample size, if the losses and prior probabilities were kept fixed."[2] In fact, if the critical value increases with the sample size suitably fast, then the disagreement between the frequentist and Bayesian approaches becomes negligible as the sample size increases.[3]
Description of the paradox
The result [math]\displaystyle{ \textstyle x }[/math] of some experiment has two possible explanations, hypotheses [math]\displaystyle{ \textstyle H_0 }[/math] and [math]\displaystyle{ \textstyle H_1 }[/math], and some prior distribution [math]\displaystyle{ \textstyle \pi }[/math] representing uncertainty as to which hypothesis is more accurate before taking into account [math]\displaystyle{ \textstyle x }[/math].
Lindley's paradox occurs when
- The result [math]\displaystyle{ \textstyle x }[/math] is "significant" by a frequentist test of [math]\displaystyle{ \textstyle H_0 }[/math], indicating sufficient evidence to reject [math]\displaystyle{ \textstyle H_0 }[/math], say, at the 5% level, and
- The posterior probability of [math]\displaystyle{ \textstyle H_0 }[/math] given [math]\displaystyle{ \textstyle x }[/math] is high, indicating strong evidence that [math]\displaystyle{ \textstyle H_0 }[/math] is in better agreement with [math]\displaystyle{ \textstyle x }[/math] than [math]\displaystyle{ \textstyle H_1 }[/math].
These results can occur at the same time when [math]\displaystyle{ \textstyle H_0 }[/math] is very specific, [math]\displaystyle{ \textstyle H_1 }[/math] more diffuse, and the prior distribution does not strongly favor one or the other, as seen below.
Numerical example
The following numerical example illustrates Lindley's paradox. In a certain city 49,581 boys and 48,870 girls have been born over a certain time period. The observed proportion [math]\displaystyle{ \textstyle x }[/math] of male births is thus 49,581/98,451 ≈ 0.5036. We assume the fraction of male births is a binomial variable with parameter [math]\displaystyle{ \textstyle \theta }[/math]. We are interested in testing whether [math]\displaystyle{ \textstyle\theta }[/math] is 0.5 or some other value. That is, our null hypothesis is [math]\displaystyle{ \textstyle H_0: \theta=0.5 }[/math] and the alternative is [math]\displaystyle{ \textstyle H_1: \theta \neq 0.5 }[/math].
Frequentist approach
The frequentist approach to testing [math]\displaystyle{ \textstyle H_0 }[/math] is to compute a p-value, the probability of observing a fraction of boys at least as large as [math]\displaystyle{ \textstyle x }[/math] assuming [math]\displaystyle{ \textstyle H_0 }[/math] is true. Because the number of births is very large, we can use a normal approximation for the number of male births [math]\displaystyle{ \textstyle X \sim N(\mu, \sigma^2) }[/math], with [math]\displaystyle{ \textstyle \mu = n\theta = 98,451 \times 0.5 = 49,225.5 }[/math] and [math]\displaystyle{ \textstyle \sigma^2 = n\theta (1-\theta) = 98,451\times0.5\times0.5 = 24,612.75 }[/math], to compute
- [math]\displaystyle{ \begin{align}P(X \geq 49581 \mid \mu=49225.5) &= \int_{49581}^{98451}\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(u-\mu)^2}{2\sigma^2}}\,du \\ &=\int_{49581}^{98451}\frac{1}{\sqrt{2\pi(24612.75)}}e^{-\frac{(u-49225.5)^2}{2(24612.75)}}\,du \approx 0.0117.\end{align} }[/math]
We would have been equally surprised if we had seen 49,581 female births, i.e. [math]\displaystyle{ \textstyle x\approx 0.4964 }[/math], so a frequentist would usually perform a two-sided test, for which the p-value would be [math]\displaystyle{ \textstyle p \approx 2\times 0.0117 = 0.0235 }[/math]. In both cases, the p-value is lower than the significance level, α, of 5%, so the frequentist approach rejects [math]\displaystyle{ \textstyle H_0 }[/math] as it disagrees with the observed data.
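The one- and two-sided p-values above can be reproduced in a few lines of Python. This is a sketch of the same normal approximation used in the text (not an exact binomial test), relying only on the standard library's complementary error function for the normal tail:

```python
from math import erfc, sqrt

# Normal approximation to the number of male births under H0: theta = 0.5
n, k = 98451, 49581            # total births, observed male births
mu = n * 0.5                   # 49,225.5
sigma = sqrt(n * 0.5 * 0.5)    # sqrt(24,612.75) ~ 156.9

z = (k - mu) / sigma                    # standard deviations above the mean
p_one_sided = 0.5 * erfc(z / sqrt(2))   # upper-tail standard-normal probability
p_two_sided = 2 * p_one_sided

print(f"z = {z:.3f}")                   # z = 2.266
print(f"one-sided p ~ {p_one_sided:.4f}, two-sided p ~ {p_two_sided:.4f}")
```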
Bayesian approach
Assuming no reason to favor one hypothesis over the other, the Bayesian approach would be to assign prior probabilities [math]\displaystyle{ \textstyle \pi(H_0) = \pi(H_1) = 0.5 }[/math] and a uniform distribution to [math]\displaystyle{ \textstyle\theta }[/math] under [math]\displaystyle{ H_1 }[/math], and then to compute the posterior probability of [math]\displaystyle{ \textstyle H_0 }[/math] using Bayes' theorem,
- [math]\displaystyle{ P(H_0 \mid k) = \frac{P(k \mid H_0) \pi(H_0)}{P(k \mid H_0) \pi(H_0) + P(k \mid H_1) \pi(H_1)}. }[/math]
After observing [math]\displaystyle{ \textstyle k = 49,581 }[/math] boys out of [math]\displaystyle{ \textstyle n = 98,451 }[/math] births, we can compute the posterior probability of each hypothesis using the probability mass function for a binomial variable,
- [math]\displaystyle{ \begin{align} P(k \mid H_0) & = {n\choose k}(0.5)^k(1-0.5)^{n-k} \approx 1.95 \times 10^{-4} \\ P(k \mid H_1) & = \int_0^1 {n\choose k}\theta^k (1-\theta)^{n-k} d\theta = {n\choose k} \mathrm{\Beta}(k + 1, n - k + 1) = 1 / (n + 1) \approx 1.02 \times 10^{-5} \end{align} }[/math]
where [math]\displaystyle{ \textstyle \mathrm{\Beta}(a,b) }[/math] is the Beta function.
From these values, we find the posterior probability [math]\displaystyle{ P(H_0 \mid k) \approx 0.95 }[/math], which strongly favors [math]\displaystyle{ \textstyle H_0 }[/math] over [math]\displaystyle{ \textstyle H_1 }[/math].
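The two likelihoods and the resulting posterior can be checked numerically. The sketch below evaluates the binomial coefficient in log space with `lgamma` (an implementation choice to avoid enormous integers, not part of the original derivation):

```python
from math import lgamma, log, exp

n, k = 98451, 49581

# log of the binomial coefficient C(n, k), via log-gamma to avoid huge integers
log_nck = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

# Likelihood of the data under H0: theta = 0.5
p_k_H0 = exp(log_nck + n * log(0.5))

# Marginal likelihood under H1 with a uniform prior on theta:
# C(n, k) * B(k + 1, n - k + 1) = 1 / (n + 1)
p_k_H1 = 1 / (n + 1)

# Equal prior probabilities pi(H0) = pi(H1) = 0.5 cancel in the ratio
posterior_H0 = p_k_H0 / (p_k_H0 + p_k_H1)

print(f"P(k|H0) ~ {p_k_H0:.3g}, P(k|H1) ~ {p_k_H1:.3g}")
print(f"P(H0|k) ~ {posterior_H0:.2f}")   # ~ 0.95
```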
The two approaches—the Bayesian and the frequentist—appear to be in conflict, and this is the "paradox".
Reconciling the Bayesian and frequentist approaches
Almost sure hypothesis testing
Naaman[3] proposed an adaptation of the significance level to the sample size in order to control false positives: αn = n−r with r > 1/2. At least in the numerical example, taking the boundary case r = 1/2 results in a significance level of 0.00318, so the frequentist would not reject the null hypothesis, which is in agreement with the Bayesian approach.
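A quick check of this sample-size-dependent threshold against the two-sided p-value from the numerical example:

```python
n = 98451

# Sample-size-dependent significance level alpha_n = n**(-r); r = 1/2 is the
# boundary case used in the numerical example
alpha_n = n ** -0.5
print(f"alpha_n ~ {alpha_n:.5f}")   # ~ 0.00319

p_two_sided = 0.0235   # two-sided p-value from the frequentist analysis above
print("reject H0" if p_two_sided < alpha_n else "do not reject H0")
```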
Uninformative priors
If we use an uninformative prior and test a hypothesis more similar to that in the frequentist approach, the paradox disappears.
For example, if we calculate the posterior distribution [math]\displaystyle{ \textstyle P(\theta \mid x, n) }[/math], using a uniform prior distribution on [math]\displaystyle{ \textstyle \theta }[/math] (i.e. [math]\displaystyle{ \textstyle \pi(\theta \in [0,1]) = 1 }[/math]), we find
- [math]\displaystyle{ \theta \mid k, n \sim \mathrm{Beta}(k + 1,\, n - k + 1). }[/math]
If we use this to check the probability that a newborn is more likely to be a boy than a girl, i.e. [math]\displaystyle{ P(\theta \gt 0.5 \mid k, n) }[/math], we find
- [math]\displaystyle{ P(\theta \gt 0.5 \mid k, n) = \int_{0.5}^1 \frac{\theta^{49581}(1-\theta)^{48870}}{\mathrm{\Beta}(49582,\, 48871)}\, d\theta \approx 0.988. }[/math]
In other words, it is very likely that the proportion of male births is above 0.5.
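This tail probability can be checked by integrating the Beta posterior density numerically. The trapezoidal grid and its truncation point below are illustrative choices, not part of the original analysis:

```python
from math import lgamma, log, exp

n, k = 98451, 49581
a, b = k + 1, n - k + 1   # parameters of the Beta posterior under a uniform prior

log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)   # log of 1 / B(a, b)

def beta_pdf(t):
    """Beta(a, b) density, evaluated in log space for numerical stability."""
    return exp(log_norm + (a - 1) * log(t) + (b - 1) * log(1 - t))

# Trapezoidal rule over (0.5, 0.52]; the posterior density is negligible above
# 0.52, which lies more than ten posterior standard deviations above the
# posterior mean (~0.5036)
lo, hi, steps = 0.5, 0.52, 20000
h = (hi - lo) / steps
prob = h * (0.5 * beta_pdf(lo) + 0.5 * beta_pdf(hi)
            + sum(beta_pdf(lo + i * h) for i in range(1, steps)))
print(f"P(theta > 0.5 | k, n) ~ {prob:.3f}")
```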
Neither analysis directly gives an estimate of the effect size, but both could be used to determine, for instance, whether the fraction of boy births is likely to be above some particular threshold.
The lack of an actual paradox
The apparent disagreement between the two approaches is caused by a combination of factors. First, the frequentist approach above tests [math]\displaystyle{ \textstyle H_0 }[/math] without reference to [math]\displaystyle{ \textstyle H_1 }[/math]. The Bayesian approach evaluates [math]\displaystyle{ \textstyle H_0 }[/math] as an alternative to [math]\displaystyle{ \textstyle H_1 }[/math], and finds the first to be in better agreement with the observations. This is because the latter hypothesis is much more diffuse, as [math]\displaystyle{ \textstyle \theta }[/math] can be anywhere in [math]\displaystyle{ \textstyle [0, 1] }[/math], which results in it having a very low posterior probability. To understand why, it is helpful to consider the two hypotheses as generators of the observations:
- Under [math]\displaystyle{ \textstyle H_0 }[/math], we choose [math]\displaystyle{ \textstyle \theta\approx0.500 }[/math], and ask how likely it is to see 49,581 boys in 98,451 births.
- Under [math]\displaystyle{ \textstyle H_1 }[/math], we choose [math]\displaystyle{ \textstyle \theta }[/math] randomly from anywhere within 0 to 1, and ask the same question.
Most of the possible values for [math]\displaystyle{ \textstyle \theta }[/math] under [math]\displaystyle{ \textstyle H_1 }[/math] are very poorly supported by the observations. In essence, the apparent disagreement between the methods is not a disagreement at all, but rather two different statements about how the hypotheses relate to the data:
- The frequentist finds that [math]\displaystyle{ \textstyle H_0 }[/math] is a poor explanation for the observation.
- The Bayesian finds that [math]\displaystyle{ \textstyle H_0 }[/math] is a far better explanation for the observation than [math]\displaystyle{ \textstyle H_1 }[/math].
According to the frequentist test, a 50/50 male/female ratio of newborns is improbable. Yet 50/50 is a better approximation than most, though not all, other ratios. The hypothesis [math]\displaystyle{ \textstyle \theta \approx 0.504 }[/math] would have fit the observation much better than almost all other ratios, including [math]\displaystyle{ \textstyle\theta \approx 0.500 }[/math].
For example, this choice of hypotheses and prior probabilities implies the statement: "if [math]\displaystyle{ \textstyle \theta }[/math] > 0.49 and [math]\displaystyle{ \textstyle \theta }[/math] < 0.51, then the prior probability of [math]\displaystyle{ \theta }[/math] being exactly 0.5 is 0.50/0.51 [math]\displaystyle{ \approx }[/math] 98%." Given such a strong preference for [math]\displaystyle{ \theta=0.5 }[/math], it is easy to see why the Bayesian approach favors [math]\displaystyle{ H_0 }[/math] in the face of [math]\displaystyle{ x\approx 0.5036 }[/math], even though the observed value of [math]\displaystyle{ x }[/math] lies [math]\displaystyle{ 2.28\sigma }[/math] away from 0.5. The deviation of over 2 sigma from [math]\displaystyle{ H_0 }[/math] is considered significant in the frequentist approach, but its significance is overruled by the prior in the Bayesian approach.
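The conditional prior probability quoted above is a one-line calculation. The interval (0.49, 0.51) is the illustrative choice from the text:

```python
# Under the mixed prior, the mass in the interval (0.49, 0.51) is a point
# mass of 0.5 at theta = 0.5 plus the uniform H1 component's contribution
point_mass = 0.5
diffuse_mass = 0.5 * (0.51 - 0.49)   # = 0.01

conditional = point_mass / (point_mass + diffuse_mass)
print(f"{conditional:.3f}")   # 0.5 / 0.51 ~ 0.980
```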
Looking at it another way, the prior distribution used here is essentially a flat density combined with a delta function at [math]\displaystyle{ \textstyle \theta = 0.5 }[/math]. This is a questionable choice: since [math]\displaystyle{ \textstyle \theta }[/math] is a continuous parameter, it is arguably more natural to assign zero prior probability to any single exact value, i.e., to assume [math]\displaystyle{ P(\theta = 0.5) = 0 }[/math].
A more realistic distribution for [math]\displaystyle{ \textstyle \theta }[/math] in the alternative hypothesis produces a less surprising result for the posterior of [math]\displaystyle{ \textstyle H_0 }[/math]. For example, if we replace [math]\displaystyle{ \textstyle H_1 }[/math] with [math]\displaystyle{ \textstyle H_2: \theta = x }[/math], i.e., the maximum likelihood estimate for [math]\displaystyle{ \textstyle \theta }[/math], the posterior probability of [math]\displaystyle{ \textstyle H_0 }[/math] would be only 0.07, compared to 0.93 for [math]\displaystyle{ \textstyle H_2 }[/math] (of course, one cannot actually use the MLE, which depends on the observed data, as part of a prior distribution).
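The comparison with the data-dependent alternative [math]\displaystyle{ \textstyle H_2 }[/math] can be reproduced as follows; as noted, conditioning the alternative on the MLE is illustrative only:

```python
from math import lgamma, log, exp

n, k = 98451, 49581
x = k / n   # observed fraction of male births, also the MLE of theta

log_nck = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def binom_pmf(theta):
    """Binomial pmf of k successes in n trials, evaluated in log space."""
    return exp(log_nck + k * log(theta) + (n - k) * log(1 - theta))

p_k_H0 = binom_pmf(0.5)   # likelihood under H0
p_k_H2 = binom_pmf(x)     # likelihood under H2 (data-dependent, illustrative only)

posterior_H0 = p_k_H0 / (p_k_H0 + p_k_H2)   # equal prior weights
print(f"P(H0|k) ~ {posterior_H0:.2f}")      # ~ 0.07
```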
Recent discussion
The paradox continues to be a source of active discussion.[3][4][5][6]
Notes
- ↑ Jeffreys, Harold (1939). Theory of Probability. Oxford University Press.
- ↑ Lindley, D. V. (1957). "A statistical paradox". Biometrika 44 (1–2): 187–192. doi:10.1093/biomet/44.1-2.187.
- ↑ Naaman, Michael (2016-01-01). "Almost sure hypothesis testing and a resolution of the Jeffreys-Lindley paradox". Electronic Journal of Statistics 10 (1): 1526–1550. doi:10.1214/16-EJS1146. ISSN 1935-7524. http://projecteuclid.org/euclid.ejs/1464710240.
- ↑ Spanos, Aris (2013). "Who should be afraid of the Jeffreys-Lindley paradox?". Philosophy of Science 80 (1): 73–93. doi:10.1086/668875.
- ↑ Sprenger, Jan (2013). "Testing a precise null hypothesis: The case of Lindley's paradox". Philosophy of Science 80 (5): 733–744. doi:10.1086/673730. http://philsci-archive.pitt.edu/9419/1/LindleyPSA.pdf.
- ↑ Robert, Christian P. (2014). "On the Jeffreys-Lindley paradox". Philosophy of Science 81 (2): 216–232. doi:10.1086/675729.
Further reading
- Shafer, Glenn (1982). "Lindley's paradox". Journal of the American Statistical Association 77 (378): 325–334. doi:10.2307/2287244.
Original source: https://en.wikipedia.org/wiki/Lindley's paradox.