Neyman–Pearson lemma
In statistics, the Neyman–Pearson lemma describes the existence and uniqueness of the likelihood-ratio test as a uniformly most powerful test in certain contexts. It was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933.[1] The Neyman–Pearson lemma is part of the Neyman–Pearson theory of statistical testing, which introduced concepts such as errors of the second kind, the power function, and inductive behavior.[2][3][4] The earlier Fisherian theory of significance testing postulated only one hypothesis. By introducing a competing hypothesis, the Neyman–Pearsonian approach to statistical testing makes it possible to investigate both types of error.
The trivial tests that always reject or always accept the null hypothesis are of little interest in themselves, but they show that one must not relinquish control over one type of error while calibrating the other. Neyman and Pearson accordingly restricted their attention to the class of all level [math]\displaystyle{ \alpha }[/math] tests and, within that class, minimized the type II error, traditionally denoted by [math]\displaystyle{ \beta }[/math]. Their seminal paper of 1933, including the Neyman–Pearson lemma, comes at the end of this endeavor: it not only shows the existence of tests with the greatest power that retain a prespecified level of type I error ([math]\displaystyle{ \alpha }[/math]), but also provides a way to construct such tests. The Karlin–Rubin theorem extends the Neyman–Pearson lemma to settings involving composite hypotheses with monotone likelihood ratios.
Statement
Consider a test with hypotheses [math]\displaystyle{ H_0: \theta = \theta_0 }[/math] and [math]\displaystyle{ H_1:\theta=\theta_1 }[/math], where the probability density function (or probability mass function) is [math]\displaystyle{ \rho(x\mid \theta_i) }[/math] for [math]\displaystyle{ i=0,1 }[/math].
For any hypothesis test with rejection set [math]\displaystyle{ R }[/math], and any [math]\displaystyle{ \alpha\in [0, 1] }[/math], we say that it satisfies condition [math]\displaystyle{ P_\alpha }[/math] if
- [math]\displaystyle{ \alpha = Pr_{\theta_0}(X\in R) }[/math]
- That is, the test has size [math]\displaystyle{ \alpha }[/math]: the probability of falsely rejecting the null hypothesis is exactly [math]\displaystyle{ \alpha }[/math].
- [math]\displaystyle{ \exists \eta \geq 0 }[/math] such that
[math]\displaystyle{ \begin{align} x\in& R\setminus A\implies \rho(x\mid \theta_1) \gt \eta \rho(x\mid \theta_0) \\ x\in& R^c\setminus A \implies \rho(x\mid\theta_1) \lt \eta \rho(x\mid \theta_0) \end{align} }[/math]
where [math]\displaystyle{ A }[/math] is a set that is negligible under both hypotheses: [math]\displaystyle{ Pr_{\theta_0}(X\in A) = Pr_{\theta_1}(X\in A) = 0 }[/math].
- That is, we have a strict likelihood ratio test, except on a negligible subset.
For any [math]\displaystyle{ \alpha\in [0, 1] }[/math], let the set of level [math]\displaystyle{ \alpha }[/math] tests be the set of all hypothesis tests with size at most [math]\displaystyle{ \alpha }[/math]. That is, a test with rejection set [math]\displaystyle{ R }[/math] is a level [math]\displaystyle{ \alpha }[/math] test if [math]\displaystyle{ Pr_{\theta_0}(X\in R)\leq \alpha }[/math].
Neyman–Pearson lemma[5] — Existence:
If a hypothesis test satisfies the [math]\displaystyle{ P_\alpha }[/math] condition, then it is a uniformly most powerful (UMP) test in the set of level [math]\displaystyle{ \alpha }[/math] tests.
Uniqueness: If there exists a hypothesis test [math]\displaystyle{ R_{NP} }[/math] that satisfies the [math]\displaystyle{ P_\alpha }[/math] condition with [math]\displaystyle{ \eta \gt 0 }[/math], then every UMP test [math]\displaystyle{ R }[/math] in the set of level [math]\displaystyle{ \alpha }[/math] tests satisfies the [math]\displaystyle{ P_\alpha }[/math] condition with the same [math]\displaystyle{ \eta }[/math].
Further, the [math]\displaystyle{ R_{NP} }[/math] test and the [math]\displaystyle{ R }[/math] test agree with probability [math]\displaystyle{ 1 }[/math] whether [math]\displaystyle{ \theta = \theta_0 }[/math] or [math]\displaystyle{ \theta = \theta_1 }[/math].
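As a concrete reading of the [math]\displaystyle{ P_\alpha }[/math] condition, the sketch below (in Python; the function names and the Monte Carlo calibration of [math]\displaystyle{ \eta }[/math] are illustrative assumptions, not part of the lemma) rejects exactly when the likelihood ratio exceeds [math]\displaystyle{ \eta }[/math], with [math]\displaystyle{ \eta }[/math] chosen so that the size is approximately [math]\displaystyle{ \alpha }[/math]. It assumes continuous distributions, so the likelihood ratio has no atoms and no randomization is needed.

```python
import numpy as np
from scipy.stats import norm

def np_reject(x, pdf0, pdf1, eta):
    # Reject H0 exactly when rho(x | theta_1) > eta * rho(x | theta_0),
    # i.e. when the likelihood ratio exceeds the threshold eta.
    return pdf1(x) > eta * pdf0(x)

def calibrate_eta(null_draws, pdf0, pdf1, alpha):
    # Estimate eta as the upper-alpha quantile of the likelihood ratio
    # under H0, from draws simulated under rho(. | theta_0), so that
    # Pr_{theta_0}(reject) is approximately alpha.
    ratios = pdf1(null_draws) / pdf0(null_draws)
    return np.quantile(ratios, 1 - alpha)

# Example: simple hypotheses H0: N(0, 1) versus H1: N(1, 1).
rng = np.random.default_rng(0)
eta = calibrate_eta(rng.standard_normal(100_000),
                    norm(0, 1).pdf, norm(1, 1).pdf, alpha=0.05)
print(np_reject(np.array([0.3, 1.2, 2.5]), norm(0, 1).pdf, norm(1, 1).pdf, eta))
```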
In practice, the likelihood ratio is often used directly to construct tests — see likelihood-ratio test. However, it can also be used to suggest particular test statistics that might be of interest or to suggest simplified tests — for this, one considers algebraic manipulation of the ratio to see if it involves key statistics related to the size of the ratio (i.e. whether a large statistic corresponds to a small ratio or to a large one).
Proof
Given any hypothesis test with rejection set [math]\displaystyle{ R }[/math], define its statistical power function [math]\displaystyle{ \beta_R(\theta) = Pr_\theta(X\in R) }[/math].
Existence:
Given some hypothesis test that satisfies the [math]\displaystyle{ P_\alpha }[/math] condition, call its rejection region [math]\displaystyle{ R_{NP} }[/math] (where NP stands for Neyman–Pearson).
For any level [math]\displaystyle{ \alpha }[/math] hypothesis test with rejection region [math]\displaystyle{ R }[/math], we have [math]\displaystyle{ [1_{R_{NP}}(x) - 1_R(x)][\rho(x|\theta_1) - \eta \rho(x|\theta_0)] \geq 0 }[/math] except on some ignorable set [math]\displaystyle{ A }[/math]: outside [math]\displaystyle{ A }[/math], the second factor is positive precisely on [math]\displaystyle{ R_{NP} }[/math], where the first factor is nonnegative, and negative precisely on [math]\displaystyle{ R_{NP}^c }[/math], where the first factor is nonpositive.
Integrating over [math]\displaystyle{ x }[/math] gives [math]\displaystyle{ 0 \leq \int [1_{R_{NP}}(x) - 1_R(x)][\rho(x|\theta_1) - \eta \rho(x|\theta_0)] \,dx = [\beta_{R_{NP}}(\theta_1) - \beta_R(\theta_1)] - \eta[\beta_{R_{NP}}(\theta_0) - \beta_R(\theta_0)] }[/math], since [math]\displaystyle{ \int 1_R(x) \rho(x|\theta_i) \,dx = \beta_R(\theta_i) }[/math].
Since [math]\displaystyle{ \beta_{R_{NP}}(\theta_0) = \alpha }[/math] and [math]\displaystyle{ \beta_R(\theta_0) \leq \alpha }[/math], we find that [math]\displaystyle{ \beta_{R_{NP}}(\theta_1) \geq \beta_R(\theta_1) }[/math].
Thus the [math]\displaystyle{ R_{NP} }[/math] rejection test is a UMP test in the set of level [math]\displaystyle{ \alpha }[/math] tests.
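The Existence claim can also be checked numerically. The following hypothetical experiment (ours, not part of the proof) compares, in the simple normal-mean setting where both thresholds are available in closed form, the power of the likelihood-ratio test with that of another exact size-[math]\displaystyle{ \alpha }[/math] test that ignores most of the sample; the Neyman–Pearson test should dominate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha = 10, 0.05
mu0, mu1, sigma = 0.0, 1.0, 1.0   # simple null and simple alternative

# For H0: mu = mu0 vs H1: mu = mu1 > mu0 the likelihood ratio is
# increasing in the sample mean, so the NP test rejects for a large mean.
c_np = mu0 + stats.norm.ppf(1 - alpha) * sigma / np.sqrt(n)

# A competing exact size-alpha test: reject when the first
# observation alone exceeds its own normal critical value.
c_first = mu0 + stats.norm.ppf(1 - alpha) * sigma

data = rng.normal(mu1, sigma, size=(100_000, n))   # samples under H1
print(np.mean(data.mean(axis=1) > c_np),   # power of the NP test, ~0.94
      np.mean(data[:, 0] > c_first))       # power of the rival test, ~0.26
```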
Uniqueness:
For any other UMP level [math]\displaystyle{ \alpha }[/math] test with rejection region [math]\displaystyle{ R }[/math], we have, from the Existence part, [math]\displaystyle{ [\beta_{R_{NP}}(\theta_1) - \beta_R(\theta_1)] \geq \eta[\beta_{R_{NP}}(\theta_0) - \beta_R(\theta_0)] }[/math].
Since both tests are UMP, their powers at [math]\displaystyle{ \theta_1 }[/math] coincide, so the left side is zero. Since [math]\displaystyle{ \eta \gt 0 }[/math], the right side then forces [math]\displaystyle{ \beta_R(\theta_0) \geq \beta_{R_{NP}}(\theta_0) = \alpha }[/math]; combined with the level condition [math]\displaystyle{ \beta_R(\theta_0) \leq \alpha }[/math], this gives [math]\displaystyle{ \beta_R(\theta_0) = \beta_{R_{NP}}(\theta_0) = \alpha }[/math], so the [math]\displaystyle{ R }[/math] test has size [math]\displaystyle{ \alpha }[/math].
Since the integrand [math]\displaystyle{ [1_{R_{NP}}(x) - 1_R(x)][\rho(x|\theta_1) - \eta \rho(x|\theta_0)] }[/math] is nonnegative outside an ignorable set and integrates to zero, it must vanish except on some ignorable set [math]\displaystyle{ A }[/math].
Since the [math]\displaystyle{ R_{NP} }[/math] test satisfies the [math]\displaystyle{ P_\alpha }[/math] condition, let the ignorable set in the definition of the [math]\displaystyle{ P_\alpha }[/math] condition be [math]\displaystyle{ A_{NP} }[/math].
[math]\displaystyle{ R\setminus (R_{NP}\cup A_{NP}) }[/math] is ignorable, since for all [math]\displaystyle{ x\in R\setminus (R_{NP}\cup A_{NP}) }[/math] we have [math]\displaystyle{ [1_{R_{NP}}(x) - 1_R(x)][\rho(x|\theta_1) - \eta \rho(x|\theta_0)] = \eta \rho(x|\theta_0)-\rho(x|\theta_1)\gt 0 }[/math], and by the previous step the integrand vanishes outside the ignorable set [math]\displaystyle{ A }[/math].
Similarly, [math]\displaystyle{ R_{NP}\setminus (R\cup A_{NP}) }[/math] is ignorable.
Define [math]\displaystyle{ A_R := (R\Delta R_{NP})\cup A_{NP} }[/math] (where [math]\displaystyle{ \Delta }[/math] means symmetric difference). It is the union of three ignorable sets, thus it is an ignorable set.
Then we have [math]\displaystyle{ x\in R\setminus A_R\implies \rho(x|\theta_1) \gt \eta \rho(x | \theta_0) }[/math] and [math]\displaystyle{ x\in R^c\setminus A_R \implies \rho(x|\theta_1) \lt \eta \rho(x | \theta_0) }[/math]. So the [math]\displaystyle{ R }[/math] rejection test satisfies the [math]\displaystyle{ P_\alpha }[/math] condition with the same [math]\displaystyle{ \eta }[/math].
Since [math]\displaystyle{ A_R }[/math] is ignorable, its subset [math]\displaystyle{ R \Delta R_{NP}\subset A_R }[/math] is also ignorable. Consequently, the two tests agree with probability [math]\displaystyle{ 1 }[/math] whether [math]\displaystyle{ \theta = \theta_0 }[/math] or [math]\displaystyle{ \theta = \theta_1 }[/math].
Example
Let [math]\displaystyle{ X_1,\dots,X_n }[/math] be a random sample from the [math]\displaystyle{ \mathcal{N}(\mu,\sigma^2) }[/math] distribution, where the mean [math]\displaystyle{ \mu }[/math] is known, and suppose that we wish to test [math]\displaystyle{ H_0:\sigma^2=\sigma_0^2 }[/math] against [math]\displaystyle{ H_1:\sigma^2=\sigma_1^2 }[/math]. The likelihood for this set of normally distributed data is
- [math]\displaystyle{ \mathcal{L}\left(\sigma^2\mid\mathbf{x}\right)\propto \left(\sigma^2\right)^{-n/2} \exp\left\{-\frac{\sum_{i=1}^n (x_i-\mu)^2}{2\sigma^2}\right\}. }[/math]
We can compute the likelihood ratio to find the key statistic in this test and its effect on the test's outcome:
- [math]\displaystyle{ \Lambda(\mathbf{x}) = \frac{\mathcal{L}\left({\sigma_0}^2\mid\mathbf{x}\right)}{\mathcal{L}\left({\sigma_1}^2\mid\mathbf{x}\right)} = \left(\frac{\sigma_0^2}{\sigma_1^2}\right)^{-n/2} \exp\left\{-\frac{1}{2}(\sigma_0^{-2} -\sigma_1^{-2})\sum_{i=1}^n (x_i-\mu)^2\right\}. }[/math]
This ratio depends on the data only through [math]\displaystyle{ \sum_{i=1}^n (x_i-\mu)^2 }[/math]. Therefore, by the Neyman–Pearson lemma, the most powerful test of this pair of hypotheses for these data will depend only on [math]\displaystyle{ \sum_{i=1}^n (x_i-\mu)^2 }[/math]. Also, by inspection, if [math]\displaystyle{ \sigma_1^2\gt \sigma_0^2 }[/math], then [math]\displaystyle{ \Lambda(\mathbf{x}) }[/math] is a decreasing function of [math]\displaystyle{ \sum_{i=1}^n (x_i-\mu)^2 }[/math]. So we should reject [math]\displaystyle{ H_0 }[/math] if [math]\displaystyle{ \sum_{i=1}^n (x_i-\mu)^2 }[/math] is sufficiently large, with the rejection threshold determined by the size of the test. In this example, the test statistic is a scaled chi-squared random variable, and an exact critical value can be obtained.
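A minimal sketch of this test in Python (the function name and the default level [math]\displaystyle{ \alpha = 0.05 }[/math] are our assumptions): under [math]\displaystyle{ H_0 }[/math], [math]\displaystyle{ \sum_{i=1}^n (x_i-\mu)^2/\sigma_0^2 }[/math] follows a chi-squared distribution with [math]\displaystyle{ n }[/math] degrees of freedom, which gives the exact critical value.

```python
import numpy as np
from scipy import stats

def np_variance_test(x, mu, sigma0_sq, alpha=0.05):
    # Most powerful size-alpha test of H0: sigma^2 = sigma0^2 against
    # H1: sigma^2 = sigma1^2 with sigma1^2 > sigma0^2; by the lemma it
    # rejects when sum((x_i - mu)^2) is large.
    x = np.asarray(x)
    t = np.sum((x - mu) ** 2)
    # Under H0, t / sigma0_sq is chi-squared with n = len(x) degrees of
    # freedom, so the exact critical value is the scaled 1 - alpha quantile.
    critical = sigma0_sq * stats.chi2.ppf(1 - alpha, df=len(x))
    return t > critical
```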
Application in economics
A variant of the Neyman–Pearson lemma has found an application in the seemingly unrelated domain of the economics of land value. One of the fundamental problems in consumer theory is calculating the demand function of the consumer given the prices. In particular, given a heterogeneous land estate, a price measure over the land, and a subjective utility measure over the land, the consumer's problem is to find the best land parcel that they can buy – i.e. the land parcel with the largest utility whose price is at most their budget. It turns out that this problem is very similar to the problem of finding the most powerful statistical test, and so the Neyman–Pearson lemma can be used.[6]
Uses in electrical engineering
The Neyman–Pearson lemma is quite useful in electrical engineering, namely in the design and use of radar systems, digital communication systems, and signal processing systems. In radar systems, the lemma is used in first setting the rate of missed detections to a desired (low) level and then minimizing the rate of false alarms, or vice versa. Neither error rate can be set to an arbitrarily low level, including zero, without degrading the other. The same considerations apply to many systems in signal processing.
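As an illustration (the signal model and names below are our assumptions, not drawn from a specific radar text), consider detecting a known signal in white Gaussian noise with the false-alarm probability fixed in advance; the likelihood-ratio test reduces to thresholding the matched-filter output, and by the lemma, the resulting detection probability is the largest achievable at that false-alarm rate.

```python
import numpy as np
from scipy import stats

def np_detect(y, s, noise_var, p_fa):
    # H0: y is pure noise; H1: y = s + noise, for a known signal s in
    # zero-mean white Gaussian noise of variance noise_var per sample.
    # The likelihood ratio is monotone in the matched-filter statistic.
    t = np.dot(y, s)
    # Under H0, t ~ N(0, noise_var * ||s||^2); the threshold fixes the
    # false-alarm probability at p_fa.
    tau = np.sqrt(noise_var * np.dot(s, s)) * stats.norm.ppf(1 - p_fa)
    return t > tau
```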
Uses in particle physics
The Neyman–Pearson lemma is applied to the construction of analysis-specific likelihood ratios, used e.g. to test for signatures of new physics against the nominal Standard Model prediction in proton–proton collision datasets collected at the LHC.[7]
Discovery of the lemma
Neyman wrote about the discovery of the lemma as follows.[8] Paragraph breaks have been inserted.
I can point to the particular moment when I understood how to formulate the undogmatic problem of the most powerful test of a simple statistical hypothesis against a fixed simple alternative. At the present time [probably 1968], the problem appears entirely trivial and within easy reach of a beginning undergraduate. But, with a degree of embarrassment, I must confess that it took something like half a decade of combined effort of E. S. P. [Egon Pearson] and myself to put things straight.
The solution of the particular question mentioned came on an evening when I was sitting alone in my room at the Statistical Laboratory of the School of Agriculture in Warsaw, thinking hard on something that should have been obvious long before. The building was locked up and, at about 8 p.m., I heard voices outside calling me. This was my wife, with some friends, telling me that it was time to go to a movie.
My first reaction was that of annoyance. And then, as I got up from my desk to answer the call, I suddenly understood: for any given critical region and for any given alternative hypothesis, it is possible to calculate the probability of the error of the second kind; it is represented by this particular integral. Once this is done, the optimal critical region would be the one which minimizes this same integral, subject to the side condition concerned with the probability of the error of the first kind. We are faced with a particular problem of the calculus of variation, probably a simple problem.
These thoughts came in a flash, before I reached the window to signal to my wife. The incident is clear in my memory, but I have no recollections about the movie we saw. It may have been Buster Keaton.
References
- ↑ Neyman, J.; Pearson, E. S. (1933-02-16). "IX. On the problem of the most efficient tests of statistical hypotheses" (in en). Phil. Trans. R. Soc. Lond. A 231 (694–706): 289–337. doi:10.1098/rsta.1933.0009. ISSN 0264-3952. Bibcode: 1933RSPTA.231..289N.
- ↑ Lehmann, E. L. (1993). "The Fisher, Neyman–Pearson Theories of Testing Hypotheses: One Theory or Two?". Journal of the American Statistical Association 88 (424).
- ↑ Wald, A. "Chapter II: The Neyman–Pearson Theory of Testing a Statistical Hypothesis".
- ↑ Gigerenzer, Gerd, et al. (1989). The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge University Press.
- ↑ Casella, George (2002). Statistical inference. Roger L. Berger (2 ed.). Australia: Thomson Learning. pp. 388, Theorem 8.3.12. ISBN 0-534-24312-6. OCLC 46538638. https://www.worldcat.org/oclc/46538638.
- ↑ Berliant, M. (1984). "A characterization of the demand for land". Journal of Economic Theory 33 (2): 289–300. doi:10.1016/0022-0531(84)90091-7.
- ↑ van Dyk, David A. (2014). "The Role of Statistics in the Discovery of a Higgs Boson". Annual Review of Statistics and Its Application 1 (1): 41–59. doi:10.1146/annurev-statistics-062713-085841. Bibcode: 2014AnRSA...1...41V.
- ↑ Neyman, J. (1970). A glance at some of my personal experiences in the process of research. In Scientists at Work: Festschrift in honour of Herman Wold. Edited by T. Dalenius, G. Karlsson, S. Malmquist. Almqvist & Wiksell, Stockholm. https://worldcat.org/en/title/195948
- E. L. Lehmann, Joseph P. Romano, Testing statistical hypotheses, Springer, 2008, p. 60
External links
- Cosma Shalizi gives an intuitive derivation of the Neyman–Pearson Lemma using ideas from economics
- cnx.org: Neyman–Pearson criterion