Neyman-Pearson diagram

A diagram (also called a decision quality diagram) used in optimizing decision strategies with a single test statistic. The assumption is that samples of events, or probability density functions, are available for both signal (authentic) and background (imposter) events; a test statistic is then sought which optimally distinguishes between the two. Using a given test statistic (or discriminant function), one introduces a cut which separates an acceptance region (dominated by signal events) from a rejection region (dominated by background events). The Neyman-Pearson diagram plots the contamination (misclassified background events, i.e. background classified as signal) against the losses (misclassified signal events, i.e. signal classified as background), both as fractions of the total sample.
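As an illustration, the curve can be traced by scanning cut values over samples of a one-dimensional test statistic. The following is a minimal sketch in Python; the Gaussian toy samples and the convention that larger values of the statistic are more signal-like are assumptions made here for illustration, not part of the definition above.

```python
import numpy as np

def np_curve(signal, background, n_cuts=200):
    """Scan cuts on a 1-D test statistic and return (cuts, losses, contamination).

    Assumes larger values are more signal-like, so the acceptance
    region is t >= cut.
    """
    cuts = np.linspace(min(signal.min(), background.min()),
                       max(signal.max(), background.max()), n_cuts)
    # Losses: fraction of signal falling in the rejection region.
    losses = np.array([(signal < c).mean() for c in cuts])
    # Contamination: fraction of background falling in the acceptance region.
    contamination = np.array([(background >= c).mean() for c in cuts])
    return cuts, losses, contamination

# Toy samples: overlapping Gaussians standing in for signal and background.
rng = np.random.default_rng(0)
signal = rng.normal(1.0, 1.0, 10_000)
background = rng.normal(-1.0, 1.0, 10_000)
cuts, losses, contamination = np_curve(signal, background)
```

Plotting `contamination` against `losses` for all cut values yields the Neyman-Pearson diagram for this test statistic.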

An ideal test statistic makes the curve pass close to the point where both losses and contamination are zero, i.e. where the acceptance is one for signal and zero for background (see figure). Different decision strategies choose different points of closest approach: a "liberal" strategy favours minimal losses (i.e. high acceptance of signal), a "conservative" one favours minimal contamination (i.e. high purity of the accepted signal).
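A point of closest approach can be picked directly from the scanned curve. The helper below continues the sketch above; the weight `w` is a hypothetical knob standing in for the strategy choice, where `w > 1` leans conservative (penalizing contamination) and `w < 1` leans liberal (penalizing losses).

```python
def pick_cut(cuts, losses, contamination, w=1.0):
    """Return the cut whose (losses, w*contamination) point lies
    closest to the ideal corner (0, 0)."""
    distance = np.hypot(losses, w * contamination)
    best = np.argmin(distance)
    return cuts[best], losses[best], contamination[best]

liberal = pick_cut(cuts, losses, contamination, w=0.5)
conservative = pick_cut(cuts, losses, contamination, w=2.0)
```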

For a given test (fixed cut parameter), the fraction of losses, i.e. the probability of rejecting signal events (the complement of the signal acceptance), is also called the significance or the cost of the test; the complement of the contamination, i.e. the probability of rejecting background events, is the power of the test. The purity of the accepted sample depends, in addition, on the relative abundance of signal and background events.
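In symbols, taking the signal hypothesis as the null, these quantities relate as follows; here \(\varepsilon_s\) and \(\varepsilon_b\) denote the signal and background acceptance efficiencies, and \(N_s\), \(N_b\) the true numbers of signal and background events (notation introduced here for illustration):

\[
\alpha = P(\text{reject} \mid \text{signal}) = \text{losses} = 1 - \varepsilon_s,
\qquad
1 - \beta = P(\text{reject} \mid \text{background}) = 1 - \varepsilon_b = 1 - \text{contamination},
\]
\[
\text{purity} = \frac{\varepsilon_s N_s}{\varepsilon_s N_s + \varepsilon_b N_b}.
\]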

Hypothesis testing may, of course, involve more than two hypotheses, or a combination of different test statistics. In both cases the dimensionality of the problem increases, and a simple diagram becomes inadequate: the curve relating losses and contamination becomes a (hyper-)surface, the decision boundary. Often the problem is simplified by imposing a fixed significance and optimizing the test statistics separately to distinguish between pairs of hypotheses, as sketched below. Given large training samples, neural networks can contribute to optimizing the general decision or classification problem.
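Fixing the significance amounts to choosing the cut from a signal quantile and reading off the contamination it implies. Continuing the Python sketch above, with a hypothetical working point of 5% losses:

```python
def cut_at_significance(signal, background, target_alpha=0.05):
    """Fix the significance (loss fraction) and return the cut and
    the contamination it forces."""
    cut = np.quantile(signal, target_alpha)      # target_alpha of the signal lies below the cut
    contamination = (background >= cut).mean()   # background fraction still accepted
    return cut, contamination

cut, cont = cut_at_significance(signal, background)
```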