Dvoretzky–Kiefer–Wolfowitz inequality

Figure: An example application of the DKW inequality in constructing confidence bounds (in purple) around an empirical distribution function (in light blue). In this random draw, the true CDF (orange) is entirely contained within the DKW bounds.

In the theory of probability and statistics, the Dvoretzky–Kiefer–Wolfowitz–Massart inequality (DKW inequality) bounds how close an empirically determined distribution function will be to the distribution function from which the empirical samples are drawn. It is named after Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz, who in 1956 proved the inequality

[math]\displaystyle{ \Pr\Bigl(\sup_{x\in\mathbb R} |F_n(x) - F(x)| \gt \varepsilon \Bigr) \le Ce^{-2n\varepsilon^2}\qquad \text{for every }\varepsilon\gt 0, }[/math]

with an unspecified multiplicative constant C in front of the exponential factor on the right-hand side.[1]

In 1990, Pascal Massart proved the inequality with the sharp constant C = 2,[2] confirming a conjecture due to Birnbaum and McCarty.[3] In 2021, Michael Naaman proved the multivariate version of the DKW inequality and generalized Massart's tightness result to the multivariate case, which results in a sharp constant of twice the number of variables, C = 2k.[4]

The DKW inequality

Given a natural number n, let X1, X2, …, Xn be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let Fn denote the associated empirical distribution function defined by

[math]\displaystyle{ F_n(x) = \frac1n \sum_{i=1}^n \mathbf{1}_{\{X_i\leq x\}},\qquad x\in\mathbb{R}, }[/math]

so [math]\displaystyle{ F(x) }[/math] is the probability that a single random variable [math]\displaystyle{ X }[/math] is at most [math]\displaystyle{ x }[/math], and [math]\displaystyle{ F_n(x) }[/math] is the fraction of the sample observations that are at most [math]\displaystyle{ x }[/math].
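
As a concrete illustration, the empirical distribution function can be evaluated directly from a sample. The following Python sketch is illustrative only (the function name empirical_cdf, the normal sample, and the evaluation points are arbitrary choices, not part of the original statement):

```python
import numpy as np

def empirical_cdf(sample, x):
    """Evaluate the empirical distribution function F_n at the points in x.

    F_n(x) is the fraction of observations X_i with X_i <= x.
    """
    sample = np.sort(np.asarray(sample, dtype=float))
    x = np.atleast_1d(x)
    # For each query point, count how many observations are <= that point.
    return np.searchsorted(sample, x, side="right") / sample.size

# Example: 100 standard normal draws evaluated at a few points.
rng = np.random.default_rng(0)
draws = rng.standard_normal(100)
print(empirical_cdf(draws, [-1.0, 0.0, 1.0]))
```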

The Dvoretzky–Kiefer–Wolfowitz inequality bounds the probability that the random function Fn differs from F by more than a given constant ε > 0 anywhere on the real line. More precisely, there is the one-sided estimate

[math]\displaystyle{ \Pr\Bigl(\sup_{x\in\mathbb R} \bigl(F_n(x) - F(x)\bigr) \gt \varepsilon \Bigr) \le e^{-2n\varepsilon^2}\qquad \text{for every }\varepsilon\geq\sqrt{\tfrac{1}{2n}\ln2}, }[/math]

which also implies a two-sided estimate[5]

[math]\displaystyle{ \Pr\Bigl(\sup_{x\in\mathbb R} |F_n(x) - F(x)| \gt \varepsilon \Bigr) \le 2e^{-2n\varepsilon^2}\qquad \text{for every }\varepsilon\gt 0. }[/math]

This strengthens the Glivenko–Cantelli theorem by quantifying the rate of convergence as n tends to infinity. It also estimates the tail probability of the Kolmogorov–Smirnov statistic. The inequalities above follow from the case where F is the uniform distribution on [0,1], in view of the fact[6] that Fn has the same distribution as Gn(F), where Gn is the empirical distribution function of independent Uniform(0,1) random variables U1, U2, …, Un, and noting that

[math]\displaystyle{ \sup_{x\in\mathbb R} |F_n(x) - F(x)| \; \stackrel{d}{=} \; \sup_{x \in \mathbb R} | G_n (F(x)) - F(x) | \le \sup_{0 \le t \le 1} | G_n (t) -t | , }[/math]

with equality if and only if F is continuous.
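
The two-sided bound can be checked by simulation. The Python sketch below is illustrative only (the sample size, threshold, and number of trials are arbitrary choices); it uses the reduction to Uniform(0,1) samples described above, where the supremum of |G_n(t) - t| is attained at the order statistics, and compares the empirical exceedance frequency with the bound 2e^{-2n ε^2}:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, trials = 200, 0.1, 20_000

exceed = 0
for _ in range(trials):
    u = np.sort(rng.uniform(size=n))          # Uniform(0,1) order statistics
    i_over_n = np.arange(1, n + 1) / n
    # sup_t |G_n(t) - t| is attained at the order statistics:
    ks = max(np.max(i_over_n - u), np.max(u - (i_over_n - 1.0 / n)))
    exceed += ks > eps

print("empirical exceedance frequency:", exceed / trials)
print("DKW bound 2*exp(-2*n*eps^2)   :", 2 * np.exp(-2 * n * eps**2))
```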

Multivariate case

In the multivariate case, X1, X2, …, Xn is an i.i.d. sequence of k-dimensional vectors. If Fn is the multivariate empirical cdf, then

[math]\displaystyle{ \Pr\Bigl(\sup_{t\in\mathbb R^k} |F_n(t) - F(t)| \gt \varepsilon \Bigr) \le (n+1)ke^{-2n\varepsilon^2} }[/math]

for every ε > 0 and all positive integers n and k. The (n+1) factor can be replaced by 2 for all sufficiently large n.[4]
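
For illustration, the multivariate empirical CDF at a point t is simply the fraction of observations lying componentwise below t. A minimal Python sketch follows (the function name and parameter values are arbitrary illustrative choices, not part of the original result):

```python
import numpy as np

def multivariate_ecdf(sample, t):
    """Fraction of k-dimensional observations that are componentwise <= t.
    sample has shape (n, k); t has shape (k,)."""
    return np.mean(np.all(sample <= t, axis=1))

rng = np.random.default_rng(2)
n, k, eps = 500, 2, 0.1
sample = rng.uniform(size=(n, k))        # n i.i.d. points in [0, 1]^2

# Evaluate F_n at a single point t.
print(multivariate_ecdf(sample, np.array([0.3, 0.7])))
# Probability bound for a deviation larger than eps anywhere in R^k.
print("bound k*(n+1)*exp(-2*n*eps^2):", k * (n + 1) * np.exp(-2 * n * eps**2))
```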

Kaplan-Meier estimator

A Dvoretzky–Kiefer–Wolfowitz type inequality also holds for the Kaplan–Meier estimator, which is a right-censored data analog of the empirical distribution function:

[math]\displaystyle{ \Pr\Bigl(\sqrt n\sup_{t\in[0,\infty)} |(1-G(t))(F_n(t) - F(t))| \gt \varepsilon \Bigr) \le 2.5 e^{-2\varepsilon^2 + C\varepsilon} }[/math]

for every [math]\displaystyle{ \varepsilon \gt 0 }[/math] and for some constant [math]\displaystyle{ C \lt \infty }[/math], where [math]\displaystyle{ F_n }[/math] is the Kaplan-Meier estimator, and [math]\displaystyle{ G }[/math] is the censoring distribution function.[7]
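
For context, the Kaplan–Meier estimator referenced here is the product-limit estimator constructed from possibly right-censored observations. A minimal numpy sketch, with an illustrative function name and toy data (it returns the estimated survival function 1 - F_n at the observed event times):

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit (Kaplan-Meier) estimate of the survival function S(t) = 1 - F(t).

    times    : observed times (event or censoring), shape (n,)
    observed : 1 if the event was observed, 0 if right-censored
    Returns the distinct event times and the survival estimate just after each.
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=int)
    event_times = np.unique(times[observed == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(times >= t)                       # still under observation
        deaths = np.sum((times == t) & (observed == 1))    # events at time t
        s *= 1.0 - deaths / at_risk                        # product-limit update
        surv.append(s)
    return event_times, np.array(surv)

# Tiny illustrative data set: 0 in the second list marks a censored observation.
t, d = [3, 5, 5, 8, 10], [1, 1, 0, 1, 0]
print(kaplan_meier(t, d))
```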

Building CDF bands

The Dvoretzky–Kiefer–Wolfowitz inequality is one method for generating CDF-based confidence bounds and producing a confidence band, which is sometimes called the Kolmogorov–Smirnov confidence band. The purpose of this confidence band is to contain the entire CDF at the specified confidence level; alternative approaches that attempt to achieve the confidence level only at each individual point can allow for a tighter bound there. The DKW bounds run parallel to, and are equally spaced above and below, the empirical CDF. This equally spaced confidence band allows for different rates of violation across the support of the distribution. In particular, it is more common for a CDF to fall outside of the DKW band near the median of the distribution than near the endpoints of the distribution.

The interval that contains the true CDF, [math]\displaystyle{ F(x) }[/math], with probability [math]\displaystyle{ 1-\alpha }[/math] is often specified as

[math]\displaystyle{ F_n(x) - \varepsilon \le F(x) \le F_n(x) + \varepsilon \; \text{ where } \varepsilon = \sqrt{\frac{\ln{\frac{2}{\alpha}}}{2n}} }[/math]

which is also a special case of the asymptotic procedure for the multivariate case,[4] whereby one uses the following critical value

[math]\displaystyle{ \frac{d(\alpha,k)}{\sqrt{n}} = \sqrt{\frac{\ln{\frac{2k}{\alpha}}}{2n}} }[/math]

for the multivariate test; one may replace 2k with k(n+1) for a test that holds for all n; moreover, the multivariate test described by Naaman can be generalized to account for heterogeneity and dependence.
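
A minimal Python sketch of the univariate band construction (the function name dkw_band and the exponential sample are illustrative choices): the half-width ε = sqrt(ln(2/α)/(2n)) is added to and subtracted from the empirical CDF, and the result is clipped to [0, 1].

```python
import numpy as np

def dkw_band(sample, alpha=0.05):
    """Return x, lower, upper such that with probability at least 1 - alpha
    the true CDF lies between lower and upper at every point."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    fn = np.arange(1, n + 1) / n                    # empirical CDF at the order statistics
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))  # DKW half-width
    return x, np.clip(fn - eps, 0.0, 1.0), np.clip(fn + eps, 0.0, 1.0)

# Example: a 95% band from 100 exponential draws.
rng = np.random.default_rng(3)
x, lower, upper = dkw_band(rng.exponential(size=100), alpha=0.05)
```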

References


  1. Dvoretzky, A.; Kiefer, J.; Wolfowitz, J. (1956), "Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator", Annals of Mathematical Statistics 27 (3): 642–669, doi:10.1214/aoms/1177728174, http://projecteuclid.org/euclid.aoms/1177728174 
  2. Massart, P. (1990), "The tight constant in the Dvoretzky–Kiefer–Wolfowitz inequality", Annals of Probability 18 (3): 1269–1283, doi:10.1214/aop/1176990746, http://projecteuclid.org/euclid.aop/1176990746 
  3. Birnbaum, Z. W.; McCarty, R. C. (1958). "A distribution-free upper confidence bound for Pr{Y<X}, based on independent samples of X and Y". Annals of Mathematical Statistics 29: 558–562. doi:10.1214/aoms/1177706631. http://projecteuclid.org/euclid.aoms/1177706631. 
  4. Naaman, Michael (2021). "On the tight constant in the multivariate Dvoretzky-Kiefer-Wolfowitz inequality". Statistics and Probability Letters 173: 1–8. https://www.sciencedirect.com/science/article/pii/S016771522100050X. 
  5. Kosorok, M.R. (2008), "Chapter 11: Additional Empirical Process Results", Introduction to Empirical Processes and Semiparametric Inference, Springer, p. 210, ISBN 9780387749778 
  6. Shorack, G.R.; Wellner, J.A. (1986), Empirical Processes with Applications to Statistics, Wiley, ISBN 0-471-86725-X 
  7. Bitouze, D.; Laurent, B.; Massart, P. (1999), "A Dvoretzky-Kiefer-Wolfowitz type inequality for the Kaplan-Meier estimator", Annales de l'Institut Henri Poincaré B (Elsevier) 35: 735–763