Stein's method
Stein's method is a general method in probability theory to obtain bounds on the distance between two probability distributions with respect to a probability metric. It was introduced by Charles Stein, who first published it in 1972,[1] to obtain a bound between the distribution of a sum of an [math]\displaystyle{ m }[/math]-dependent sequence of random variables and a standard normal distribution in the Kolmogorov (uniform) metric, and hence to prove not only a central limit theorem but also bounds on the rates of convergence for the given metric.
History
At the end of the 1960s, dissatisfied with the then-known proofs of a specific central limit theorem, Charles Stein developed a new way of proving the theorem for his statistics lecture.[2] His seminal paper was presented in 1970 at the Sixth Berkeley Symposium and published in the corresponding proceedings.[1]
Later, his Ph.D. student Louis Chen Hsiao Yun modified the method so as to obtain approximation results for the Poisson distribution;[3] therefore the Stein method applied to the problem of Poisson approximation is often referred to as the Stein–Chen method.
Probably the most important contributions are the monograph by Stein (1986), in which he presents his view of the method and the concept of auxiliary randomisation, in particular using exchangeable pairs, and the articles by Barbour (1988) and Götze (1991), who introduced the so-called generator interpretation, which made it possible to adapt the method easily to many other probability distributions. Another important contribution was the article by Bolthausen (1984) on the so-called combinatorial central limit theorem.[citation needed]
In the 1990s the method was adapted to a variety of distributions, such as Gaussian processes by Barbour (1990), the binomial distribution by Ehm (1991), Poisson processes by Barbour and Brown (1992), the Gamma distribution by Luk (1994), and many others.
The method gained further popularity in the machine learning community in the mid 2010s, following the development of computable Stein discrepancies and the diverse applications and algorithms based on them.
The basic approach
Probability metrics
Stein's method is a way to bound the distance between two probability distributions using a specific probability metric.
Let the metric be given in the form
- [math]\displaystyle{ (1.1)\quad d(P,Q) = \sup_{h\in\mathcal{H}}\left|\int h \, dP - \int h \, dQ \right| = \sup_{h\in\mathcal{H}}\left|E h(W) - E h(Y) \right| }[/math]
Here, [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math] are probability measures on a measurable space [math]\displaystyle{ \mathcal{X} }[/math], [math]\displaystyle{ W }[/math] and [math]\displaystyle{ Y }[/math] are random variables with distribution [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math] respectively, [math]\displaystyle{ E }[/math] is the usual expectation operator and [math]\displaystyle{ \mathcal{H} }[/math] is a set of functions from [math]\displaystyle{ \mathcal{X} }[/math] to the set of real numbers. The set [math]\displaystyle{ \mathcal{H} }[/math] has to be large enough so that the above definition indeed yields a metric.
Important examples are the total variation metric, where we let [math]\displaystyle{ \mathcal{H} }[/math] consist of all the indicator functions of measurable sets, the Kolmogorov (uniform) metric for probability measures on the real numbers, where we consider all the half-line indicator functions, and the Lipschitz (first order Wasserstein; Kantorovich) metric, where the underlying space is itself a metric space and we take the set [math]\displaystyle{ \mathcal{H} }[/math] to be all Lipschitz-continuous functions with Lipschitz-constant 1. However, note that not every metric can be represented in the form (1.1).
In what follows [math]\displaystyle{ P }[/math] is a complicated distribution (e.g., the distribution of a sum of dependent random variables), which we want to approximate by a much simpler and tractable distribution [math]\displaystyle{ Q }[/math] (e.g., the standard normal distribution).
The Stein operator
We assume now that the distribution [math]\displaystyle{ Q }[/math] is a fixed distribution; in what follows we shall in particular consider the case where [math]\displaystyle{ Q }[/math] is the standard normal distribution, which serves as a classical example.
First of all, we need an operator [math]\displaystyle{ \mathcal{A} }[/math], which acts on functions [math]\displaystyle{ f }[/math] from [math]\displaystyle{ \mathcal{X} }[/math] to the set of real numbers and 'characterizes' distribution [math]\displaystyle{ Q }[/math] in the sense that the following equivalence holds:
- [math]\displaystyle{ (2.1)\quad E ((\mathcal{A}f)(Y)) = 0\text{ for all } f \quad \iff \quad Y \text{ has distribution } Q. }[/math]
We call such an operator the Stein operator.
For the standard normal distribution, Stein's lemma yields such an operator:
- [math]\displaystyle{ (2.2)\quad E\left(f'(Y)-Yf(Y)\right) = 0\text{ for all } f\in C_b^1 \quad \iff \quad Y \text{ has standard normal distribution.} }[/math]
Thus, we can take
- [math]\displaystyle{ (2.3)\quad (\mathcal{A}f)(x) = f'(x) - x f(x). }[/math]
In general there are infinitely many such operators, and the question of which one to choose remains open. However, it seems that for many distributions there is a particularly good one, like (2.3) for the normal distribution.
There are different ways to find Stein operators.[4]
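For illustration, the following Python sketch checks the characterization (2.2) by Monte Carlo: the sample average of [math]\displaystyle{ f'(Y)-Yf(Y) }[/math] is close to zero for standard normal samples but, in general, not for samples from another distribution. The test functions, sample sizes and the comparison distribution below are arbitrary choices.

```python
# Monte Carlo check of the normal Stein identity (2.2): E[f'(Y) - Y f(Y)]
# should vanish when Y is standard normal, but not for a generic distribution.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal(10**6)          # samples from the target N(0,1)
U = rng.uniform(-2.0, 2.0, 10**6)       # samples from a non-normal comparison law

# Smooth, bounded test functions f together with their derivatives f'.
tests = [
    ("tanh", np.tanh, lambda y: 1.0 / np.cosh(y) ** 2),
    ("sin", np.sin, np.cos),
]

for name, f, fprime in tests:
    stein_normal = np.mean(fprime(Y) - Y * f(Y))    # ~0 up to Monte Carlo error
    stein_uniform = np.mean(fprime(U) - U * f(U))   # in general clearly nonzero
    print(f"f = {name}: normal {stein_normal:+.4f}, uniform {stein_uniform:+.4f}")
```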
The Stein equation
[math]\displaystyle{ P }[/math] is close to [math]\displaystyle{ Q }[/math] with respect to [math]\displaystyle{ d }[/math] if the difference of expectations in (1.1) is close to 0. We hope now that the operator [math]\displaystyle{ \mathcal{A} }[/math] exhibits the same behavior: if [math]\displaystyle{ P=Q }[/math] then [math]\displaystyle{ E (\mathcal{A}f)(W)=0 }[/math], and hopefully if [math]\displaystyle{ P\approx Q }[/math] we have [math]\displaystyle{ E (\mathcal{A}f)(W) \approx 0 }[/math].
It is usually possible to define a function [math]\displaystyle{ f = f_h }[/math] such that
- [math]\displaystyle{ (3.1)\quad (\mathcal{A}f)(x)= h(x) - E[h(Y)] \qquad\text{ for all } x . }[/math]
We call (3.1) the Stein equation. Replacing [math]\displaystyle{ x }[/math] by [math]\displaystyle{ W }[/math] and taking expectation with respect to [math]\displaystyle{ W }[/math], we get
- [math]\displaystyle{ (3.2)\quad E(\mathcal{A}f)(W)=E[h(W)] - E[h(Y)]. }[/math]
Now all the effort is worthwhile only if the left-hand side of (3.2) is easier to bound than the right-hand side. This is, surprisingly, often the case.
If [math]\displaystyle{ Q }[/math] is the standard normal distribution and we use (2.3), then the corresponding Stein equation is
- [math]\displaystyle{ (3.3)\quad f'(x) - x f(x) = h(x) - E[h(Y)] \qquad\text{for all }x . }[/math]
If the probability distribution [math]\displaystyle{ Q }[/math] has an absolutely continuous (with respect to the Lebesgue measure) density [math]\displaystyle{ q }[/math], then[4]
- [math]\displaystyle{ (3.4)\quad (\mathcal{A}f)(x) = f'(x)+f(x)q'(x)/q(x). }[/math]
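For illustration, the following symbolic sketch applies (3.4) to two concrete densities: it recovers (2.3) from the standard normal density, while the choice of a Student's t density as a second example is arbitrary.

```python
# Symbolic computation of the density-form Stein operator (3.4) with SymPy.
import sympy as sp

x = sp.symbols("x", real=True)
nu = sp.symbols("nu", positive=True)
f = sp.Function("f")

def density_stein_operator(q):
    """Return (A f)(x) = f'(x) + f(x) q'(x)/q(x) for a given density q."""
    return sp.simplify(sp.diff(f(x), x) + f(x) * sp.diff(q, x) / q)

# Standard normal density: recovers the operator (2.3), i.e. f'(x) - x f(x).
q_normal = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)
print(density_stein_operator(q_normal))

# Student's t density with nu degrees of freedom, up to its normalising
# constant (which cancels in q'/q): gives f'(x) - (nu + 1) x f(x) / (nu + x^2).
q_t = (1 + x**2 / nu) ** (-(nu + 1) / 2)
print(density_stein_operator(q_t))
```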
Solving the Stein equation
Analytic methods. Equation (3.3) can be easily solved explicitly:
- [math]\displaystyle{ (4.1)\quad f(x) = e^{x^2/2}\int_{-\infty}^x [h(s)-E h(Y)]e^{-s^2/2} \, ds. }[/math]
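For illustration, the following sketch evaluates (4.1) by numerical quadrature for the arbitrary test function [math]\displaystyle{ h(x)=\cos(x) }[/math], for which [math]\displaystyle{ E h(Y)=e^{-1/2} }[/math], and checks that the result satisfies the Stein equation (3.3).

```python
# Numerical evaluation of the explicit solution (4.1) for h(x) = cos(x),
# with E h(Y) = exp(-1/2), followed by a finite-difference check of (3.3).
import numpy as np
from scipy.integrate import quad

h = np.cos
Eh = np.exp(-0.5)                        # E cos(Y) for Y ~ N(0,1)

def f(x):
    """Solution (4.1) of the Stein equation, evaluated by quadrature."""
    integral, _ = quad(lambda s: (h(s) - Eh) * np.exp(-s**2 / 2), -np.inf, x)
    return np.exp(x**2 / 2) * integral

eps = 1e-4                               # step for the central finite difference
for x in (-1.5, 0.0, 0.7, 2.0):
    fprime = (f(x + eps) - f(x - eps)) / (2 * eps)
    lhs, rhs = fprime - x * f(x), h(x) - Eh
    # The two sides agree up to quadrature and finite-difference error.
    print(f"x = {x:+.1f}:  f'(x) - x f(x) = {lhs:+.5f},  h(x) - E h(Y) = {rhs:+.5f}")
```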
Generator method. If [math]\displaystyle{ \mathcal{A} }[/math] is the generator of a Markov process [math]\displaystyle{ (Z_t)_{t\geq 0} }[/math] (see Barbour (1988), Götze (1991)), then the solution to (3.1) is
- [math]\displaystyle{ (4.2)\quad f(x) = -\int_0^\infty [E^x h(Z_t)-E h(Y)] \, dt, }[/math]
where [math]\displaystyle{ E^x }[/math] denotes expectation with respect to the process [math]\displaystyle{ Z }[/math] started at [math]\displaystyle{ x }[/math]. However, one still has to prove that the solution (4.2) exists for all desired functions [math]\displaystyle{ h\in\mathcal{H} }[/math].
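For the standard normal distribution the generator interpretation can be made concrete with the Ornstein–Uhlenbeck process, whose generator is [math]\displaystyle{ (\mathcal{A}f)(x) = f''(x)-xf'(x) }[/math] and whose stationary distribution is [math]\displaystyle{ N(0,1) }[/math]; given [math]\displaystyle{ Z_0=x }[/math], [math]\displaystyle{ Z_t }[/math] has distribution [math]\displaystyle{ N(xe^{-t},\,1-e^{-2t}) }[/math]. The following sketch evaluates (4.2) in this setting (the test function and the numerical tolerances are arbitrary choices) and checks the resulting Stein equation numerically.

```python
# The generator-method solution (4.2) for the standard normal target, using
# the Ornstein-Uhlenbeck transition law Z_t | Z_0 = x ~ N(x e^{-t}, 1 - e^{-2t}).
import numpy as np
from scipy.integrate import quad

h = np.cos
Eh = np.exp(-0.5)                                   # E h(Y) for Y ~ N(0,1)

# Gauss-Hermite nodes/weights for expectations with respect to N(0,1).
nodes, weights = np.polynomial.hermite_e.hermegauss(80)
weights = weights / np.sqrt(2 * np.pi)

def semigroup_h(x, t):
    """E^x h(Z_t) for the Ornstein-Uhlenbeck process started at x."""
    z = x * np.exp(-t) + np.sqrt(1.0 - np.exp(-2.0 * t)) * nodes
    return np.sum(weights * h(z))

def f(x):
    """Solution (4.2): f(x) = -int_0^infty (E^x h(Z_t) - E h(Y)) dt."""
    integral, _ = quad(lambda t: semigroup_h(x, t) - Eh, 0, np.inf)
    return -integral

# Finite-difference check of f''(x) - x f'(x) = h(x) - E h(Y) at one point;
# the two sides should agree to a few decimal places (quadrature and
# finite-difference error only).
x, eps = 0.8, 0.02
fm, f0, fp = f(x - eps), f(x), f(x + eps)
lhs = (fp - 2 * f0 + fm) / eps**2 - x * (fp - fm) / (2 * eps)
print(f"lhs = {lhs:+.4f},  rhs = {h(x) - Eh:+.4f}")
```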
Properties of the solution to the Stein equation
Usually, one tries to give bounds on [math]\displaystyle{ f }[/math] and its derivatives (or differences) in terms of [math]\displaystyle{ h }[/math] and its derivatives (or differences), that is, inequalities of the form
- [math]\displaystyle{ (5.1)\quad \|D^k f\| \leq C_{k,l} \|D^l h\|, }[/math]
for some specific [math]\displaystyle{ k,l=0,1,2,\dots }[/math] (typically [math]\displaystyle{ k\geq l }[/math] or [math]\displaystyle{ k\geq l-1 }[/math], respectively, depending on the form of the Stein operator), where often [math]\displaystyle{ \|\cdot\| }[/math] is the supremum norm. Here, [math]\displaystyle{ D^k }[/math] denotes the differential operator, but in discrete settings it usually refers to a difference operator. The constants [math]\displaystyle{ C_{k,l} }[/math] may contain the parameters of the distribution [math]\displaystyle{ Q }[/math]. If there are any, they are often referred to as Stein factors.
In the case of (4.1) one can prove for the supremum norm that
- [math]\displaystyle{ (5.2)\quad \|f\|_\infty\leq \min\left\{\sqrt{\pi/2}\|h\|_\infty,2\|h'\|_\infty\right\},\quad \|f'\|_\infty\leq \min\{2\|h\|_\infty,4\|h'\|_\infty\},\quad \|f''\|_\infty\leq 2 \|h'\|_\infty, }[/math]
where the last bound is of course only applicable if [math]\displaystyle{ h }[/math] is differentiable (or at least Lipschitz-continuous), which, for example, is not the case for the total variation metric or the Kolmogorov metric. As the standard normal distribution has no extra parameters, in this specific case the constants are free of additional parameters.
If we have bounds in the general form (5.1), we usually are able to treat many probability metrics together. One can often start with the next step below, if bounds of the form (5.1) are already available (which is the case for many distributions).
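As a numerical sanity check of the constants in (5.2), one can evaluate the solution (4.1) on a finite grid and compare the suprema of [math]\displaystyle{ |f| }[/math], [math]\displaystyle{ |f'| }[/math] and [math]\displaystyle{ |f''| }[/math] with the stated bounds; the sketch below does this for the arbitrary test function [math]\displaystyle{ h(x)=\cos(x) }[/math], which has [math]\displaystyle{ \|h\|_\infty=\|h'\|_\infty=1 }[/math].

```python
# Grid check of the bounds (5.2) for h(x) = cos(x), where ||h||_inf = ||h'||_inf = 1;
# f is the solution (4.1), and f', f'' follow from the Stein equation (3.3).
import numpy as np
from scipy.integrate import quad

h, hprime = np.cos, lambda x: -np.sin(x)
Eh = np.exp(-0.5)                        # E h(Y) for Y ~ N(0,1)

def f(x):
    integral, _ = quad(lambda s: (h(s) - Eh) * np.exp(-s**2 / 2), -np.inf, x)
    return np.exp(x**2 / 2) * integral

xs = np.linspace(-4.0, 4.0, 401)         # finite grid; the true suprema are over all x
fx = np.array([f(x) for x in xs])
f1 = xs * fx + h(xs) - Eh                # f'(x), read off from (3.3)
f2 = fx + xs * f1 + hprime(xs)           # f''(x), by differentiating (3.3)

print(f"sup|f|   ~ {np.abs(fx).max():.3f}  (bound {min(np.sqrt(np.pi / 2), 2.0):.3f})")
print(f"sup|f'|  ~ {np.abs(f1).max():.3f}  (bound {min(2.0, 4.0):.3f})")
print(f"sup|f''| ~ {np.abs(f2).max():.3f}  (bound 2.000)")
```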
An abstract approximation theorem
We are now in a position to bound the left-hand side of (3.2). As this step heavily depends on the form of the Stein operator, we directly regard the case of the standard normal distribution.
At this point we could directly plug in random variable [math]\displaystyle{ W }[/math], which we want to approximate, and try to find upper bounds. However, it is often fruitful to formulate a more general theorem. Consider here the case of local dependence.
Assume that [math]\displaystyle{ W=\sum_{i=1}^n X_i }[/math] is a sum of random variables such that [math]\displaystyle{ E[W] = 0 }[/math] and [math]\displaystyle{ \operatorname{var}[W] = 1 }[/math]. Assume that, for every [math]\displaystyle{ i=1,\dots,n }[/math], there is a set [math]\displaystyle{ A_i\subset\{1,2,\dots,n\} }[/math] such that [math]\displaystyle{ X_i }[/math] is independent of all the random variables [math]\displaystyle{ X_j }[/math] with [math]\displaystyle{ j\not\in A_i }[/math]. We call this set the 'neighborhood' of [math]\displaystyle{ X_i }[/math]. Likewise, let [math]\displaystyle{ B_i\subset\{1,2,\dots,n\} }[/math] be a set such that all [math]\displaystyle{ X_j }[/math] with [math]\displaystyle{ j\in A_i }[/math] are independent of all [math]\displaystyle{ X_k }[/math], [math]\displaystyle{ k\not\in B_i }[/math]. We can think of [math]\displaystyle{ B_i }[/math] as the set of neighbors of the neighborhood of [math]\displaystyle{ X_i }[/math]; a second-order neighborhood, so to speak. For a set [math]\displaystyle{ A\subset\{1,2,\dots,n\} }[/math] define now the sum [math]\displaystyle{ X_A := \sum_{j\in A} X_j }[/math].
Using Taylor expansion, it is possible to prove that
- [math]\displaystyle{ (6.1)\quad \left|E(f'(W)-Wf(W))\right| \leq \|f''\|_\infty\sum_{i=1}^n \left( \frac{1}{2}E|X_i X_{A_i}^2| + E|X_i X_{A_i}X_{B_i \setminus A_i}| + E|X_i X_{A_i}| E|X_{B_i}| \right) }[/math]
Note that, if we follow this line of argument, we can bound (1.1) only for functions where [math]\displaystyle{ \|h'\|_{\infty} }[/math] is bounded because of the third inequality of (5.2) (and in fact, if [math]\displaystyle{ h }[/math] has discontinuities, so will [math]\displaystyle{ f'' }[/math]). To obtain a bound similar to (6.1) which contains only the expressions [math]\displaystyle{ \|f\|_\infty }[/math] and [math]\displaystyle{ \|f'\|_\infty }[/math], the argument is much more involved and the result is not as simple as (6.1); however, it can be done.
Theorem A. If [math]\displaystyle{ W }[/math] is as described above, we have for the Lipschitz metric [math]\displaystyle{ d_W }[/math] that
- [math]\displaystyle{ (6.2)\quad d_W(\mathcal{L}(W),N(0,1)) \leq 2\sum_{i=1}^n \left( \frac{1}{2}E|X_i X_{A_i}^2| + E|X_i X_{A_i}X_{B_i \setminus A_i}| + E|X_i X_{A_i}| E|X_{B_i}| \right). }[/math]
Proof. Recall that the Lipschitz metric is of the form (1.1) where the functions [math]\displaystyle{ h }[/math] are Lipschitz-continuous with Lipschitz-constant 1, thus [math]\displaystyle{ \|h'\|\leq 1 }[/math]. Combining this with (6.1) and the last bound in (5.2) proves the theorem.
Thus, roughly speaking, we have proved that, to calculate the Lipschitz-distance between a [math]\displaystyle{ W }[/math] with local dependence structure and a standard normal distribution, we only need to know the third moments of [math]\displaystyle{ X_i }[/math] and the size of the neighborhoods [math]\displaystyle{ A_i }[/math] and [math]\displaystyle{ B_i }[/math].
Application of the theorem
We can treat the case of sums of independent and identically distributed random variables with Theorem A.
Assume that [math]\displaystyle{ E X_i = 0 }[/math], [math]\displaystyle{ \operatorname{var} X_i = 1 }[/math] and [math]\displaystyle{ W=n^{-1/2}\sum X_i }[/math]. We can take [math]\displaystyle{ A_i=B_i=\{i\} }[/math]. From Theorem A we obtain that
- [math]\displaystyle{ (7.1)\quad d_W(\mathcal{L}(W),N(0,1)) \leq \frac{5 E|X_1|^3}{n^{1/2}}. }[/math]
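For illustration, the following Monte Carlo sketch compares an empirical estimate of the Wasserstein (Lipschitz) distance with the bound (7.1); the Rademacher summands, for which [math]\displaystyle{ E|X_1|^3=1 }[/math], and the sample sizes are arbitrary choices.

```python
# Monte Carlo comparison of the bound (7.1) with an empirical estimate of the
# Wasserstein distance, using Rademacher summands (E X_i = 0, var X_i = 1,
# E|X_1|^3 = 1).
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n = 50                                   # number of summands
reps = 100_000                           # Monte Carlo replicates of W

X = rng.choice([-1.0, 1.0], size=(reps, n))
W = X.sum(axis=1) / np.sqrt(n)

# Empirical Wasserstein-1 (Lipschitz) distance between the sample of W and a
# large standard normal sample; this is itself only an approximation of d_W.
d_emp = wasserstein_distance(W, rng.standard_normal(reps))

bound = 5 * 1.0 / np.sqrt(n)             # 5 E|X_1|^3 / n^{1/2}, as in (7.1)
print(f"empirical d_W ~ {d_emp:.3f},  bound from (7.1) = {bound:.3f}")
```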
For sums of random variables, another approach related to Stein's method is known as the zero bias transform.
Connections to other methods
- Lindeberg's device. Lindeberg (1922) introduced a device in which the difference [math]\displaystyle{ E h(X_1+\cdots+X_n)-E h(Y_1+\cdots+Y_n) }[/math] is represented as a sum of step-by-step differences.
- Tikhomirov's method. Clearly the approach via (1.1) and (3.1) does not involve characteristic functions. However, Tikhomirov (1980) presented a proof of a central limit theorem based on characteristic functions and a differential operator similar to (2.3). The basic observation is that the characteristic function [math]\displaystyle{ \psi(t) }[/math] of the standard normal distribution satisfies the differential equation [math]\displaystyle{ \psi'(t)+t\psi(t) = 0 }[/math] for all [math]\displaystyle{ t }[/math]. Thus, if the characteristic function [math]\displaystyle{ \psi_W(t) }[/math] of [math]\displaystyle{ W }[/math] is such that [math]\displaystyle{ \psi'_W(t)+t\psi_W(t)\approx 0 }[/math] we expect that [math]\displaystyle{ \psi_W(t)\approx \psi(t) }[/math] and hence that [math]\displaystyle{ W }[/math] is close to the normal distribution. Tikhomirov states in his paper that he was inspired by Stein's seminal paper.
Notes
- Stein, C. (1972). "A bound for the error in the normal approximation to the distribution of a sum of dependent random variables". Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2. University of California Press. pp. 583–602. http://projecteuclid.org/euclid.bsmsp/1200514239.
- Charles Stein: The Invariant, the Direct and the "Pretentious". Interview given in 2003 in Singapore.
- Chen, L. H. Y. (1975). "Poisson approximation for dependent trials". Annals of Probability 3 (3): 534–545. doi:10.1214/aop/1176996359.
- Novak, S. Y. (2011). Extreme Value Methods with Applications to Finance. Monographs on Statistics and Applied Probability. 122. CRC Press. Ch. 12. ISBN 978-1-43983-574-6.
References
- Barbour, A. D. (1988). "Stein's method and Poisson process convergence". Journal of Applied Probability 25: 175–184. doi:10.2307/3214155.
- Barbour, A. D. (1990). "Stein's method for diffusion approximations". Probability Theory and Related Fields 84 (3): 297–322. doi:10.1007/BF01197887.
- Barbour, A. D.; Brown, T. C. (1992). "Stein's method and point process approximation". Stochastic Processes and Their Applications 43 (1): 9–31. doi:10.1016/0304-4149(92)90073-Y.
- Bolthausen, E. (1984). "An estimate of the remainder in a combinatorial central limit theorem". Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 66 (3): 379–386. doi:10.1007/BF00533704.
- Ehm, W. (1991). "Binomial approximation to the Poisson binomial distribution". Statistics & Probability Letters 11 (1): 7–16. doi:10.1016/0167-7152(91)90170-V.
- Götze, F. (1991). "On the rate of convergence in the multivariate CLT". The Annals of Probability 19 (2): 724–739. doi:10.1214/aop/1176990448.
- Lindeberg, J. W. (1922). "Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung". Mathematische Zeitschrift 15 (1): 211–225. doi:10.1007/BF01494395. https://zenodo.org/record/2412242.
- Luk, H. M. (1994). Stein's method for the gamma distribution and related statistical applications. Dissertation.
- Novak, S. Y. (2011). Extreme value methods with applications to finance. Monographs on Statistics and Applied Probability. 122. CRC Press. ISBN 978-1-43983-574-6.
- Stein, C. (1986). Approximate computation of expectations. Lecture Notes-Monograph Series. 7. Institute of Mathematical Statistics. ISBN 0-940600-08-0.
- Tikhomirov, A. N. (1980). "Convergence rate in the central limit theorem for weakly dependent random variables". Teoriya Veroyatnostei i ee Primeneniya 25: 800–818. English translation in Tikhomirov, A. N. (1981). "On the Convergence Rate in the Central Limit Theorem for Weakly Dependent Random Variables". Theory of Probability & Its Applications 25 (4): 790–809. doi:10.1137/1125092.
Literature
The following text is advanced and gives a comprehensive overview of the normal case:
- Chen, L. H. Y.; Goldstein, L.; Shao, Q. M. (2011). Normal approximation by Stein's method. Springer. ISBN 978-3-642-15006-7.
Another advanced book, but having some introductory character, is
- Barbour, A. D.; Chen, L. H. Y., eds. (2005). An introduction to Stein's method. Lecture Notes Series, Institute for Mathematical Sciences, National University of Singapore. 4. Singapore University Press. ISBN 981-256-280-X.
A standard reference is the book by Stein,
- Stein, C. (1986). Approximate computation of expectations. Institute of Mathematical Statistics Lecture Notes, Monograph Series, 7. Hayward, Calif.: Institute of Mathematical Statistics. ISBN 0-940600-08-0.
which contains a lot of interesting material, but may be a little hard to understand at first reading.
Despite the age of the method, few standard introductory books about Stein's method are available. The following recent textbook has a chapter (Chapter 2) devoted to introducing it:
- Ross, Sheldon; Peköz, Erol (2007). A second course in probability. ISBN 978-0-9795704-0-7.
Although the book
- Barbour, A. D.; Holst, L.; Janson, S. (1992). Poisson approximation. Oxford Studies in Probability. 2. The Clarendon Press, Oxford University Press. ISBN 0-19-852235-5.
is largely about Poisson approximation, it nevertheless contains a great deal of information about the generator approach, in particular in the context of Poisson process approximation.
The following textbook has a chapter (Chapter 10) devoted to introducing Stein's method of Poisson approximation:
- Sheldon M. Ross (1995). Stochastic Processes. Wiley. ISBN 978-0471120629.