Slutsky's theorem

Short description: Theorem in probability theory

In probability theory, Slutsky's theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables.[1]

The theorem was named after Eugen Slutsky.[2] Slutsky's theorem is also attributed to Harald Cramér.[3]

Statement

Let [math]\displaystyle{ X_n, Y_n }[/math] be sequences of scalar/vector/matrix random elements. If [math]\displaystyle{ X_n }[/math] converges in distribution to a random element [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y_n }[/math] converges in probability to a constant [math]\displaystyle{ c }[/math], then

  • [math]\displaystyle{ X_n + Y_n \ \xrightarrow{d}\ X + c ; }[/math]
  • [math]\displaystyle{ X_nY_n \ \xrightarrow{d}\ Xc ; }[/math]
  • [math]\displaystyle{ X_n/Y_n \ \xrightarrow{d}\ X/c, }[/math]   provided that c is invertible (i.e. nonzero in the scalar case),

where [math]\displaystyle{ \xrightarrow{d} }[/math] denotes convergence in distribution.
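The third statement is the one most often used in practice: it is how the asymptotic normality of the t-statistic is usually derived, with the CLT supplying the numerator and the law of large numbers the denominator. As an illustrative sketch (not part of the original article, and assuming NumPy is available), this can be checked by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 2_000, 2_000

# X_n: standardized sample mean of Exp(1) draws -> N(0, 1) by the CLT.
# Y_n: sample standard deviation -> 1 (the true sd) in probability by the LLN.
samples = rng.exponential(1.0, size=(reps, n))
x_n = np.sqrt(n) * (samples.mean(axis=1) - 1.0)
y_n = samples.std(axis=1, ddof=1)

# Slutsky: X_n / Y_n -> N(0, 1) in distribution (the studentized mean).
t_stat = x_n / y_n
print(t_stat.mean(), t_stat.std())  # both close to the N(0, 1) values 0 and 1
```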

Notes:

  1. The requirement that Yn converges to a constant is important — if it were to converge to a non-degenerate random variable, the theorem would no longer be valid. For example, let [math]\displaystyle{ X_n \sim {\rm Uniform}(0,1) }[/math] and [math]\displaystyle{ Y_n = -X_n }[/math]. The sum [math]\displaystyle{ X_n + Y_n = 0 }[/math] for all values of n. Moreover, [math]\displaystyle{ Y_n \, \xrightarrow{d} \, {\rm Uniform}(-1,0) }[/math], but [math]\displaystyle{ X_n + Y_n }[/math] does not converge in distribution to [math]\displaystyle{ X + Y }[/math], where [math]\displaystyle{ X \sim {\rm Uniform}(0,1) }[/math], [math]\displaystyle{ Y \sim {\rm Uniform}(-1,0) }[/math], and [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are independent.[4]
  2. The theorem remains valid if we replace all convergences in distribution with convergences in probability.
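A small simulation (an illustrative sketch, assuming NumPy) makes Note 1 concrete: the dependent sum is identically zero, while the sum of independent copies of the limiting laws is a non-degenerate triangular distribution on (−1, 1):

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws = 100_000

# Note 1's counterexample: X_n ~ Uniform(0, 1) and Y_n = -X_n.
x_n = rng.uniform(0.0, 1.0, n_draws)
y_n = -x_n
print(np.max(np.abs(x_n + y_n)))  # X_n + Y_n is identically 0

# If X and Y were *independent* with those marginal laws, X + Y would be
# triangular on (-1, 1) with sd sqrt(1/6) ~ 0.41, not degenerate at 0.
x = rng.uniform(0.0, 1.0, n_draws)
y = rng.uniform(-1.0, 0.0, n_draws)
print(np.std(x + y))
```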

Proof

This theorem follows from the fact that if Xn converges in distribution to X and Yn converges in probability to a constant c, then the joint vector (Xn, Yn) converges in distribution to (X, c).

Next we apply the continuous mapping theorem, recognizing that the functions g(x, y) = x + y, g(x, y) = xy, and g(x, y) = xy^{−1} are continuous (for the last function to be continuous, y has to be invertible).
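Written out as a chain of implications, the two steps above read (a sketch in LaTeX, shown for the sum; the product and quotient are identical with the other choices of g):

```latex
% Step 1: joint convergence; Step 2: continuous mapping theorem.
\[
  Y_n \xrightarrow{p} c
  \;\Longrightarrow\;
  (X_n, Y_n) \xrightarrow{d} (X, c)
  \;\Longrightarrow\;
  g(X_n, Y_n) \xrightarrow{d} g(X, c),
\]
\[
  \text{e.g. with } g(x, y) = x + y:\quad
  X_n + Y_n \xrightarrow{d} X + c.
\]
```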

References

  1. Goldberger, Arthur S. (1964). Econometric Theory. New York: Wiley. pp. 117–120. https://archive.org/details/econometrictheor0000gold. 
  2. Slutsky, E. (1925). "Über stochastische Asymptoten und Grenzwerte" (in German). Metron 5 (3): 3–89. 
  3. Slutsky's theorem is also called Cramér's theorem according to Remark 11.1 (page 249) of Gut, Allan (2005). Probability: a graduate course. Springer-Verlag. ISBN 0-387-22833-0. 
  4. See Zeng, Donglin (Fall 2018). "Large Sample Theory of Random Variables (lecture slides)". Advanced Probability and Statistical Inference I (BIOS 760). University of North Carolina at Chapel Hill. Slide 59. https://www.bios.unc.edu/~dzeng/BIOS760/ChapC_Slide.pdf#page=59. 