Local asymptotic normality


In statistics, local asymptotic normality is a property of a sequence of statistical models which allows this sequence to be asymptotically approximated by a normal location model, after an appropriate rescaling of the parameter. An important example in which local asymptotic normality holds is i.i.d. sampling from a regular parametric model. The notion of local asymptotic normality was introduced by (Le Cam 1960) and is fundamental in the treatment of estimator and test efficiency.[1]

Definition

A sequence of parametric statistical models { Pn,θ: θ ∈ Θ } is said to be locally asymptotically normal (LAN) at θ if there exist matrices rn and Iθ and random vectors Δn,θ converging in distribution to N(0, Iθ) such that, for every convergent sequence hn → h,[2]

[math]\displaystyle{ \ln \frac{dP_{\!n,\theta+r_n^{-1}h_n}}{dP_{n,\theta}} = h'\Delta_{n,\theta} - \frac12 h'I_\theta\,h + o_{P_{n,\theta}}(1), }[/math]

where the derivative here is a Radon–Nikodym derivative, a formalised version of the likelihood ratio, and where oPn,θ(1) denotes a remainder term that converges to zero in probability (small o in probability notation). In other words, the local likelihood ratio must converge in distribution to a normal random variable whose mean is equal to minus one half the variance:

[math]\displaystyle{ \ln \frac{dP_{\!n,\theta+r_n^{-1}h_n}}{dP_{n,\theta}}\ \ \xrightarrow{d}\ \ \mathcal{N}\Big( {-\tfrac12} h'I_\theta\,h,\ h'I_\theta\,h\Big). }[/math]

The sequences of distributions [math]\displaystyle{ P_{\!n,\theta+r_n^{-1}h_n} }[/math] and [math]\displaystyle{ P_{n,\theta} }[/math] are contiguous.[2]
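As an illustrative numerical sketch (not part of the standard treatment), the Gaussian location model N(θ, 1) satisfies the LAN expansion exactly: with rn = √n and Iθ = 1, the local log-likelihood ratio equals h Δn,θ − h²/2 with no remainder. The Python snippet below, with illustrative variable names and sample size, verifies this identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta, h = 10_000, 1.0, 0.7
x = rng.normal(theta, 1.0, size=n)

def loglik(t):
    # log-likelihood of an i.i.d. N(t, 1) sample
    return -0.5 * np.sum((x - t) ** 2) - 0.5 * n * np.log(2 * np.pi)

# local log-likelihood ratio at the rescaled point theta + h/sqrt(n)
lr = loglik(theta + h / np.sqrt(n)) - loglik(theta)

# LAN expansion: h * Delta_n - h^2 * I_theta / 2, with I_theta = 1
delta_n = np.sum(x - theta) / np.sqrt(n)   # normalized score
approx = h * delta_n - 0.5 * h**2

print(lr, approx)  # agree exactly for the Gaussian location model (up to float error)
```

For this model the quadratic Taylor expansion of the log-likelihood is exact, which is why the o(1) remainder vanishes identically here.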

Example

The most straightforward example of a LAN model is an iid model whose likelihood is twice continuously differentiable. Suppose { X1, X2, …, Xn} is an iid sample, where each Xi has density function f(x, θ). The likelihood function of the model is equal to

[math]\displaystyle{ p_{n,\theta}(x_1,\ldots,x_n;\,\theta) = \prod_{i=1}^n f(x_i,\theta). }[/math]

If f is twice continuously differentiable in θ, then

[math]\displaystyle{ \begin{align} \ln p_{n,\theta+\delta\theta} &\approx \ln p_{n,\theta} + \delta\theta'\frac{\partial \ln p_{n,\theta}}{\partial\theta} + \frac12 \delta\theta' \frac{\partial^2 \ln p_{n,\theta}}{\partial\theta\,\partial\theta'} \delta\theta \\ &= \ln p_{n,\theta} + \delta\theta' \sum_{i=1}^n\frac{\partial \ln f(x_i,\theta)}{\partial\theta} + \frac12 \delta\theta' \bigg[\sum_{i=1}^n\frac{\partial^2 \ln f(x_i,\theta)}{\partial\theta\,\partial\theta'} \bigg]\delta\theta . \end{align} }[/math]

Plugging in [math]\displaystyle{ \delta\theta=h/\sqrt{n} }[/math] gives

[math]\displaystyle{ \ln \frac{p_{n,\theta+h/\sqrt{n}}}{p_{n,\theta}} = h' \Bigg(\frac{1}{\sqrt{n}} \sum_{i=1}^n\frac{\partial \ln f(x_i,\theta)}{\partial\theta}\Bigg) \;-\; \frac12 h' \Bigg( \frac1n \sum_{i=1}^n - \frac{\partial^2 \ln f(x_i,\theta)}{\partial\theta\,\partial\theta'} \Bigg) h \;+\; o_p(1). }[/math]
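This expansion can be checked numerically for a concrete non-Gaussian model; the sketch below (the model choice and variable names are illustrative) uses the exponential density f(x, θ) = θe^(−θx), for which the score is 1/θ − x and Iθ = 1/θ², and shows that the o_p(1) remainder is small for large n.

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta, h = 1_000_000, 2.0, 0.5
x = rng.exponential(1 / theta, size=n)   # density f(x, theta) = theta * exp(-theta * x)

def loglik(t):
    # log-likelihood of an i.i.d. exponential(rate t) sample
    return n * np.log(t) - t * np.sum(x)

# local log-likelihood ratio at theta + h/sqrt(n)
lr = loglik(theta + h / np.sqrt(n)) - loglik(theta)

# LAN approximation: score is 1/theta - x, Fisher information is 1/theta^2
delta_n = np.sum(1 / theta - x) / np.sqrt(n)
approx = h * delta_n - 0.5 * h**2 / theta**2

print(lr - approx)  # the o_p(1) remainder, tiny at this sample size
```

Here the remainder comes from the third and higher-order terms of the Taylor expansion of n·ln(1 + h/(θ√n)), which are of order n^(−1/2).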

By the central limit theorem, the first term in parentheses converges in distribution to a normal random variable Δθ ~ N(0, Iθ), whereas by the law of large numbers the expression in the second parentheses converges in probability to Iθ, the Fisher information matrix:

[math]\displaystyle{ I_\theta = \mathrm{E}\bigg[{- \frac{\partial^2 \ln f(X_i,\theta)}{\partial\theta\,\partial\theta'}}\bigg] = \mathrm{E}\bigg[\bigg(\frac{\partial \ln f(X_i,\theta)}{\partial\theta}\bigg)\bigg(\frac{\partial \ln f(X_i,\theta)}{\partial\theta}\bigg)'\,\bigg]. }[/math]
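The equality of these two expressions for Iθ (the information-matrix equality) can be checked by Monte Carlo for a specific model. The sketch below, an illustrative check rather than part of the derivation, again uses the exponential density f(x, θ) = θe^(−θx): the score is 1/θ − X, the second derivative of the log-density is the constant −1/θ², and both sides equal 1/θ².

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0
x = rng.exponential(1 / theta, size=1_000_000)  # density f(x, theta) = theta * exp(-theta * x)

# score: d/dtheta ln f = 1/theta - x ; second derivative: -1/theta^2 (constant)
outer = np.mean((1 / theta - x) ** 2)   # Monte Carlo estimate of E[(score)^2]
neg_hess = 1 / theta**2                 # -E[second derivative], exact here

print(outer, neg_hess)  # both estimate the Fisher information 1/theta^2 = 0.25
```

For the exponential model E[(1/θ − X)²] is just Var(X) = 1/θ², so the sample average above concentrates around the exact value 0.25.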

Thus, the definition of local asymptotic normality is satisfied with rn = √n, and we have confirmed that a parametric model with iid observations and a twice continuously differentiable likelihood has the LAN property.

Notes

  1. van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press. ISBN 978-0-511-80225-6. doi:10.1017/cbo9780511802256. 
  2. (van der Vaart 1998)

References

  • Ibragimov, I.A.; Has’minskiĭ, R.Z. (1981). Statistical estimation: asymptotic theory. Springer-Verlag. ISBN 0-387-90523-5. 
  • Le Cam, L. (1960). "Locally asymptotically normal families of distributions". University of California Publications in Statistics 3: 37–98. 
  • van der Vaart, A.W. (1998). Asymptotic statistics. Cambridge University Press. ISBN 978-0-521-78450-4.