Observed information
In statistics, the observed information, or observed Fisher information, is the negative of the second derivative (the Hessian matrix) of the log-likelihood (the logarithm of the likelihood function). It is a sample-based version of the Fisher information.
Definition
Suppose we observe random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math], independent and identically distributed with density f(X | θ), where θ is an unknown, possibly vector-valued, parameter. Then the log-likelihood of the parameter [math]\displaystyle{ \theta }[/math] given the data [math]\displaystyle{ X_1,\ldots,X_n }[/math] is
- [math]\displaystyle{ \ell(\theta | X_1,\ldots,X_n) = \sum_{i=1}^n \log f(X_i| \theta) }[/math].
We define the observed information matrix at [math]\displaystyle{ \theta^{*} }[/math] as
- [math]\displaystyle{ \mathcal{J}(\theta^*) = - \left. \nabla \nabla^{\top} \ell(\theta) \right|_{\theta=\theta^*} }[/math]
- [math]\displaystyle{ = - \left. \left( \begin{array}{cccc} \tfrac{\partial^2}{\partial \theta_1^2} & \tfrac{\partial^2}{\partial \theta_1 \partial \theta_2} & \cdots & \tfrac{\partial^2}{\partial \theta_1 \partial \theta_p} \\ \tfrac{\partial^2}{\partial \theta_2 \partial \theta_1} & \tfrac{\partial^2}{\partial \theta_2^2} & \cdots & \tfrac{\partial^2}{\partial \theta_2 \partial \theta_p} \\ \vdots & \vdots & \ddots & \vdots \\ \tfrac{\partial^2}{\partial \theta_p \partial \theta_1} & \tfrac{\partial^2}{\partial \theta_p \partial \theta_2} & \cdots & \tfrac{\partial^2}{\partial \theta_p^2} \\ \end{array} \right) \ell(\theta) \right|_{\theta = \theta^*} }[/math]
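As an illustration (a standard example, not taken from the cited sources), consider independent Bernoulli observations [math]\displaystyle{ X_1,\ldots,X_n }[/math] with success probability [math]\displaystyle{ \theta }[/math]. The log-likelihood is
- [math]\displaystyle{ \ell(\theta) = \sum_{i=1}^n \left[ X_i \log\theta + (1-X_i)\log(1-\theta) \right] }[/math],
so the observed information is the scalar
- [math]\displaystyle{ \mathcal{J}(\theta) = -\ell''(\theta) = \frac{\sum_{i=1}^n X_i}{\theta^2} + \frac{n-\sum_{i=1}^n X_i}{(1-\theta)^2} }[/math].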
Since the inverse of the information matrix is the asymptotic covariance matrix of the corresponding maximum-likelihood estimator, the observed information is often evaluated at the maximum-likelihood estimate for the purpose of significance testing or confidence-interval construction.[1] The invariance property of maximum-likelihood estimators allows the observed information matrix to be evaluated before being inverted.
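As a minimal sketch of this use (the exponential model and all variable names below are illustrative choices, not taken from the cited sources), the following Python code fits a rate parameter by maximum likelihood and forms a Wald confidence interval from the inverse of the observed information evaluated at the estimate:

```python
# Illustrative sketch: Wald confidence interval for the rate of an exponential
# model, using the observed information evaluated at the MLE.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 2.5, size=200)   # simulated data, true rate 2.5
n = x.size

lam_hat = n / x.sum()                          # MLE of the rate (1 / sample mean)

# log-likelihood l(lam) = n*log(lam) - lam*sum(x), so J(lam) = -l''(lam) = n / lam**2
observed_info = n / lam_hat**2

se = np.sqrt(1 / observed_info)                # asymptotic standard error
ci = (lam_hat - 1.96 * se, lam_hat + 1.96 * se)
print(f"MLE = {lam_hat:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

For models without a closed-form second derivative, a finite-difference or automatic-differentiation Hessian of the negative log-likelihood serves the same purpose.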
Alternative definition
Andrew Gelman, David Dunson and Donald Rubin[2] define observed information instead in terms of the parameters' posterior probability, [math]\displaystyle{ p(\theta|y) }[/math]:
- [math]\displaystyle{ I(\theta) = - \frac{d^2}{d\theta^2} \log p(\theta|y) }[/math]
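Under this definition, the quantity is the curvature of the log-posterior, typically evaluated at the posterior mode. A minimal sketch, assuming a Beta posterior as a hypothetical example (not taken from the cited text):

```python
# Hypothetical example: theta | y ~ Beta(a, b), e.g. a binomial likelihood with
# a uniform prior, so log p(theta|y) = (a-1)*log(theta) + (b-1)*log(1-theta) + const.
a, b = 8.0, 4.0
theta_mode = (a - 1) / (a + b - 2)                       # posterior mode

def observed_info(theta):
    """-(d^2/dtheta^2) log p(theta|y) for the Beta(a, b) posterior."""
    return (a - 1) / theta**2 + (b - 1) / (1 - theta)**2

# The curvature at the mode gives the variance of the usual normal (Laplace)
# approximation to the posterior: theta | y ~ N(theta_mode, 1 / I(theta_mode)).
print(theta_mode, 1 / observed_info(theta_mode))
```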
Fisher information
The Fisher information [math]\displaystyle{ \mathcal{I}(\theta) }[/math] is the expected value of the observed information given a single observation [math]\displaystyle{ X }[/math] distributed according to the hypothetical model with parameter [math]\displaystyle{ \theta }[/math]:
- [math]\displaystyle{ \mathcal{I}(\theta) = \mathrm{E}(\mathcal{J}(\theta)) }[/math].
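Continuing the Bernoulli illustration above, the observed information from a single observation [math]\displaystyle{ X }[/math] is [math]\displaystyle{ \mathcal{J}(\theta) = X/\theta^2 + (1-X)/(1-\theta)^2 }[/math], and taking expectations with [math]\displaystyle{ \mathrm{E}(X) = \theta }[/math] gives
- [math]\displaystyle{ \mathcal{I}(\theta) = \frac{\theta}{\theta^2} + \frac{1-\theta}{(1-\theta)^2} = \frac{1}{\theta(1-\theta)} }[/math],
the familiar Fisher information of the Bernoulli model.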
Comparison with the expected information
The comparison between the observed information and the expected information remains an active area of research and debate. Efron and Hinkley[3] provided a frequentist justification for preferring the observed information to the expected information when employing normal approximations to the distribution of the maximum-likelihood estimator in one-parameter families, in the presence of an ancillary statistic that affects the precision of the MLE. Lindsay and Li showed that the observed information matrix gives the minimum mean squared error as an approximation of the true information if an error term of [math]\displaystyle{ O(n^{-3/2}) }[/math] is ignored.[4] In their setting, the expected information matrix still requires evaluation at the obtained ML estimates, which introduces randomness.
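The distinction is most visible in models where the curvature of the log-likelihood varies appreciably from sample to sample. The sketch below (an illustration using the standard Cauchy location model, not code from the cited papers) compares the observed information at the MLE with the expected information [math]\displaystyle{ n\mathcal{I}(\theta) = n/2 }[/math] over a few simulated samples:

```python
# Illustrative sketch: observed vs expected information in the Cauchy location model.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 20

def observed_info(theta, x):
    """J(theta) = -l''(theta) for the Cauchy location model."""
    u = x - theta
    return np.sum(2.0 * (1.0 - u**2) / (1.0 + u**2) ** 2)

for _ in range(5):
    x = rng.standard_cauchy(n)                            # sample with true location 0
    nll = lambda t: np.sum(np.log1p((x - t) ** 2))        # negative log-likelihood + const
    theta_hat = minimize_scalar(nll, bounds=(x.min(), x.max()), method="bounded").x
    print(f"J(theta_hat) = {observed_info(theta_hat, x):6.2f}   n*I(theta) = {n / 2:.1f}")
```

The expected information is the same number for every sample, while the observed information tracks how sharply a particular sample pins down the location parameter.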
However, when the construction of confidence intervals is the primary focus, some reported findings favor the expected information over the observed counterpart. Yuan and Spall showed that the expected information outperforms the observed counterpart for confidence-interval construction of scalar parameters in the mean squared error sense.[5] This finding was later generalized to multiparameter cases, although the claim was weakened to the expected information matrix performing at least as well as the observed information matrix.[6]
See also
- Fisher information matrix
- Fisher information metric
References
- ↑ Dodge, Y. (2003) The Oxford Dictionary of Statistical Terms, OUP. ISBN 0-19-920613-9
- ↑ Gelman, Andrew; Carlin, John; Stern, Hal; Dunson, David; Vehtari, Aki; Rubin, Donald (2014). Bayesian Data Analysis (3rd ed.). p. 84. http://www.stat.columbia.edu/~gelman/book/.
- ↑ "Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher Information". Biometrika 65 (3): 457–487. 1978. doi:10.1093/biomet/65.3.457.
- ↑ Lindsay, Bruce G.; Li, Bing (1 October 1997). "On second-order optimality of the observed Fisher information". The Annals of Statistics 25 (5). doi:10.1214/aos/1069362393.
- ↑ Yuan, Xiangyu; Spall, James C. (July 2020). "Confidence Intervals with Expected and Observed Fisher Information in the Scalar Case". 2020 American Control Conference (ACC). pp. 2599–2604. doi:10.23919/ACC45564.2020.9147324. ISBN 978-1-5386-8266-1.
- ↑ Jiang, Sihang; Spall, James C. (24 March 2021). "Comparison between Expected and Observed Fisher Information in Interval Estimation". 2021 55th Annual Conference on Information Sciences and Systems (CISS). pp. 1–6. doi:10.1109/CISS50987.2021.9400253. ISBN 978-1-6654-1268-1.
Original source: https://en.wikipedia.org/wiki/Observed_information.