Precision (statistics)

From HandWiki
Short description: Reciprocal of the statistical variance


In statistics, the precision matrix or concentration matrix is the matrix inverse of the covariance matrix or dispersion matrix, [math]\displaystyle{ P = \Sigma^{-1} }[/math].[1][2][3] For univariate distributions, the precision matrix degenerates into a scalar precision, defined as the reciprocal of the variance, [math]\displaystyle{ p = \frac{1}{\sigma^2} }[/math].[4]
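These definitions can be sketched numerically with NumPy (the covariance values below are made up purely for illustration):

```python
import numpy as np

# An invertible 2x2 covariance matrix (hypothetical values)
Sigma = np.array([[4.0, 1.2],
                  [1.2, 9.0]])

# The precision (concentration) matrix is its matrix inverse
P = np.linalg.inv(Sigma)

# Univariate case: the precision degenerates to the reciprocal of the variance
sigma2 = 4.0
p = 1.0 / sigma2  # 0.25
```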

Other summary statistics of statistical dispersion also called precision (or imprecision[5][6]) include the reciprocal of the standard deviation, [math]\displaystyle{ p = \frac{1}{\sigma} }[/math];[3] the standard deviation itself and the relative standard deviation;[7] as well as the standard error[8] and the confidence interval (or its half-width, the margin of error).[9]
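The scalar dispersion measures listed above can be computed from a sample as follows (the data and the 1.96 normal quantile for an approximate 95% interval are assumed for illustration):

```python
import numpy as np

# A small made-up sample
x = np.array([9.8, 10.1, 10.0, 9.9, 10.3, 10.0])
n = x.size

sd = x.std(ddof=1)            # sample standard deviation
recip_sd = 1.0 / sd           # reciprocal of the standard deviation
rsd = sd / x.mean()           # relative standard deviation
se = sd / np.sqrt(n)          # standard error of the mean
margin = 1.96 * se            # approx. 95% margin of error (normal approximation)
```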

Usage

One particular use of the precision matrix is in the context of Bayesian analysis of the multivariate normal distribution: for example, Bernardo & Smith prefer to parameterise the multivariate normal distribution in terms of the precision matrix, rather than the covariance matrix, because of certain simplifications that then arise.[10] For instance, if both the prior and the likelihood have Gaussian form, and the precision matrices of both exist (because their covariance matrices are of full rank and thus invertible), then the precision matrix of the posterior is simply the sum of the precision matrices of the prior and the likelihood.
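The additivity of precisions can be checked numerically; in this sketch the two covariance matrices are hypothetical values chosen only to be symmetric positive definite:

```python
import numpy as np

# Made-up prior and likelihood covariance matrices (both invertible)
Sigma_prior = np.array([[2.0,  0.3],
                        [0.3,  1.0]])
Sigma_lik   = np.array([[1.0, -0.2],
                        [-0.2, 0.5]])

P_prior = np.linalg.inv(Sigma_prior)
P_lik   = np.linalg.inv(Sigma_lik)

# Posterior precision is the sum of the prior and likelihood precisions
P_post = P_prior + P_lik

# Equivalent covariance-form expression for the posterior covariance
Sigma_post = np.linalg.inv(np.linalg.inv(Sigma_prior) + np.linalg.inv(Sigma_lik))
```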

Because the covariance matrix of real-valued random variables is symmetric and positive semi-definite, the precision matrix, when it exists, is itself symmetric and positive definite.

Another reason the precision matrix may be useful is that if two dimensions [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] of a multivariate normal are conditionally independent, then the [math]\displaystyle{ ij }[/math] and [math]\displaystyle{ ji }[/math] elements of the precision matrix are [math]\displaystyle{ 0 }[/math]. This means that precision matrices tend to be sparse when many of the dimensions are conditionally independent, which can lead to computational efficiencies when working with them. It also means that precision matrices are closely related to the idea of partial correlation.
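The sparsity pattern and its link to partial correlation can be illustrated with a three-variable Gaussian Markov chain (the precision entries below are made-up values):

```python
import numpy as np

# Precision matrix of a chain X1 - X2 - X3: X1 and X3 are conditionally
# independent given X2, so the (1,3) and (3,1) entries are exactly zero.
P = np.array([[ 2.0, -0.8,  0.0],
              [-0.8,  2.5, -0.6],
              [ 0.0, -0.6,  1.5]])

# The covariance matrix is nevertheless dense: X1 and X3 are still
# marginally correlated, through X2.
Sigma = np.linalg.inv(P)

# Partial correlations come directly from the precision matrix:
# rho_{ij | rest} = -P_ij / sqrt(P_ii * P_jj)
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)
np.fill_diagonal(partial, 1.0)
```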

The precision matrix plays a central role in generalized least squares, compared to ordinary least squares, where [math]\displaystyle{ P }[/math] is the identity matrix, and to weighted least squares, where [math]\displaystyle{ P }[/math] is diagonal (the weight matrix).
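A minimal sketch of this unifying view, on simulated data (the design matrix, true coefficients, and weights are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear regression: y = 1 + 2*x + noise
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

def gls(X, y, P):
    """Generalized least squares: beta = (X' P X)^{-1} X' P y."""
    return np.linalg.solve(X.T @ P @ X, X.T @ P @ y)

beta_I = gls(X, y, np.eye(n))            # P = I reduces GLS to OLS
W = np.diag(rng.uniform(0.5, 2.0, n))    # diagonal P gives weighted LS
beta_W = gls(X, y, W)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```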

Etymology

The term precision in this sense ("mensura praecisionis observationum") first appeared in the work of Gauss (1809), "Theoria motus corporum coelestium in sectionibus conicis solem ambientium" (page 212). Gauss's definition differs from the modern one by a factor of [math]\displaystyle{ \sqrt2 }[/math]. He writes, for the density function of a normal distribution with precision [math]\displaystyle{ h }[/math] (the reciprocal of [math]\displaystyle{ \sigma\sqrt 2 }[/math]),

[math]\displaystyle{ \varphi\Delta = \frac h {\sqrt \pi}\, e^{-hh\Delta\Delta} . }[/math]

where [math]\displaystyle{ hh = h^2 }[/math] in Gauss's notation (see: Exponentiation). Whittaker & Robinson (1924), in "Calculus of observations", later called this quantity the modulus (of precision), but that term has since dropped out of use.[11]
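The factor of [math]\displaystyle{ \sqrt 2 }[/math] can be read off by matching Gauss's density against the modern form of the normal density:

[math]\displaystyle{ \frac h {\sqrt \pi}\, e^{-h^2\Delta^2} = \frac 1 {\sigma\sqrt{2\pi}}\, e^{-\Delta^2/(2\sigma^2)} \quad\Longrightarrow\quad h^2 = \frac 1 {2\sigma^2}, \qquad h = \frac 1 {\sigma\sqrt 2} . }[/math]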

References

  1. DeGroot, Morris H. (1969). Optimal Statistical Decisions. New York: McGraw-Hill. p. 56. 
  2. Davidson, Russell; MacKinnon, James G. (1993). Estimation and Inference in Econometrics. New York: Oxford University Press. p. 144. ISBN 0-19-506011-3. https://books.google.com/books?id=Ot6DByCF6osC&pg=PA144. 
  3. 3.0 3.1 Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. Oxford University Press. ISBN 0-19-920613-9. https://archive.org/details/oxforddictionary0000unse. 
  4. Bolstad, W.M.; Curran, J.M. (2016). Introduction to Bayesian Statistics. Wiley. p. 221. ISBN 978-1-118-59315-8. https://books.google.com/books?id=6GjmDAAAQBAJ&pg=PA221. Retrieved 2022-08-13. 
  5. Natrella, M.G. (2013). Experimental Statistics. Dover Books on Mathematics. Dover Publications. p. 21-PA14. ISBN 978-0-486-15455-8. https://books.google.com/books?id=eWBzIn9r1ZUC&pg=SA21-PA14. Retrieved 2022-08-14. 
  6. Balakrishnan, N. (2009). Methods and Applications of Statistics in the Life and Health Sciences. Methods and Applications of Statistics. Wiley. p. 537. ISBN 978-0-470-40509-3. https://books.google.com/books?id=ZkfeQu8wnHgC&pg=PA537. Retrieved 2022-08-14. 
  7. Ellison, S.L.R.; Farrant, T.J.; Barwick, V. (2009). Practical Statistics for the Analytical Scientist: A Bench Guide. Valid Analytical Measurement. Royal Society of Chemistry. p. 145. ISBN 978-0-85404-131-2. https://books.google.com/books?id=buXUWfo7gngC&pg=PA145. Retrieved 2022-08-14. 
  8. Wilburn, A.J. (1984). Practical Statistical Sampling for Auditors. Statistics: A Series of Textbooks and Monographs. Taylor & Francis. p. 62. ISBN 978-0-8247-7124-9. https://books.google.com/books?id=V0WETIr0KSEC&pg=PA62. Retrieved 2022-08-14. 
  9. Cumming, G. (2013). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. Multivariate Applications Series. Taylor & Francis. p. 366. ISBN 978-1-136-65918-8. https://books.google.com/books?id=fgqR3aBtuVkC&pg=PT366. Retrieved 2022-08-14. 
  10. Bernardo, J. M.; Smith, A.F.M. (2000). Bayesian Theory. Wiley. ISBN 0-471-49464-X.
  11. "Earliest known uses of some of the words in mathematics". http://jeff560.tripod.com/m.html.