Jeffreys prior
In Bayesian probability, the Jeffreys prior, named after Sir Harold Jeffreys,[1] is a non-informative prior distribution for a parameter space; its density function is proportional to the square root of the determinant of the Fisher information matrix:
- [math]\displaystyle{ p\left(\vec\theta\right) \propto \sqrt{\det \mathcal{I}\left(\vec\theta\right)}.\, }[/math]
It has the key feature that it is invariant under a change of coordinates for the parameter vector [math]\displaystyle{ \vec\theta }[/math]. That is, the relative probability assigned to a volume of a probability space using a Jeffreys prior will be the same regardless of the parameterization used to define the Jeffreys prior. This makes it of special interest for use with scale parameters.[2] As a concrete example, a Bernoulli distribution can be parametrized by the probability of occurrence [math]\displaystyle{ p }[/math], or by the odds ratio. A naive uniform prior in this case is not invariant to this reparametrization, but the Jeffreys prior is.
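As a rough numerical check of this invariance (a minimal sketch in Python; the helper names and the particular parameter value are illustrative, not taken from any cited source), one can compare the unnormalized Jeffreys density of a single Bernoulli trial in the probability parametrization with the one in the odds parametrization and confirm that they differ exactly by the change-of-variables factor:

```python
import numpy as np

def fisher_info_prob(p):
    # Fisher information of one Bernoulli trial in the probability parametrization:
    # I(p) = 1 / (p (1 - p))
    return 1.0 / (p * (1.0 - p))

def fisher_info_odds(psi):
    # Odds parametrization psi = p / (1 - p), so p = psi / (1 + psi) and
    # dp/dpsi = 1 / (1 + psi)^2; the information transforms as I(psi) = I(p) (dp/dpsi)^2
    p = psi / (1.0 + psi)
    return fisher_info_prob(p) / (1.0 + psi) ** 4

p = 0.3                        # illustrative value
psi = p / (1.0 - p)
dp_dpsi = 1.0 / (1.0 + psi) ** 2

jeffreys_p = np.sqrt(fisher_info_prob(p))      # unnormalized Jeffreys density in p
jeffreys_psi = np.sqrt(fisher_info_odds(psi))  # unnormalized Jeffreys density in psi

# Change of variables: p_psi(psi) should equal p_p(p) * |dp/dpsi|;
# a flat prior in p would fail this check.
print(jeffreys_psi, jeffreys_p * dp_dpsi)
```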
In maximum likelihood estimation of exponential family models, penalty terms based on the Jeffreys prior were shown to reduce asymptotic bias in point estimates.[3][4]
Reparameterization
One-parameter case
If [math]\displaystyle{ \theta }[/math] and [math]\displaystyle{ \varphi }[/math] are two possible parametrizations of a statistical model, and [math]\displaystyle{ \theta }[/math] is a continuously differentiable function of [math]\displaystyle{ \varphi }[/math], we say that the prior [math]\displaystyle{ p_\theta(\theta) }[/math] is "invariant" under a reparametrization if
- [math]\displaystyle{ p_\varphi(\varphi) = p_\theta(\theta) \left|\frac{d\theta}{d\varphi}\right|, }[/math]
that is, if the priors [math]\displaystyle{ p_\theta(\theta) }[/math] and [math]\displaystyle{ p_\varphi(\varphi) }[/math] are related by the usual change of variables theorem.
Since the Fisher information transforms under reparametrization as
- [math]\displaystyle{ I_\varphi(\varphi) = I_\theta(\theta) \left( \frac{d\theta}{d\varphi} \right)^2, }[/math]
defining the priors as [math]\displaystyle{ p_\varphi(\varphi) \propto \sqrt{I_\varphi(\varphi)} }[/math] and [math]\displaystyle{ p_\theta(\theta) \propto \sqrt{I_\theta(\theta)} }[/math] gives us the desired "invariance".[5]
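As a sketch of how this works in a concrete model (assuming SymPy is available; the exponential distribution is chosen purely for illustration), one can derive the Fisher information in the rate parametrization symbolically and check that it transforms with the squared derivative when passing to the mean parametrization:

```python
import sympy as sp

lam, mu, x = sp.symbols('lambda mu x', positive=True)

# Exponential model in the rate parametrization: f(x | lambda) = lambda * exp(-lambda * x)
score = sp.diff(sp.log(lam) - lam * x, lam)        # d/dlambda log f
I_rate = sp.simplify(sp.integrate(score**2 * lam * sp.exp(-lam * x), (x, 0, sp.oo)))
print(I_rate)      # 1/lambda**2, so the Jeffreys prior is proportional to 1/lambda

# Mean parametrization mu = 1/lambda, i.e. theta = lambda = 1/mu as a function of phi = mu
I_mean = sp.simplify(I_rate.subs(lam, 1 / mu) * sp.diff(1 / mu, mu)**2)
print(I_mean)      # 1/mu**2, so the Jeffreys prior in mu is proportional to 1/mu

# Consistency check: p_mu(mu) = p_lambda(1/mu) * |d lambda / d mu| = mu * (1/mu**2) = 1/mu
```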
Multiple-parameter case
Analogous to the one-parameter case, let [math]\displaystyle{ \vec\theta }[/math] and [math]\displaystyle{ \vec\varphi }[/math] be two possible parametrizations of a statistical model, with [math]\displaystyle{ \vec\theta }[/math] a continuously differentiable function of [math]\displaystyle{ \vec\varphi }[/math]. We call the prior [math]\displaystyle{ p_\theta(\vec\theta) }[/math] "invariant" under reparametrization if
- [math]\displaystyle{ p_\varphi(\vec\varphi) = p_\theta(\vec\theta) \left|\det J\right|, }[/math]
where [math]\displaystyle{ J }[/math] is the Jacobian matrix with entries
- [math]\displaystyle{ J_{ij} = \frac {\partial \theta_i}{\partial \varphi_j}. }[/math]
Since the Fisher information matrix transforms under reparametrization as
- [math]\displaystyle{ I_\varphi(\vec\varphi) = J^T I_\theta(\vec\theta) J, }[/math]
we have that
- [math]\displaystyle{ \det I_\varphi(\vec\varphi) = \det I_\theta(\vec\theta) (\det J)^2 }[/math]
and thus defining the priors as [math]\displaystyle{ p_\varphi(\vec\varphi) \propto \sqrt{\det I_\varphi(\vec\varphi)} }[/math] and [math]\displaystyle{ p_\theta(\vec\theta) \propto \sqrt{\det I_\theta(\vec\theta)} }[/math] gives us the desired "invariance".
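A small numerical check of the determinant identity (a sketch only; the closed-form Fisher information matrices for the Gaussian are standard results assumed here, and the parameter value is arbitrary), using the Gaussian model parametrized by [math]\displaystyle{ (\mu, \sigma) }[/math] versus [math]\displaystyle{ (\mu, \sigma^2) }[/math]:

```python
import numpy as np

def info_mu_sigma(sigma):
    # Fisher information matrix of N(mu, sigma^2) in the (mu, sigma) parametrization
    return np.diag([1.0 / sigma**2, 2.0 / sigma**2])

def info_mu_var(v):
    # Fisher information matrix of N(mu, v) in the (mu, v) parametrization, with v = sigma^2
    return np.diag([1.0 / v, 1.0 / (2.0 * v**2)])

sigma = 1.3
v = sigma**2

# Jacobian of theta = (mu, sigma) with respect to phi = (mu, v): d(sigma)/d(v) = 1 / (2 sqrt(v))
J = np.diag([1.0, 1.0 / (2.0 * np.sqrt(v))])

lhs = np.linalg.det(info_mu_var(v))
rhs = np.linalg.det(info_mu_sigma(sigma)) * np.linalg.det(J) ** 2
print(lhs, rhs)   # the two determinants agree, so sqrt(det I) picks up exactly |det J|
```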
Attributes
From a practical and mathematical standpoint, a valid reason to use this non-informative prior instead of others, like those obtained through a limit in conjugate families of distributions, is that the relative probability of a volume of the probability space does not depend on the set of parameter variables chosen to describe the parameter space.
Sometimes the Jeffreys prior cannot be normalized, and is thus an improper prior. For example, the Jeffreys prior for the distribution mean is uniform over the entire real line in the case of a Gaussian distribution of known variance.
Use of the Jeffreys prior violates the strong version of the likelihood principle, which is accepted by many, but by no means all, statisticians. When using the Jeffreys prior, inferences about [math]\displaystyle{ \vec\theta }[/math] depend not just on the probability of the observed data as a function of [math]\displaystyle{ \vec\theta }[/math], but also on the universe of all possible experimental outcomes, as determined by the experimental design, because the Fisher information is computed from an expectation over the chosen universe. Accordingly, the Jeffreys prior, and hence the inferences made using it, may be different for two experiments involving the same [math]\displaystyle{ \vec\theta }[/math] parameter even when the likelihood functions for the two experiments are the same—a violation of the strong likelihood principle.
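A standard illustration of this point (not taken from the cited sources) compares binomial and negative binomial sampling of the same success probability [math]\displaystyle{ \gamma }[/math]. If the number of trials [math]\displaystyle{ n }[/math] is fixed in advance, the Fisher information is [math]\displaystyle{ I(\gamma) = n/\bigl(\gamma(1-\gamma)\bigr) }[/math] and the Jeffreys prior is
- [math]\displaystyle{ p(\gamma) \propto \gamma^{-1/2}(1-\gamma)^{-1/2}, }[/math]
whereas if the number of successes [math]\displaystyle{ r }[/math] is fixed in advance (negative binomial sampling), the Fisher information is [math]\displaystyle{ I(\gamma) = r/\bigl(\gamma^2(1-\gamma)\bigr) }[/math] and the Jeffreys prior is
- [math]\displaystyle{ p(\gamma) \propto \gamma^{-1}(1-\gamma)^{-1/2}, }[/math]
even though the two experiments can produce proportional likelihood functions for the same observed data.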
Minimum description length
In the minimum description length approach to statistics the goal is to describe data as compactly as possible, where the length of a description is measured in bits of the code used. For a parametric family of distributions one compares a code with the best code based on one of the distributions in the parameterized family. The main result is that in exponential families, asymptotically for large sample size, the code based on the distribution that is a mixture of the elements in the exponential family with the Jeffreys prior is optimal. This result holds if one restricts the parameter set to a compact subset in the interior of the full parameter space[citation needed]. If the full parameter space is used, a modified version of the result should be used.
Examples
The Jeffreys prior for a parameter (or a set of parameters) depends upon the statistical model.
Gaussian distribution with mean parameter
For the Gaussian distribution of the real value [math]\displaystyle{ x }[/math]
- [math]\displaystyle{ f(x\mid\mu) = \frac{e^{-(x - \mu)^2 / 2\sigma^2}}{\sqrt{2 \pi \sigma^2}} }[/math]
with [math]\displaystyle{ \sigma }[/math] fixed, the Jeffreys prior for the mean [math]\displaystyle{ \mu }[/math] is
- [math]\displaystyle{ \begin{align} p(\mu) & \propto \sqrt{I(\mu)} = \sqrt{\operatorname{E}\!\left[ \left( \frac{d}{d\mu} \log f(x\mid\mu) \right)^2\right]} = \sqrt{\operatorname{E}\!\left[ \left( \frac{x - \mu}{\sigma^2} \right)^2 \right]} \\ & = \sqrt{\int_{-\infty}^{+\infty} f(x\mid\mu) \left(\frac{x-\mu}{\sigma^2}\right)^2 dx} = \sqrt{\sigma^2/\sigma^4} \propto 1.\end{align} }[/math]
That is, the Jeffreys prior for [math]\displaystyle{ \mu }[/math] does not depend upon [math]\displaystyle{ \mu }[/math]; it is the unnormalized uniform distribution on the real line — the distribution that is 1 (or some other fixed constant) for all points. This is an improper prior, and is, up to the choice of constant, the unique translation-invariant distribution on the reals (the Haar measure with respect to addition of reals), corresponding to the mean being a measure of location and translation-invariance corresponding to no information about location.
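A quick Monte Carlo check of this calculation (a sketch assuming NumPy; the sample size and parameter values are arbitrary) confirms that the expected squared score does not depend on [math]\displaystyle{ \mu }[/math]:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0

for mu in (-3.0, 0.0, 5.0):
    x = rng.normal(mu, sigma, size=1_000_000)
    score = (x - mu) / sigma**2                     # d/dmu log f(x | mu)
    print(mu, (score**2).mean(), 1.0 / sigma**2)    # estimated vs exact I(mu) = 1/sigma^2
```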
Gaussian distribution with standard deviation parameter
For the Gaussian distribution of the real value [math]\displaystyle{ x }[/math]
- [math]\displaystyle{ f(x\mid\sigma) = \frac{e^{-(x - \mu)^2 / 2 \sigma^2}}{\sqrt{2 \pi \sigma^2}}, }[/math]
with [math]\displaystyle{ \mu }[/math] fixed, the Jeffreys prior for the standard deviation [math]\displaystyle{ \sigma \gt 0 }[/math] is
- [math]\displaystyle{ \begin{align}p(\sigma) & \propto \sqrt{I(\sigma)} = \sqrt{\operatorname{E}\!\left[ \left( \frac{d}{d\sigma} \log f(x\mid\sigma) \right)^2\right]} = \sqrt{\operatorname{E}\!\left[ \left( \frac{(x - \mu)^2-\sigma^2}{\sigma^3} \right)^2 \right]} \\ & = \sqrt{\int_{-\infty}^{+\infty} f(x\mid\sigma)\left(\frac{(x-\mu)^2-\sigma^2}{\sigma^3}\right)^2 dx} = \sqrt{\frac{2}{\sigma^2}} \propto \frac{1}{\sigma}. \end{align} }[/math]
Equivalently, the Jeffreys prior for [math]\displaystyle{ \log \sigma = \int d\sigma/\sigma }[/math] is the unnormalized uniform distribution on the real line, and thus this distribution is also known as the logarithmic prior. Similarly, the Jeffreys prior for [math]\displaystyle{ \log \sigma^2 = 2 \log \sigma }[/math] is also uniform. It is the unique (up to a multiple) prior (on the positive reals) that is scale-invariant (the Haar measure with respect to multiplication of positive reals), corresponding to the standard deviation being a measure of scale and scale-invariance corresponding to no information about scale. As with the uniform distribution on the reals, it is an improper prior.
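The same kind of Monte Carlo sketch (assuming NumPy; the values are arbitrary) recovers [math]\displaystyle{ I(\sigma) = 2/\sigma^2 }[/math], whose square root gives the [math]\displaystyle{ 1/\sigma }[/math] prior:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 0.0

for sigma in (0.5, 1.0, 3.0):
    x = rng.normal(mu, sigma, size=1_000_000)
    score = ((x - mu)**2 - sigma**2) / sigma**3     # d/dsigma log f(x | sigma)
    print(sigma, (score**2).mean(), 2.0 / sigma**2) # estimated vs exact I(sigma) = 2/sigma^2
```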
Poisson distribution with rate parameter
For the Poisson distribution of the non-negative integer [math]\displaystyle{ n }[/math],
- [math]\displaystyle{ f(n \mid \lambda) = e^{-\lambda}\frac{\lambda^n}{n!}, }[/math]
the Jeffreys prior for the rate parameter [math]\displaystyle{ \lambda \ge 0 }[/math] is
- [math]\displaystyle{ \begin{align}p(\lambda) &\propto \sqrt{I(\lambda)} = \sqrt{\operatorname{E}\!\left[ \left( \frac{d}{d\lambda} \log f(n\mid\lambda) \right)^2\right]} = \sqrt{\operatorname{E}\!\left[ \left( \frac{n-\lambda}{\lambda} \right)^2\right]} \\ & = \sqrt{\sum_{n=0}^{+\infty} f(n\mid\lambda) \left( \frac{n-\lambda}{\lambda} \right)^2} = \sqrt{\frac{1}{\lambda}}.\end{align} }[/math]
Equivalently, the Jeffreys prior for [math]\displaystyle{ \sqrt\lambda \propto \int d\lambda/\sqrt\lambda }[/math] is the unnormalized uniform distribution on the non-negative real line.
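An analogous Monte Carlo sketch (assuming NumPy; the rate values are arbitrary) recovers [math]\displaystyle{ I(\lambda) = 1/\lambda }[/math]:

```python
import numpy as np

rng = np.random.default_rng(2)

for lam in (0.5, 2.0, 10.0):
    n = rng.poisson(lam, size=1_000_000)
    score = (n - lam) / lam                         # d/dlambda log f(n | lambda)
    print(lam, (score**2).mean(), 1.0 / lam)        # estimated vs exact I(lambda) = 1/lambda
```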
Bernoulli trial
For a coin that is "heads" with probability [math]\displaystyle{ \gamma \in [0,1] }[/math] and is "tails" with probability [math]\displaystyle{ 1 - \gamma }[/math], for a given [math]\displaystyle{ (H,T) \in \{(0,1), (1,0)\} }[/math] the probability is [math]\displaystyle{ \gamma^H (1-\gamma)^T }[/math]. The Jeffreys prior for the parameter [math]\displaystyle{ \gamma }[/math] is
- [math]\displaystyle{ \begin{align}p(\gamma) & \propto \sqrt{I(\gamma)} = \sqrt{\operatorname{E}\!\left[ \left( \frac{d}{d\gamma} \log f(x\mid\gamma) \right)^2\right]} = \sqrt{\operatorname{E}\!\left[ \left( \frac{H}{\gamma} - \frac{T}{1-\gamma}\right)^2 \right]} \\ & = \sqrt{\gamma \left( \frac{1}{\gamma} - \frac{0}{1-\gamma}\right)^2 + (1-\gamma)\left( \frac{0}{\gamma} - \frac{1}{1-\gamma}\right)^2} = \frac{1}{\sqrt{\gamma(1-\gamma)}}\,.\end{align} }[/math]
This is the arcsine distribution and is a beta distribution with [math]\displaystyle{ \alpha = \beta = 1/2 }[/math]. Furthermore, if [math]\displaystyle{ \gamma = \sin^2(\theta) }[/math] then
- [math]\displaystyle{ \Pr[\theta] = \Pr[\gamma] \frac{d\gamma}{d\theta} \propto \frac{1}{\sqrt{(\sin^2 \theta) (1 - \sin^2 \theta)}} ~2 \sin \theta \cos \theta =2\,. }[/math]
That is, the Jeffreys prior for [math]\displaystyle{ \theta }[/math] is uniform in the interval [math]\displaystyle{ [0, \pi / 2] }[/math]. Equivalently, [math]\displaystyle{ \theta }[/math] is uniform on the whole circle [math]\displaystyle{ [0, 2 \pi] }[/math].
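This change of variables can also be checked by simulation (a sketch assuming NumPy and SciPy are available): samples from the Beta(1/2, 1/2) Jeffreys prior, pushed through [math]\displaystyle{ \theta = \arcsin\sqrt\gamma }[/math], should be indistinguishable from a uniform sample on [math]\displaystyle{ [0, \pi/2] }[/math]:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

gamma = rng.beta(0.5, 0.5, size=100_000)   # draws from the Jeffreys prior Beta(1/2, 1/2)
theta = np.arcsin(np.sqrt(gamma))          # inverse of gamma = sin(theta)^2

# Kolmogorov-Smirnov test against Uniform(0, pi/2); a large p-value means no detectable deviation
print(stats.kstest(theta, stats.uniform(loc=0.0, scale=np.pi / 2).cdf))
```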
N-sided die with biased probabilities
Similarly, for a throw of an [math]\displaystyle{ N }[/math]-sided die with outcome probabilities [math]\displaystyle{ \vec{\gamma} = (\gamma_1, \ldots, \gamma_N) }[/math], each non-negative and satisfying [math]\displaystyle{ \sum_{i=1}^N \gamma_i = 1 }[/math], the Jeffreys prior for [math]\displaystyle{ \vec{\gamma} }[/math] is the Dirichlet distribution with all (alpha) parameters set to one half. This amounts to using a pseudocount of one half for each possible outcome.
Equivalently, if we write [math]\displaystyle{ \gamma_i = \varphi_i^2 }[/math] for each [math]\displaystyle{ i }[/math], then the Jeffreys prior for [math]\displaystyle{ \vec{\varphi} }[/math] is uniform on the (N − 1)-dimensional unit sphere (i.e., it is uniform on the surface of an N-dimensional unit ball), restricted to the orthant where all coordinates are non-negative.
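In practice the pseudocount interpretation means that, with the Jeffreys prior, a multinomial posterior is simply a Dirichlet distribution whose parameters are the observed counts plus one half each (a minimal sketch; the counts below are made up for illustration):

```python
import numpy as np

counts = np.array([12, 7, 0, 3])        # hypothetical counts for a 4-sided die

# Dirichlet(1/2, ..., 1/2) prior combined with a multinomial likelihood
# gives a Dirichlet(counts + 1/2) posterior: a pseudocount of 1/2 per outcome.
posterior_alpha = counts + 0.5
posterior_mean = posterior_alpha / posterior_alpha.sum()
print(posterior_mean)                   # the zero-count face still receives positive probability
```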
Generalizations
Probability-matching prior
In 1963, Welch and Peers showed that for a scalar parameter θ the Jeffreys prior is "probability-matching" in the sense that posterior predictive probabilities agree with frequentist probabilities, and credible intervals of a chosen width coincide with frequentist confidence intervals.[6] In a follow-up, Peers showed that this was not true for the multi-parameter case,[7] which instead leads to the notion of probability-matching priors, defined only implicitly as the probability distribution solving a certain partial differential equation involving the Fisher information.[8]
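One way to see the matching property empirically (a rough sketch assuming NumPy and SciPy; the sample size, true parameter, and nominal level are arbitrary) is the well-known Jeffreys interval for a binomial proportion, whose frequentist coverage stays close to its nominal credibility:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, gamma_true = 50, 0.3

x = rng.binomial(n, gamma_true, size=20_000)
# With the Jeffreys prior, the posterior is Beta(x + 1/2, n - x + 1/2);
# the equal-tailed 95% credible interval is the "Jeffreys interval".
lo = stats.beta.ppf(0.025, x + 0.5, n - x + 0.5)
hi = stats.beta.ppf(0.975, x + 0.5, n - x + 0.5)
print(np.mean((lo <= gamma_true) & (gamma_true <= hi)))   # frequentist coverage close to 0.95
```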
α-parallel prior
Using tools from information geometry, the Jeffreys prior can be generalized to obtain priors that encode geometric information of the statistical model and remain invariant under a change of parameter coordinates.[9] A special case, the so-called Weyl prior, is defined as a volume form on a Weyl manifold.[10]
References
- ↑ "An invariant form for the prior probability in estimation problems". Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 186 (1007): 453–461. 1946. doi:10.1098/rspa.1946.0056. PMID 20998741. Bibcode: 1946RSPSA.186..453J.
- ↑ "Prior probabilities.". IEEE Transactions on Systems Science and Cybernetics 4 (3): 227–241. September 1968. doi:10.1109/TSSC.1968.300117. http://bayes.wustl.edu/etj/articles/prior.pdf.
- ↑ Firth, David (1992). "Bias reduction, the Jeffreys prior and GLIM". in Fahrmeir, Ludwig; Francis, Brian; Gilchrist, Robert et al.. Advances in GLIM and Statistical Modelling. New York: Springer. pp. 91–100. doi:10.1007/978-1-4612-2952-0_15. ISBN 0-387-97873-9.
- ↑ Magis, David (2015). "A Note on Weighted Likelihood and Jeffreys Modal Estimation of Proficiency Levels in Polytomous Item Response Models". Psychometrika 80: 200–204. doi:10.1007/s11336-013-9378-5.
- ↑ "Harold Jeffreys's Theory of Probability Revisited". Statistical Science 24 (2). 2009. doi:10.1214/09-STS284.
- ↑ Welch, B. L.; Peers, H. W. (1963). "On Formulae for Confidence Points Based on Integrals of Weighted Likelihoods". Journal of the Royal Statistical Society. Series B (Methodological) 25 (2): 318–329. doi:10.1111/j.2517-6161.1963.tb00512.x.
- ↑ Peers, H. W. (1965). "On Confidence Points and Bayesian Probability Points in the Case of Several Parameters". Journal of the Royal Statistical Society. Series B (Methodological) 27 (1): 9–16. doi:10.1111/j.2517-6161.1965.tb00581.x.
- ↑ Scricciolo, Catia (1999). "Probability matching priors: a review". Journal of the Italian Statistical Society 8: 83. doi:10.1007/BF03178943.
- ↑ Takeuchi, J.; Amari, S. (2005). "α-parallel prior and its properties". IEEE Transactions on Information Theory 51 (3): 1011–1023. doi:10.1109/TIT.2004.842703.
- ↑ Jiang, Ruichao; Tavakoli, Javad; Zhao, Yiqiang (2020). "Weyl Prior and Bayesian Statistics". Entropy 22 (4): 467. doi:10.3390/e22040467.
Further reading
- "The Selection of Prior Distributions by Formal Rules". Journal of the American Statistical Association 91 (435): 1343–1370. 1996. doi:10.1080/01621459.1996.10477003.
- Lee, Peter M. (2012). "Jeffreys’ rule". Bayesian Statistics: An Introduction (4th ed.). Wiley. pp. 96–102. ISBN 978-1-118-33257-3.