Posterior predictive distribution

Short description: Distribution of new data marginalized over the posterior

In Bayesian statistics, the posterior predictive distribution is the distribution of possible unobserved values conditional on the observed values.[1][2]

Given a set of N i.i.d. observations [math]\displaystyle{ \mathbf{X} = \{x_1, \dots, x_N\} }[/math], a new value [math]\displaystyle{ \tilde{x} }[/math] will be drawn from a distribution that depends on a parameter [math]\displaystyle{ \theta \in \Theta }[/math], where [math]\displaystyle{ \Theta }[/math] is the parameter space:

[math]\displaystyle{ p(\tilde{x}|\theta) }[/math]

It may seem tempting to plug in a single best estimate [math]\displaystyle{ \hat{\theta} }[/math] for [math]\displaystyle{ \theta }[/math], but this ignores uncertainty about [math]\displaystyle{ \theta }[/math], and because that source of uncertainty is ignored, the predictive distribution will be too narrow. Put another way, extreme values of [math]\displaystyle{ \tilde{x} }[/math] will be predicted with lower probability than they would be if the uncertainty in the parameters, as given by their posterior distribution, were accounted for.

A posterior predictive distribution accounts for uncertainty about [math]\displaystyle{ \theta }[/math]. The posterior distribution of possible [math]\displaystyle{ \theta }[/math] values depends on [math]\displaystyle{ \mathbf{X} }[/math]:

[math]\displaystyle{ p(\theta|\mathbf{X}) }[/math]

The posterior predictive distribution of [math]\displaystyle{ \tilde{x} }[/math] given [math]\displaystyle{ \mathbf{X} }[/math] is then calculated by marginalizing the distribution of [math]\displaystyle{ \tilde{x} }[/math] given [math]\displaystyle{ \theta }[/math] over the posterior distribution of [math]\displaystyle{ \theta }[/math] given [math]\displaystyle{ \mathbf{X} }[/math]:

[math]\displaystyle{ p(\tilde{x}|\mathbf{X}) = \int_{\Theta} p(\tilde{x}|\theta) \, p(\theta|\mathbf{X}) \operatorname{d}\!\theta }[/math]

Because it accounts for uncertainty about [math]\displaystyle{ \theta }[/math], the posterior predictive distribution will in general be wider than a predictive distribution which plugs in a single best estimate for [math]\displaystyle{ \theta }[/math].
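
The widening effect can be checked numerically. The following is a minimal sketch (assuming a hypothetical normal model with known variance, a conjugate normal prior on the mean, and made-up values) that approximates the posterior predictive distribution by Monte Carlo, drawing [math]\displaystyle{ \theta }[/math] from the posterior and then [math]\displaystyle{ \tilde{x} }[/math] from [math]\displaystyle{ p(\tilde{x}|\theta) }[/math], and compares its spread to the plug-in predictive:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: x_i ~ Normal(theta, sigma^2) with sigma known,
# and a conjugate Normal(mu0, tau0^2) prior on theta.
sigma, mu0, tau0 = 1.0, 0.0, 10.0
X = rng.normal(2.0, sigma, size=20)      # simulated "observed" data

# Conjugate posterior for theta is Normal(mu_n, tau_n^2).
tau_n2 = 1.0 / (1.0 / tau0**2 + len(X) / sigma**2)
mu_n = tau_n2 * (mu0 / tau0**2 + X.sum() / sigma**2)

# Posterior predictive by marginalization: draw theta from its posterior,
# then draw x_tilde from p(x_tilde | theta).
theta_draws = rng.normal(mu_n, np.sqrt(tau_n2), size=100_000)
x_tilde = rng.normal(theta_draws, sigma)

# Plug-in predictive: fix theta at a single best estimate (the posterior mean).
x_plugin = rng.normal(mu_n, sigma, size=100_000)

print("posterior predictive s.d.:", x_tilde.std())   # ~ sqrt(sigma^2 + tau_n2)
print("plug-in predictive s.d.:  ", x_plugin.std())  # ~ sigma (narrower)
</syntaxhighlight>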

Prior vs. posterior predictive distribution

The prior predictive distribution, in a Bayesian context, is the distribution of a data point marginalized over the prior distribution [math]\displaystyle{ G }[/math] of its parameter. That is, if [math]\displaystyle{ \tilde{x} \sim F(\tilde{x}|\theta) }[/math] and [math]\displaystyle{ \theta \sim G(\theta|\alpha) }[/math], then the prior predictive distribution is the corresponding distribution [math]\displaystyle{ H(\tilde{x}|\alpha) }[/math], where

[math]\displaystyle{ p_H(\tilde{x}|\alpha) = \int_{\theta} p_F(\tilde{x}|\theta) \, p_G(\theta|\alpha) \operatorname{d}\!\theta }[/math]

This is similar to the posterior predictive distribution except that the marginalization (or equivalently, expectation) is taken with respect to the prior distribution instead of the posterior distribution.

Furthermore, if the prior distribution [math]\displaystyle{ G(\theta|\alpha) }[/math] is a conjugate prior, then the posterior predictive distribution will belong to the same family of distributions as the prior predictive distribution. This is easy to see. If the prior distribution [math]\displaystyle{ G(\theta|\alpha) }[/math] is conjugate, then

[math]\displaystyle{ p(\theta|\mathbf{X},\alpha) = p_G(\theta|\alpha'), }[/math]

i.e. the posterior distribution belongs to the same family as [math]\displaystyle{ G(\theta|\alpha), }[/math] but simply with a different parameter [math]\displaystyle{ \alpha' }[/math] in place of the original parameter [math]\displaystyle{ \alpha . }[/math] Then,

[math]\displaystyle{ \begin{align} p(\tilde{x}|\mathbf{X},\alpha) & = \int_{\theta} p_F(\tilde{x}|\theta) \, p(\theta|\mathbf{X},\alpha) \operatorname{d}\!\theta \\ & = \int_{\theta} p_F(\tilde{x}|\theta) \, p_G(\theta|\alpha') \operatorname{d}\!\theta \\ & = p_H(\tilde{x}|\alpha') \end{align} }[/math]

Hence, the posterior predictive distribution follows the same distribution H as the prior predictive distribution, but with the posterior values of the hyperparameters substituted for the prior ones.
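
As a concrete sketch of this family-preservation property (a hypothetical beta-binomial example with made-up hyperparameters): with a Beta([math]\displaystyle{ \alpha, \beta }[/math]) prior on a Bernoulli success probability, both the prior and posterior predictive distributions of the number of successes in future trials are beta-binomial, differing only in their hyperparameters:

<syntaxhighlight lang="python">
from scipy import stats

# Hypothetical binomial model with a conjugate Beta(a, b) prior on the
# success probability.
a, b = 2.0, 2.0      # prior hyperparameters
k, n = 7, 20         # observed: k successes in n trials
m = 10               # number of future trials to predict

# Prior predictive of the number of successes in m future trials:
prior_pred = stats.betabinom(m, a, b)
# Posterior predictive: same beta-binomial family, with the updated
# hyperparameters a + k and b + (n - k) substituted for a and b:
post_pred = stats.betabinom(m, a + k, b + (n - k))

print(prior_pred.pmf(5))   # P(5 of 10 future successes) before seeing the data
print(post_pred.pmf(5))    # the same probability after observing 7/20 successes
</syntaxhighlight>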

The prior predictive distribution is in the form of a compound distribution, and in fact is often used to define a compound distribution, because of the lack of any complicating factors such as the dependence on the data [math]\displaystyle{ \mathbf{X} }[/math] and the issue of conjugacy. For example, the Student's t-distribution can be defined as the prior predictive distribution of a normal distribution with known mean [math]\displaystyle{ \mu }[/math] but unknown variance [math]\displaystyle{ \sigma_x^2 }[/math], with a conjugate scaled-inverse-chi-squared prior placed on [math]\displaystyle{ \sigma_x^2 }[/math], with hyperparameters [math]\displaystyle{ \nu }[/math] and [math]\displaystyle{ \sigma^2 }[/math]. The resulting compound distribution [math]\displaystyle{ t(x|\mu,\nu,\sigma^2) }[/math] is indeed a non-standardized Student's t-distribution, and follows one of the two most common parameterizations of this distribution. Then, the corresponding posterior predictive distribution would again be Student's t, with the updated hyperparameters [math]\displaystyle{ \nu', {\sigma^2}' }[/math] that appear in the posterior distribution also directly appearing in the posterior predictive distribution.

In some cases the appropriate compound distribution is defined using a different parameterization than the one that would be most natural for the predictive distributions in the current problem at hand. This often happens because the prior distribution used to define the compound distribution differs from the one used in the current problem. For example, as indicated above, the Student's t-distribution was defined in terms of a scaled-inverse-chi-squared distribution placed on the variance. However, it is more common to use an inverse gamma distribution as the conjugate prior in this situation. The two are in fact equivalent except for parameterization; hence, the Student's t-distribution can still be used for either predictive distribution, but the hyperparameters must be reparameterized before being plugged in.
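
The reparameterization mentioned above can be checked numerically. The sketch below (with arbitrary hyperparameter values) relies on the standard fact that a scaled-inverse-chi-squared distribution with parameters [math]\displaystyle{ \nu, \sigma^2 }[/math] is the same distribution as an inverse gamma distribution with shape [math]\displaystyle{ \nu/2 }[/math] and scale [math]\displaystyle{ \nu\sigma^2/2 }[/math], so hyperparameters stated in one convention can be translated into the other before being plugged into the predictive distribution:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nu, s2 = 5.0, 2.0    # arbitrary hyperparameters nu and sigma^2

# Scaled-inverse-chi-squared draws via nu * s2 / chi2(nu) ...
scaled_inv_chi2 = nu * s2 / rng.chisquare(nu, size=200_000)
# ... and draws from the equivalent inverse gamma parameterization.
inv_gamma = stats.invgamma(a=nu / 2, scale=nu * s2 / 2).rvs(
    size=200_000, random_state=rng)

# Both samples come from the same distribution (mean nu*s2/(nu-2) = 10/3).
print(scaled_inv_chi2.mean(), inv_gamma.mean())
print(np.median(scaled_inv_chi2), np.median(inv_gamma))
</syntaxhighlight>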

In exponential families

Most, but not all, common families of distributions are exponential families. Exponential families have a large number of useful properties. One of these is that all members have conjugate prior distributions — whereas very few other distributions have conjugate priors.

Prior predictive distribution in exponential families

Another useful property is that the probability density function of the compound distribution corresponding to the prior predictive distribution of an exponential-family distribution, marginalized over its conjugate prior distribution, can be determined analytically. Assume that [math]\displaystyle{ F(x|\boldsymbol{\theta}) }[/math] is a member of the exponential family with parameter [math]\displaystyle{ \boldsymbol{\theta} }[/math], parametrized according to the natural parameter [math]\displaystyle{ \boldsymbol{\eta} = \boldsymbol{\eta}(\boldsymbol{\theta}) }[/math], and distributed as

[math]\displaystyle{ p_F(x|\boldsymbol{\eta}) = h(x)g(\boldsymbol{\eta})e^{\boldsymbol{\eta}^{\rm T}\mathbf{T}(x)} }[/math]

while [math]\displaystyle{ G(\boldsymbol{\eta}|\boldsymbol{\chi},\nu) }[/math] is the appropriate conjugate prior, distributed as

[math]\displaystyle{ p_G(\boldsymbol{\eta}|\boldsymbol{\chi},\nu) = f(\boldsymbol{\chi},\nu)g(\boldsymbol{\eta})^\nu e^{\boldsymbol{\eta}^{\rm T}\boldsymbol{\chi}} }[/math]

Then the prior predictive distribution [math]\displaystyle{ H }[/math] (the result of compounding [math]\displaystyle{ F }[/math] with [math]\displaystyle{ G }[/math]) is

[math]\displaystyle{ \begin{align} p_H(x|\boldsymbol{\chi},\nu) &= {\displaystyle \int\limits_\boldsymbol{\eta} p_F(x|\boldsymbol{\eta}) p_G(\boldsymbol{\eta}|\boldsymbol{\chi},\nu) \,\operatorname{d}\boldsymbol{\eta}} \\ &= {\displaystyle \int\limits_\boldsymbol{\eta} h(x)g(\boldsymbol{\eta})e^{\boldsymbol{\eta}^{\rm T}\mathbf{T}(x)} f(\boldsymbol{\chi},\nu)g(\boldsymbol{\eta})^\nu e^{\boldsymbol{\eta}^{\rm T}\boldsymbol{\chi}} \,\operatorname{d}\boldsymbol{\eta}} \\ &= {\displaystyle h(x) f(\boldsymbol{\chi},\nu) \int\limits_\boldsymbol{\eta} g(\boldsymbol{\eta})^{\nu+1} e^{\boldsymbol{\eta}^{\rm T}(\boldsymbol{\chi} + \mathbf{T}(x))} \,\operatorname{d}\boldsymbol{\eta}} \\ &= h(x) \dfrac{f(\boldsymbol{\chi},\nu)}{f(\boldsymbol{\chi} + \mathbf{T}(x), \nu+1)} \end{align} }[/math]

The last line follows from the previous one by recognizing that the function inside the integral is the density function of a random variable distributed as [math]\displaystyle{ G(\boldsymbol{\eta}| \boldsymbol{\chi} + \mathbf{T}(x), \nu+1) }[/math], excluding the normalizing function [math]\displaystyle{ f(\dots)\, }[/math]. Hence the result of the integration will be the reciprocal of the normalizing function.

The above result is independent of choice of parametrization of [math]\displaystyle{ \boldsymbol{\theta} }[/math], as none of [math]\displaystyle{ \boldsymbol{\theta} }[/math], [math]\displaystyle{ \boldsymbol{\eta} }[/math] and [math]\displaystyle{ g(\dots)\, }[/math] appears. ([math]\displaystyle{ g(\dots)\, }[/math] is a function of the parameter and hence will assume different forms depending on choice of parametrization.) For standard choices of [math]\displaystyle{ F }[/math] and [math]\displaystyle{ G }[/math], it is often easier to work directly with the usual parameters rather than rewrite in terms of the natural parameters.

The reason the integral is tractable is that it involves computing the normalization constant of a density defined by the product of a prior distribution and a likelihood. When the two are conjugate, the product is a posterior distribution, and by assumption, the normalization constant of this distribution is known. As shown above, the density function of the compound distribution follows a particular form, consisting of the product of the function [math]\displaystyle{ h(x) }[/math] that forms part of the density function for [math]\displaystyle{ F }[/math], with the quotient of two forms of the normalization "constant" for [math]\displaystyle{ G }[/math], one derived from a prior distribution and the other from a posterior distribution. The beta-binomial distribution is a good example of how this process works.
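
As a brief worked illustration of this quotient form (a sketch using a single Bernoulli observation and the usual beta parameters [math]\displaystyle{ \alpha, \beta }[/math] rather than the natural parameterization): compounding a Bernoulli likelihood with a conjugate Beta([math]\displaystyle{ \alpha, \beta }[/math]) prior on its success probability [math]\displaystyle{ p }[/math] gives

[math]\displaystyle{ p_H(x|\alpha,\beta) = \int_0^1 p^x (1-p)^{1-x} \, \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\mathrm{B}(\alpha,\beta)} \operatorname{d}\!p = \frac{\mathrm{B}(\alpha+x,\, \beta+1-x)}{\mathrm{B}(\alpha,\beta)}, \qquad x \in \{0,1\}, }[/math]

which is the beta-binomial distribution with a single trial. Here [math]\displaystyle{ h(x) = 1 }[/math] and [math]\displaystyle{ \mathbf{T}(x) = x }[/math], and the prior and updated beta-function values play the role of [math]\displaystyle{ f(\boldsymbol{\chi},\nu) }[/math] and [math]\displaystyle{ f(\boldsymbol{\chi} + \mathbf{T}(x), \nu+1) }[/math] (as reciprocals, since the beta function is the reciprocal of the beta density's normalizing constant). For [math]\displaystyle{ x = 1 }[/math] this reduces to [math]\displaystyle{ \alpha/(\alpha+\beta) }[/math], the prior mean of [math]\displaystyle{ p }[/math].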

Despite the analytical tractability of such distributions, they are in themselves usually not members of the exponential family. For example, the three-parameter Student's t-distribution, beta-binomial distribution and Dirichlet-multinomial distribution are all predictive distributions of exponential-family distributions (the normal distribution, binomial distribution and multinomial distribution, respectively), but none are members of the exponential family. This can be seen above due to the presence of functional dependence on [math]\displaystyle{ \boldsymbol{\chi} + \mathbf{T}(x) }[/math]. In an exponential-family distribution, it must be possible to separate the entire density function into multiplicative factors of three types: (1) factors containing only variables, (2) factors containing only parameters, and (3) factors whose logarithm factorizes between variables and parameters. The presence of [math]\displaystyle{ \boldsymbol{\chi} + \mathbf{T}(x) }[/math] makes this impossible unless the "normalizing" function [math]\displaystyle{ f(\dots)\, }[/math] either ignores the corresponding argument entirely or uses it only in the exponent of an expression.

Posterior predictive distribution in exponential families

When a conjugate prior is being used, the posterior predictive distribution belongs to the same family as the prior predictive distribution, and is determined simply by plugging the updated hyperparameters for the posterior distribution of the parameter(s) into the formula for the prior predictive distribution. Using the general form of the posterior update equations for exponential-family distributions (see the appropriate section in the exponential family article), we can write out an explicit formula for the posterior predictive distribution:

[math]\displaystyle{ \begin{array}{lcl} p(\tilde{x}|\mathbf{X},\boldsymbol{\chi},\nu) &=& p_H\left(\tilde{x}|\boldsymbol{\chi} + \mathbf{T}( \mathbf{X}), \nu+N\right) \end{array} }[/math]

where

[math]\displaystyle{ \mathbf{T}(\mathbf{X}) = \sum_{i=1}^N \mathbf{T}(x_i) }[/math]

This shows that the posterior predictive distribution of a series of observations, in the case where the observations follow an exponential family with the appropriate conjugate prior, has the same probability density as the compound distribution, with parameters as specified above. The observations themselves enter only in the form [math]\displaystyle{ \mathbf{T}(\mathbf{X}) = \sum_{i=1}^N \mathbf{T}(x_i) . }[/math]

This is termed the sufficient statistic of the observations, because it tells us everything we need to know about the observations in order to compute a posterior or posterior predictive distribution based on them (or, for that matter, anything else based on the likelihood of the observations, such as the marginal likelihood).
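
A quick numerical check of this point (a hypothetical Bernoulli model with a conjugate Beta prior and made-up data): two datasets with the same sufficient statistic, here the same number of successes out of the same number of trials, yield exactly the same posterior predictive distribution even though the observed sequences differ:

<syntaxhighlight lang="python">
from scipy import stats

a, b = 2.0, 3.0    # arbitrary prior hyperparameters
m = 4              # future trials to predict

# Two different sequences with the same sufficient statistic T(X) = sum(X) = 3.
for X in ([1, 1, 0, 0, 1], [0, 1, 1, 1, 0]):
    k, N = sum(X), len(X)
    post_pred = stats.betabinom(m, a + k, b + N - k)
    print(post_pred.pmf(2))   # identical for both datasets
</syntaxhighlight>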

Joint predictive distribution, marginal likelihood

It is also possible to consider the result of compounding a joint distribution over a fixed number of independent identically distributed samples with a prior distribution over a shared parameter. In a Bayesian setting, this comes up in various contexts: computing the prior or posterior predictive distribution of multiple new observations, and computing the marginal likelihood of observed data (the denominator in Bayes' law). When the distribution of the samples is from the exponential family and the prior distribution is conjugate, the resulting compound distribution will be tractable and follow a similar form to the expression above. It is easy to show, in fact, that the joint compound distribution of a set [math]\displaystyle{ \mathbf{X} = \{x_1, \dots, x_N\} }[/math] for [math]\displaystyle{ N }[/math] observations is

[math]\displaystyle{ p_H(\mathbf{X}|\boldsymbol{\chi},\nu) = \left( \prod_{i=1}^N h(x_i) \right) \dfrac{f(\boldsymbol{\chi},\nu)}{f\left(\boldsymbol{\chi} + \mathbf{T}(\mathbf{X}), \nu+N \right)} }[/math]

This result and the above result for a single compound distribution extend trivially to the case of a distribution over a vector-valued observation, such as a multivariate Gaussian distribution.
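
For instance, in a hypothetical Bernoulli model with a conjugate Beta([math]\displaystyle{ a, b }[/math]) prior, the formula above gives the marginal likelihood of an observed sequence in closed form, since [math]\displaystyle{ h(x_i) = 1 }[/math] and the normalizing functions are reciprocals of beta functions. A minimal sketch:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import betaln

def log_marginal_likelihood(X, a, b):
    """Log marginal likelihood of a Bernoulli sequence under a Beta(a, b) prior:
    log B(a + k, b + N - k) - log B(a, b), with k = sum(X) the sufficient statistic."""
    X = np.asarray(X)
    N, k = X.size, X.sum()
    return betaln(a + k, b + N - k) - betaln(a, b)

X = [1, 0, 1, 1, 0, 1, 1, 1]
print(np.exp(log_marginal_likelihood(X, a=1.0, b=1.0)))  # ~0.00397 under a uniform prior
</syntaxhighlight>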

Relation to Gibbs sampling

Collapsing out a node in a collapsed Gibbs sampler is equivalent to compounding. As a result, when a set of independent identically distributed (i.i.d.) nodes all depend on the same prior node, and that node is collapsed out, the resulting conditional probability of one node given the others as well as the parents of the collapsed-out node (but not conditioning on any other nodes, e.g. any child nodes) is the same as the posterior predictive distribution of all the remaining i.i.d. nodes (or more correctly, formerly i.i.d. nodes, since collapsing introduces dependencies among them). That is, it is generally possible to implement collapsing out of a node by attaching all parents of the node directly to all of its children, and replacing the former conditional probability distribution associated with each child with the corresponding posterior predictive distribution for the child, conditioned on its parents and the other formerly i.i.d. nodes that were also children of the removed node. For an example, more specific discussion, and some cautions about certain tricky issues, see the Dirichlet-multinomial distribution article.
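
As a minimal concrete sketch of this rule (hypothetical setup: categorical variables [math]\displaystyle{ z_1, \dots, z_N }[/math] sharing a single Dirichlet([math]\displaystyle{ \boldsymbol{\alpha} }[/math]) prior node that has been collapsed out), the conditional of one node given the others is the Dirichlet-multinomial posterior predictive computed from the counts of the remaining nodes:

<syntaxhighlight lang="python">
import numpy as np

def collapsed_conditional(z, i, alpha, K):
    """P(z_i = k | z_{-i}, alpha) after collapsing out the shared Dirichlet prior:
    (n_k^(-i) + alpha_k) / (N - 1 + sum(alpha))."""
    z_rest = np.delete(np.asarray(z), i)
    counts = np.bincount(z_rest, minlength=K)
    return (counts + alpha) / (len(z_rest) + alpha.sum())

alpha = np.array([1.0, 1.0, 1.0])   # symmetric Dirichlet hyperparameters
z = [0, 2, 1, 0, 0, 2]              # current assignments of the other nodes
print(collapsed_conditional(z, i=3, alpha=alpha, K=3))  # probabilities summing to 1
</syntaxhighlight>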

References

  1. "Posterior Predictive Distribution". SAS. http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_mcmc_sect034.htm. Retrieved 19 July 2014. 
  2. Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013). Bayesian Data Analysis (Third ed.). Chapman and Hall/CRC. p. 7. ISBN 978-1-4398-4095-5. 

Further reading

  • Ntzoufras, Ioannis (2009). "The Predictive Distribution and Model Checking". Bayesian Modeling Using WinBUGS. Wiley. ISBN 978-0-470-14114-4.