Pseudo-marginal Metropolis–Hastings algorithm

In computational statistics, the pseudo-marginal Metropolis–Hastings algorithm[1] is a Monte Carlo method to sample from a probability distribution. It is an instance of the popular Metropolis–Hastings algorithm that extends its use to cases where the target density is not available analytically. It relies on the fact that the Metropolis–Hastings algorithm still samples from the correct target distribution if the target density in the acceptance ratio is replaced by a non-negative unbiased estimate. It is especially popular in Bayesian statistics, where it is applied when the likelihood function is intractable (see example below).

Algorithm description

The aim is to simulate from some probability density function [math]\displaystyle{ \pi(\theta) }[/math]. The algorithm follows the same steps as the standard Metropolis–Hastings algorithm except that the evaluation of the target density is replaced by a non-negative and unbiased estimate. For comparison, the main steps of a Metropolis–Hastings algorithm are outlined below.

Metropolis–Hastings algorithm

Given a current state [math]\displaystyle{ \theta_n }[/math], the Metropolis–Hastings algorithm proposes a new state according to some proposal density [math]\displaystyle{ \theta'\sim Q( \cdot \mid \theta_n) }[/math]. The algorithm then sets [math]\displaystyle{ \theta_{n+1} = \theta' }[/math] with probability

[math]\displaystyle{ a(\theta_n, \theta') = \min\left(1,\frac{\pi(\theta')}{\pi(\theta_n)}\frac{Q(\theta_n\mid\theta')}{Q(\theta'\mid\theta_n)}\right) }[/math]

otherwise the old state is kept, that is, [math]\displaystyle{ \theta_{n+1}=\theta_n }[/math].
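
To make the steps concrete, a minimal sketch in Python is given below. It assumes a one-dimensional parameter and a symmetric Gaussian random-walk proposal, so the proposal ratio [math]\displaystyle{ Q(\theta_n\mid\theta')/Q(\theta'\mid\theta_n) }[/math] equals one and cancels; `log_pi` is a user-supplied function returning the log target density (possibly up to an additive constant).

```python
import numpy as np

def metropolis_hastings(log_pi, theta0, n_samples, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings for a 1-D target with log-density log_pi."""
    rng = np.random.default_rng() if rng is None else rng
    theta, log_p = theta0, log_pi(theta0)
    samples = np.empty(n_samples)
    for n in range(n_samples):
        # Propose theta' ~ Q(. | theta_n); a symmetric Gaussian random walk,
        # so the Q-ratio in the acceptance probability is 1.
        theta_prop = theta + step * rng.standard_normal()
        log_p_prop = log_pi(theta_prop)
        # Accept with probability min(1, pi(theta') / pi(theta_n)).
        if np.log(rng.uniform()) < log_p_prop - log_p:
            theta, log_p = theta_prop, log_p_prop
        samples[n] = theta
    return samples
```

For example, `metropolis_hastings(lambda t: -0.5 * t**2, 0.0, 10_000)` draws (approximately, after burn-in) from a standard normal target.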

Pseudo-marginal Metropolis–Hastings algorithm

If the density [math]\displaystyle{ \pi }[/math] is not available analytically, the above algorithm cannot be employed. The pseudo-marginal Metropolis–Hastings algorithm, in contrast, only assumes the existence of a non-negative, unbiased estimator [math]\displaystyle{ \hat{\pi}_\theta }[/math], i.e. an estimator satisfying [math]\displaystyle{ \mathbb{E}[\hat{\pi}_\theta] = \pi(\theta). }[/math] Now, given [math]\displaystyle{ \theta_n }[/math] and the corresponding estimate [math]\displaystyle{ \hat{\pi}_{\theta_n} }[/math], the algorithm proposes a new state according to some density [math]\displaystyle{ \theta'\sim Q( \cdot \mid \theta_n) }[/math]. Next, compute an estimate [math]\displaystyle{ \hat{\pi}_{\theta'} }[/math] at the proposed state and set [math]\displaystyle{ \theta_{n+1} = \theta' }[/math] with probability

[math]\displaystyle{ a(\theta_n, \theta') = \min\left( 1, \frac{\hat{\pi}_{\theta'}}{\hat{\pi}_{\theta_n}} \frac{Q(\theta_n\mid\theta')}{Q(\theta'\mid\theta_n)}\right) }[/math]

otherwise the old state is kept, that is, [math]\displaystyle{ \theta_{n+1}=\theta_n }[/math].
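
A minimal sketch of this scheme, under the same assumptions as the previous sketch (one-dimensional parameter, symmetric Gaussian random-walk proposal), is shown below; `pi_hat` is a user-supplied function returning a non-negative unbiased estimate of [math]\displaystyle{ \pi(\theta) }[/math]. Note that the estimate attached to the current state is stored and reused on rejection; it is not recomputed, which is essential for the chain to target [math]\displaystyle{ \pi }[/math] exactly.

```python
import numpy as np

def pseudo_marginal_mh(pi_hat, theta0, n_samples, step=1.0, rng=None):
    """Pseudo-marginal MH: pi_hat(theta) returns a non-negative unbiased
    estimate of the (possibly unnormalised) target density at theta."""
    rng = np.random.default_rng() if rng is None else rng
    theta, p_hat = theta0, pi_hat(theta0)      # estimate for the current state
    samples = np.empty(n_samples)
    for n in range(n_samples):
        theta_prop = theta + step * rng.standard_normal()
        p_hat_prop = pi_hat(theta_prop)        # fresh estimate at the proposed state
        # Accept with probability min(1, pi_hat(theta') / pi_hat(theta_n));
        # on rejection the stored estimate p_hat is carried forward unchanged.
        if rng.uniform() * p_hat < p_hat_prop:
            theta, p_hat = theta_prop, p_hat_prop
        samples[n] = theta
    return samples
```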

Application to Bayesian statistics

In Bayesian statistics the target of inference is the posterior distribution

[math]\displaystyle{ p(\theta \mid y) = \frac{p_\theta(y)p(\theta)}{p(y)}, }[/math]

where [math]\displaystyle{ p_\theta }[/math] denotes the likelihood function, [math]\displaystyle{ p }[/math] is the prior and [math]\displaystyle{ p(y) }[/math] is the prior predictive distribution (evaluated at the observed data, also called the marginal likelihood). Since an analytic expression for this quantity is rarely available, one often relies on Monte Carlo methods to sample from the distribution instead. Such methods typically require the likelihood [math]\displaystyle{ p_\theta(y) }[/math] to be computable for every parameter value [math]\displaystyle{ \theta }[/math]. In some cases, however, the likelihood does not have an analytic expression. An example of such a case is outlined below.
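
Note that the normalizing constant [math]\displaystyle{ p(y) }[/math] cancels in the Metropolis–Hastings acceptance ratio,

[math]\displaystyle{ \frac{p(\theta' \mid y)}{p(\theta_n \mid y)} = \frac{p_{\theta'}(y)\,p(\theta')}{p_{\theta_n}(y)\,p(\theta_n)}, }[/math]

so only the unnormalized posterior [math]\displaystyle{ p_\theta(y)p(\theta) }[/math] needs to be evaluated. Consequently, if the likelihood can only be estimated, a non-negative unbiased estimator [math]\displaystyle{ \hat{p}_\theta(y) }[/math] multiplied by the prior density yields a non-negative unbiased estimator of the unnormalized posterior, which is exactly what the pseudo-marginal algorithm requires.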

Example: Latent variable model[1]

Consider a model consisting of i.i.d. latent real-valued random variables [math]\displaystyle{ Z_1,\ldots,Z_n }[/math] with [math]\displaystyle{ Z_i \sim f_\theta(\cdot) }[/math] and suppose one can only observe these variables through some additional noise [math]\displaystyle{ Y_i \mid Z_i = z \sim g_\theta(\cdot\mid z) }[/math] for some conditional density [math]\displaystyle{ g_\theta }[/math]. (This could be due to measurement error, for instance.) We are interested in Bayesian analysis of this model based on some observed data [math]\displaystyle{ y_1, \ldots, y_n }[/math]. To this end, we introduce a prior distribution [math]\displaystyle{ p(\theta) }[/math] on the parameter. In order to compute the posterior distribution

[math]\displaystyle{ p(\theta \mid y_1, \ldots, y_n) \propto p_\theta(y_1, \ldots, y_n) p(\theta) }[/math]

we need the likelihood function [math]\displaystyle{ p_\theta(y_1, \ldots, y_n) }[/math]. The likelihood contribution of a single observed data point [math]\displaystyle{ y }[/math] is

[math]\displaystyle{ p_\theta(y) = \int g_\theta(y \mid z)f_\theta(z) \, dz }[/math]

and the joint likelihood of the observed data [math]\displaystyle{ y_1, \ldots, y_n }[/math] is

[math]\displaystyle{ p_\theta(y_1, \ldots, y_n) = \prod_{i=1}^n p_\theta(y_i) = \prod_{i=1}^n \int g_\theta(y_i \mid z_i)f_\theta(z_i) \, dz_i. }[/math]

If the integral on the right-hand side is not analytically available, importance sampling can be used to estimate the likelihood. Introduce an auxiliary distribution [math]\displaystyle{ q }[/math] such that [math]\displaystyle{ g_\theta(y\mid z)f_\theta(z) \gt 0 \Rightarrow q(z) \gt 0 }[/math] for all [math]\displaystyle{ z }[/math]. Then

[math]\displaystyle{ \hat{p}_\theta(y_i)=\frac{1}{N}\sum_{k=1}^N \frac{g_\theta(y_i \mid Z_k)f_\theta(Z_k)}{q(Z_k)}, \qquad Z_k \overset{i.i.d.}{\sim} q(\cdot) }[/math]

is an unbiased estimator of [math]\displaystyle{ p_\theta(y_i) }[/math] and the joint likelihood can be estimated unbiasedly by

[math]\displaystyle{ \hat{p}_\theta(y_1, \ldots, y_n) = \prod_{i=1}^n \hat{p}_\theta(y_i) = \prod_{i=1}^n \frac{1}{N} \sum_{k=1}^N \frac{g_\theta(y_i \mid Z_{i,k}) f_\theta(Z_{i,k})}{q(Z_{i,k})}, \qquad Z_{i,k} \overset{i.i.d.}{\sim} q(\cdot). }[/math]
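
As an illustration, the sketch below implements this estimator for a hypothetical concrete instance of the latent-variable model; the specific densities are assumptions made for illustration only: [math]\displaystyle{ Z_i \sim \mathcal{N}(\theta, 1) }[/math], [math]\displaystyle{ Y_i \mid Z_i = z \sim \mathcal{N}(z, 1) }[/math], and auxiliary distribution [math]\displaystyle{ q = \mathcal{N}(0, 3^2) }[/math], which is positive wherever [math]\displaystyle{ g_\theta(y\mid z)f_\theta(z) }[/math] is.

```python
import numpy as np
from scipy.stats import norm

def likelihood_estimate(theta, y, N=100, rng=None):
    """Unbiased importance-sampling estimate of p_theta(y_1, ..., y_n) for the
    illustrative model Z_i ~ N(theta, 1), Y_i | Z_i = z ~ N(z, 1), q = N(0, 9)."""
    rng = np.random.default_rng() if rng is None else rng
    p_hat = 1.0
    for y_i in np.atleast_1d(y):
        z = rng.normal(0.0, 3.0, size=N)               # Z_{i,k} ~ q, i.i.d.
        w = (norm.pdf(y_i, loc=z, scale=1.0)            # g_theta(y_i | Z_{i,k})
             * norm.pdf(z, loc=theta, scale=1.0)        # f_theta(Z_{i,k})
             / norm.pdf(z, loc=0.0, scale=3.0))         # q(Z_{i,k})
        p_hat *= w.mean()   # each factor is unbiased; independent draws across i
    return p_hat            # keep the product unbiased as well
```

Multiplying this estimate by a prior density gives a non-negative unbiased estimate of the unnormalized posterior, which can be passed to the pseudo-marginal sketch above, e.g. `pi_hat = lambda t: likelihood_estimate(t, y) * norm.pdf(t, 0.0, 10.0)` for a hypothetical [math]\displaystyle{ \mathcal{N}(0, 10^2) }[/math] prior.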

Extensions

Pseudo-marginal Metropolis–Hastings can be seen as a special case of so-called particle marginal Metropolis–Hastings algorithms. In the latter, unbiased estimators of densities relating to the static parameters of state-space models are obtained using a particle filter. While the particle marginal algorithm enables inference on the joint space of static parameters and latent variables, it is equivalent to a pseudo-marginal algorithm when interest lies only in the static parameters.[2]

References

  1. Andrieu, Christophe; Roberts, Gareth O. (2009). "The pseudo-marginal approach for efficient Monte Carlo computations". Annals of Statistics 37 (2): 697–725. doi:10.1214/07-aos574.
  2. Andrieu, Christophe; Doucet, Arnaud; Holenstein, Roman (2010). "Particle Markov chain Monte Carlo methods". Journal of the Royal Statistical Society, Series B (Statistical Methodology) 72 (3): 269–342. doi:10.1111/j.1467-9868.2009.00736.x.