Expectation propagation

Expectation propagation (EP) is a technique in Bayesian machine learning for approximating an intractable probability distribution with a tractable one.[1] It proceeds iteratively, exploiting the factorization structure of the target distribution.[1] It differs from other Bayesian approximation approaches, such as variational Bayesian methods, in the form of the divergence it minimizes.[1]
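
In outline, the target distribution is assumed to factorize as [math]\displaystyle{ p(\mathbf{x}) \propto \prod_i f_i(\mathbf{x}) }[/math], and the approximation takes the same form, [math]\displaystyle{ q(\mathbf{x}) \propto \prod_i \tilde{f}_i(\mathbf{x}) }[/math], with each approximate factor [math]\displaystyle{ \tilde{f}_i }[/math] drawn from a tractable family.[1] EP cycles through the factors: it removes one approximate factor from [math]\displaystyle{ q(\mathbf{x}) }[/math] to form a "cavity" distribution [math]\displaystyle{ q^{\setminus i}(\mathbf{x}) \propto q(\mathbf{x}) / \tilde{f}_i(\mathbf{x}) }[/math], reinstates the exact factor [math]\displaystyle{ f_i(\mathbf{x}) }[/math], refits [math]\displaystyle{ q(\mathbf{x}) }[/math] to the product [math]\displaystyle{ f_i(\mathbf{x})\, q^{\setminus i}(\mathbf{x}) }[/math], and updates [math]\displaystyle{ \tilde{f}_i }[/math] to match.[1]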

More specifically, suppose we wish to approximate an intractable probability distribution [math]\displaystyle{ p(\mathbf{x}) }[/math] with a tractable distribution [math]\displaystyle{ q(\mathbf{x}) }[/math]. Expectation propagation achieves this by minimizing the Kullback–Leibler divergence [math]\displaystyle{ \mathrm{KL}(p||q) }[/math],[1] whereas variational Bayesian methods minimize [math]\displaystyle{ \mathrm{KL}(q||p) }[/math] instead.[1]
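
Here [math]\displaystyle{ \mathrm{KL}(p||q) = \int p(\mathbf{x}) \ln\frac{p(\mathbf{x})}{q(\mathbf{x})} \,\mathrm{d}\mathbf{x} }[/math]. Since the Kullback–Leibler divergence is not symmetric in its arguments, the two orderings generally yield different approximations: minimizing [math]\displaystyle{ \mathrm{KL}(p||q) }[/math] tends to produce approximations that cover all of the mass of [math]\displaystyle{ p(\mathbf{x}) }[/math], while minimizing [math]\displaystyle{ \mathrm{KL}(q||p) }[/math] tends to concentrate on a single mode.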

If [math]\displaystyle{ q(\mathbf{x}) }[/math] is a Gaussian [math]\displaystyle{ \mathcal{N}(\mathbf{x}|\mu, \Sigma) }[/math], then [math]\displaystyle{ \mathrm{KL}(p||q) }[/math] is minimized by setting [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \Sigma }[/math] equal to the mean and covariance of [math]\displaystyle{ p(\mathbf{x}) }[/math], respectively; this is called moment matching.[1]
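
As a concrete sketch of moment matching (assuming NumPy is available; the two-component mixture target and all names here are illustrative, not from the reference), one can fit a Gaussian to samples from a bimodal distribution by matching empirical moments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw samples from an intractable-looking target p(x): an equal-weight
# two-component Gaussian mixture in two dimensions (purely illustrative).
n = 100_000
comp = rng.random(n) < 0.5
centers = np.where(comp[:, None], [-2.0, 0.0], [2.0, 1.0])
x = centers + rng.standard_normal((n, 2))

# Moment matching: among Gaussians, KL(p||q) is minimized by the q
# whose mean and covariance equal those of p.
mu = x.mean(axis=0)
Sigma = np.cov(x, rowvar=False)

print("moment-matched mean:", mu)            # roughly [0.0, 0.5]
print("moment-matched covariance:\n", Sigma)
```

The fitted Gaussian is unimodal and spreads its mass across both modes of the mixture, illustrating the mass-covering behavior of minimizing [math]\displaystyle{ \mathrm{KL}(p||q) }[/math].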

Applications

Expectation propagation via moment matching plays a vital role in approximating the indicator functions that appear when deriving the message-passing equations for TrueSkill, a Bayesian skill-rating model.
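
As an illustration of this kind of update (a sketch assuming SciPy, not code from TrueSkill itself), an indicator factor that restricts a Gaussian variable to be positive yields a truncated Gaussian, whose first two moments have closed forms that a Gaussian approximation can match:

```python
import numpy as np
from scipy.stats import norm

def truncated_gaussian_moments(mu, sigma):
    """Moment-match a Gaussian to N(mu, sigma^2) restricted to x > 0.

    v and w below are the standard truncated-normal correction terms;
    functions of this form also appear in TrueSkill-style message
    updates for indicator factors.
    """
    t = mu / sigma
    v = norm.pdf(t) / norm.cdf(t)   # additive correction to the mean
    w = v * (v + t)                 # multiplicative shrinkage of the variance
    return mu + sigma * v, sigma**2 * (1.0 - w)

mean, var = truncated_gaussian_moments(mu=0.5, sigma=1.0)
print("closed form:", mean, var)

# Monte Carlo sanity check of the closed-form moments.
rng = np.random.default_rng(0)
s = rng.normal(0.5, 1.0, size=1_000_000)
s = s[s > 0.0]
print("monte carlo:", s.mean(), s.var())
```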

References

  1. Bishop, Christopher (2007). Pattern Recognition and Machine Learning. New York: Springer-Verlag. ISBN 978-0387310732.
