EM algorithm and GMM model

In statistics, the EM (expectation-maximization) algorithm is used to estimate models with latent variables, and the GMM (Gaussian mixture model) is a common model to which it is applied.

Background

The picture below shows the red blood cell hemoglobin concentration and the red blood cell volume of two groups of people: the anemia group and the control group (i.e. the group of people without anemia). As expected, people with anemia have lower red blood cell volume and lower red blood cell hemoglobin concentration than those without anemia.

GMM model with labels

[math]\displaystyle{ x }[/math] is a random vector such as [math]\displaystyle{ x:=\big(\text{red blood cell volume}, \text{red blood cell hemoglobin concentration}\big) }[/math], and from medical studies[citation needed] it is known that [math]\displaystyle{ x }[/math] is normally distributed within each group, i.e. [math]\displaystyle{ x \sim \mathcal N(\mu, \Sigma) }[/math].

Let [math]\displaystyle{ z }[/math] denote the group to which [math]\displaystyle{ x }[/math] belongs, with [math]\displaystyle{ z_i = 0 }[/math] when [math]\displaystyle{ x_i }[/math] belongs to the anemia group and [math]\displaystyle{ z_i=1 }[/math] when [math]\displaystyle{ x_i }[/math] belongs to the control group. Also [math]\displaystyle{ z \sim \operatorname{Categorical}(k, \phi) }[/math] where [math]\displaystyle{ k=2 }[/math], [math]\displaystyle{ \phi_j \geq 0, }[/math] and [math]\displaystyle{ \sum_{j=1}^k\phi_j=1 }[/math]. See Categorical distribution.
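
As an illustration of this generative model, the following is a minimal sketch in Python (assuming NumPy; the values of phi, mu and Sigma below are made-up for illustration, not taken from medical data) of drawing samples by first picking a group [math]\displaystyle{ z }[/math] and then drawing [math]\displaystyle{ x }[/math] from the corresponding Gaussian:

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up illustrative parameters for k = 2 groups
    # (component 0 ~ anemia group, component 1 ~ control group).
    phi = np.array([0.4, 0.6])                    # mixing proportions, sum to 1
    mu = np.array([[80.0, 30.0],                  # mean (volume, hemoglobin concentration) per group
                   [95.0, 34.0]])
    Sigma = np.array([[[25.0, 3.0], [3.0, 4.0]],  # covariance matrix per group
                      [[20.0, 2.0], [2.0, 3.0]]])

    m = 500
    z = rng.choice(2, size=m, p=phi)              # z ~ Categorical(k = 2, phi)
    x = np.array([rng.multivariate_normal(mu[j], Sigma[j]) for j in z])  # x | z = j ~ N(mu_j, Sigma_j)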

The following procedure can be used to estimate [math]\displaystyle{ \phi, \mu , \Sigma }[/math].

Maximum likelihood estimation can be applied:

[math]\displaystyle{ \ell(\phi,\mu,\Sigma)=\sum_{i=1}^m \log (p(x^{(i)};\phi,\mu,\Sigma)) =\sum_{i=1}^m \log \sum_{z^{(i)}=1}^k p\left(x^{(i)} \mid z^{(i)} ; \mu, \Sigma\right) p(z^{(i)} ; \phi) }[/math]

As the [math]\displaystyle{ z_i }[/math] for each [math]\displaystyle{ x_i }[/math] are known (the data are labeled), the log-likelihood function can be simplified as below:

[math]\displaystyle{ \ell(\phi, \mu, \Sigma)=\sum_{i=1}^{m} \left[ \log p\left(x^{(i)} \mid z^{(i)} ; \mu, \Sigma\right)+\log p\left(z^{(i)} ; \phi\right) \right] }[/math]

Now the likelihood function can be maximized by taking partial derivatives with respect to [math]\displaystyle{ \mu, \Sigma, \phi }[/math], obtaining:

[math]\displaystyle{ \phi_{j} =\frac{1}{m} \sum_{i=1}^m 1\{z^{(i)}=j\} }[/math]
[math]\displaystyle{ \mu_j =\frac{\sum_{i=1}^m 1\{z^{(i)}=j\} x^{(i)}}{\sum_{i=1}^{m} 1\left\{z^{(i)}=j\right\}} }[/math]
[math]\displaystyle{ \Sigma_j =\frac{\sum_{i=1}^m 1\{z^{(i)}=j\} (x^{(i)}-\mu_j)(x^{(i)}-\mu_j)^T}{\sum_{i=1}^m 1\{z^{(i)}=j\}} }[/math][1]

If [math]\displaystyle{ z_i }[/math] is known, the estimation of the parameters by maximum likelihood turns out to be quite simple. But if [math]\displaystyle{ z_i }[/math] is unknown, the problem is much more complicated.[2]
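
As a minimal sketch of this labeled case (assuming NumPy; the function name mle_labeled and the array layout, x of shape (m, n) with integer labels z in {0, ..., k-1}, are illustrative choices, not part of the original derivation), the closed-form estimates above can be computed directly:

    import numpy as np

    def mle_labeled(x, z, k):
        """Closed-form ML estimates of phi, mu, Sigma when the labels z are observed."""
        m, n = x.shape
        phi = np.zeros(k)
        mu = np.zeros((k, n))
        Sigma = np.zeros((k, n, n))
        for j in range(k):
            mask = (z == j)                  # indicator 1{z^(i) = j}
            phi[j] = mask.mean()             # fraction of samples in group j
            mu[j] = x[mask].mean(axis=0)     # group mean
            d = x[mask] - mu[j]
            Sigma[j] = d.T @ d / mask.sum()  # ML covariance (divides by the group count)
        return phi, mu, Sigma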

GMM without labels

Since [math]\displaystyle{ z }[/math] is a latent variable (i.e. not observed) in the unlabeled scenario, the expectation-maximization algorithm is needed to estimate [math]\displaystyle{ z }[/math] as well as the other parameters. Generally, this problem is set up as a GMM, since the data within each group are normally distributed.[3][circular reference]

In machine learning, the latent variable [math]\displaystyle{ z }[/math] is considered a hidden pattern underlying the data, which the observer cannot see directly. [math]\displaystyle{ x_i }[/math] is the known data, while [math]\displaystyle{ \phi, \mu, \Sigma }[/math] are the parameters of the model. With the EM algorithm, some underlying pattern [math]\displaystyle{ z }[/math] in the data [math]\displaystyle{ x_i }[/math] can be found, along with estimates of the parameters. The wide applicability of this setting in machine learning is what makes the EM algorithm so important.[4]

EM algorithm in GMM

The EM algorithm consists of two steps: the E-step and the M-step. First, the model parameters and the [math]\displaystyle{ z^{(i)} }[/math] can be randomly initialized. In the E-step, the algorithm tries to guess the value of [math]\displaystyle{ z^{(i)} }[/math] based on the current parameters, while in the M-step, the algorithm updates the value of the model parameters based on the guess of [math]\displaystyle{ z^{(i)} }[/math] from the E-step. These two steps are repeated until convergence is reached.

The EM algorithm for the GMM is:

Repeat until convergence:

   1. (E-step) For each [math]\displaystyle{ i, j }[/math], set
   [math]\displaystyle{ w_{j}^{(i)}:=p\left(z^{(i)}=j | x^{(i)} ; \phi, \mu, \Sigma\right) }[/math]
   2. (M-step) Update the parameters
   [math]\displaystyle{ \phi_{j} :=\frac{1}{m} \sum_{i=1}^{m} w_{j}^{(i)} }[/math]
      [math]\displaystyle{ \mu_{j} :=\frac{\sum_{i=1}^{m} w_{j}^{(i)} x^{(i)}}{\sum_{i=1}^{m} w_{j}^{(i)}} }[/math]
      [math]\displaystyle{ \Sigma_{j} :=\frac{\sum_{i=1}^{m} w_{j}^{(i)}\left(x^{(i)}-\mu_{j}\right)\left(x^{(i)}-\mu_{j}\right)^{T}}{\sum_{i=1}^{m} w_{j}^{(i)}} }[/math][1]

With Bayes' rule, the following result is obtained for the E-step:

[math]\displaystyle{ p\left(z^{(i)}=j | x^{(i)} ; \phi, \mu, \Sigma\right)=\frac{p\left(x^{(i)} | z^{(i)}=j ; \mu, \Sigma\right) p\left(z^{(i)}=j ; \phi\right)}{\sum_{l=1}^{k} p\left(x^{(i)} | z^{(i)}=l ; \mu, \Sigma\right) p\left(z^{(i)}=l ; \phi\right)} }[/math]

According to the GMM setting, the following formulas are obtained:

[math]\displaystyle{ p\left(x^{(i)} | z^{(i)}=j ; \mu, \Sigma\right)=\frac{1}{(2 \pi)^{n / 2}\left|\Sigma_{j}\right|^{1 / 2}} \exp \left(-\frac{1}{2}\left(x^{(i)}-\mu_{j}\right)^{T} \Sigma_{j}^{-1}\left(x^{(i)}-\mu_{j}\right)\right) }[/math]

[math]\displaystyle{ p\left(z^{(i)}=j ; \phi\right)=\phi_j }[/math]

In this way, the algorithm can alternate between the E-step and the M-step, starting from the randomly initialized parameters, until convergence, as in the sketch below.
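
Putting the two steps together, the following is a minimal sketch of the whole loop (assuming NumPy and SciPy's multivariate_normal for the Gaussian density; the function name em_gmm, the fixed iteration count n_iter in place of a proper convergence test, and the initialization choices are illustrative assumptions, not prescribed by the algorithm above):

    import numpy as np
    from scipy.stats import multivariate_normal

    def em_gmm(x, k, n_iter=100, seed=0):
        """EM for a Gaussian mixture: alternate the E-step (responsibilities w)
        and the M-step (parameter updates)."""
        rng = np.random.default_rng(seed)
        m, n = x.shape
        # Simple initialization: uniform weights, k random points as means, shared covariance.
        phi = np.full(k, 1.0 / k)
        mu = x[rng.choice(m, size=k, replace=False)]
        Sigma = np.array([np.cov(x, rowvar=False) for _ in range(k)])

        for _ in range(n_iter):
            # E-step: w[i, j] = p(z^(i) = j | x^(i); phi, mu, Sigma) via Bayes' rule.
            w = np.column_stack([
                phi[j] * multivariate_normal.pdf(x, mean=mu[j], cov=Sigma[j])
                for j in range(k)
            ])
            w /= w.sum(axis=1, keepdims=True)

            # M-step: weighted versions of the labeled-case estimates.
            Nj = w.sum(axis=0)               # effective number of points per component
            phi = Nj / m
            mu = (w.T @ x) / Nj[:, None]
            for j in range(k):
                d = x - mu[j]
                Sigma[j] = (w[:, j, None] * d).T @ d / Nj[j]
        return phi, mu, Sigma, w

In practice, the loop would usually be stopped when the log-likelihood given earlier stops increasing, rather than after a fixed number of iterations.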

References