Clustering: Gaussian Mixture Models

Gaussian mixture models assume that the data is generated from a specified number of Gaussian distributions, known as mixtures. The form of the data-generating distribution for each mixture is known (they are Gaussian distributions), but their parameters and prior probabilities are unknown. Also unknown is which mixture each individual data vector was generated by.

The aim is to be able to obtain the following (all three quantities are illustrated in the code sketch after this list):

  1. Posterior probabilities regarding which mixture generated each datum (mixture membership).

  2. The prior probabilities of a datum being generated by a particular mixture.

  3. The parameters of the data generating Gaussian distribution associated with each mixture.
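
Each of these three quantities can be read directly off a fitted mixture model. Below is a minimal sketch using scikit-learn's GaussianMixture on synthetic data; the library choice, the synthetic data and all variable names are illustrative assumptions, not part of the original article.

```python
# A minimal sketch (assumption: scikit-learn is available) of obtaining the
# three quantities listed above from a fitted Gaussian mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D data drawn from two well-separated Gaussians.
X = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

posteriors = gmm.predict_proba(X)  # 1. posterior mixture-membership probabilities (N x K)
priors = gmm.weights_              # 2. prior probability of each mixture
means = gmm.means_                 # 3. mean of each mixture's Gaussian ...
covariances = gmm.covariances_     #    ... and its covariance matrix
```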

Given knowledge of the prior probabilities and parameters of the mixtures, we can immediately calculate the posterior probability regarding which mixture generated each datum. This calculation is done using Bayes' theorem. Where $Z_i$ is a random variable over the mixtures for datum $\mathbf{x}_i$, this is:

$$P(Z_i = k \mid \mathbf{x}_i) \;=\; \frac{P(Z_i = k)\, p(\mathbf{x}_i \mid Z_i = k)}{\sum_{j=1}^{K} P(Z_i = j)\, p(\mathbf{x}_i \mid Z_i = j)}$$

where $p(\mathbf{x}_i \mid Z_i = k)$ is the density of the $k$-th Gaussian evaluated at $\mathbf{x}_i$ and $K$ is the number of mixtures.
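
To make this concrete, here is a small sketch (not taken from the article) of applying Bayes' theorem in this way: given current priors and Gaussian parameters, it returns each datum's posterior probability of having been generated by each mixture. NumPy, SciPy and the function name are assumptions for illustration.

```python
# A sketch of the Bayes-theorem calculation of mixture-membership posteriors.
import numpy as np
from scipy.stats import multivariate_normal

def posterior_probabilities(X, priors, means, covariances):
    """Return an (N, K) array whose entry (i, k) is P(Z_i = k | x_i)."""
    N, K = X.shape[0], len(priors)
    weighted = np.empty((N, K))
    for k in range(K):
        # Numerator of Bayes' theorem: prior * Gaussian likelihood of x_i under mixture k.
        weighted[:, k] = priors[k] * multivariate_normal.pdf(
            X, mean=means[k], cov=covariances[k])
    # Normalize each row so the posterior over mixtures sums to one.
    return weighted / weighted.sum(axis=1, keepdims=True)
```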

Equally, given the posterior probabilities regarding mixture membership, we are able to estimate both the parameters of the Gaussian distributions associated with each mixture and the prior probabilities of the mixtures: we just need to weight data items by their posterior probabilities of belonging to the different mixtures when we calculate the means, covariances and prior probabilities of the mixtures. (See the update equations in the maximization step of the EM algorithm below.)
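
The weighted estimates described above might be sketched as follows; the function name and array layout are illustrative assumptions rather than the article's own code.

```python
# A sketch of the weighted (maximum likelihood) parameter estimates: each datum
# contributes to a mixture's prior, mean and covariance in proportion to its
# posterior probability of belonging to that mixture.
import numpy as np

def maximization_step(X, responsibilities):
    """X: (N, d) data; responsibilities: (N, K) posteriors. Returns priors, means, covariances."""
    N, d = X.shape
    K = responsibilities.shape[1]
    Nk = responsibilities.sum(axis=0)               # effective number of points per mixture
    priors = Nk / N                                 # new mixture weights / priors
    means = (responsibilities.T @ X) / Nk[:, None]  # posterior-weighted means
    covariances = np.empty((K, d, d))
    for k in range(K):
        diff = X - means[k]
        covariances[k] = (responsibilities[:, k, None] * diff).T @ diff / Nk[k]
    return priors, means, covariances
```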

Note that this is identical to a missing data problem: it is as if we had a column in our data specifying which mixture generated each datum, but all values in this column are missing. Such a variable is known as a latent variable. In this case, working out the distribution over this latent variable forms part of the goal, but sometimes latent variables are introduced as intermediaries, where working out the distribution over their values for each datum is not part of the end goal.

The result is a soft clustering, where each datum is given a probability of being generated by each mixture.
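
For instance, with three mixtures a soft clustering might look like the following (the numbers are made up for illustration); a hard clustering can always be recovered from it by taking each datum's most probable mixture.

```python
import numpy as np

# Each row: one datum's posterior probability of coming from each of 3 mixtures.
posteriors = np.array([
    [0.92, 0.05, 0.03],   # almost certainly mixture 0
    [0.40, 0.35, 0.25],   # genuinely ambiguous
])
hard_labels = posteriors.argmax(axis=1)  # hard assignment: array([0, 0])
```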

The Expectation Maximization Algorithm

The most common method to acquire the desired values is the Expectation-Maximization (EM) algorithm. It exploits the relationships explained above in an iterative process:

  1. Current estimates of the parameters of the Gaussian distributions and prior distribution over the mixtures are used to calculate the posterior probabilities for each datum being generated by each mixture. This is the expectation step.

  2. Current estimates of the posterior probabilities for each datum being generated by each mixture are used to calculate the maximum likelihood estimates of the parameters of the Gaussian distributions and prior distribution over the mixtures. This is the maximization step.

This process is typically initiated by a random assignment of the posterior probabilities for each datum being generated by each mixture, which means the first step performed is a maximization step. It is continued until convergence.
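
Putting the two steps together, the overall loop might be sketched as below, reusing the hypothetical posterior_probabilities and maximization_step helpers from the sketches above (they are assumptions, not the article's code). For simplicity it runs a fixed number of iterations; in practice one would stop once the change in log-likelihood, or in the parameters, falls below a tolerance.

```python
import numpy as np

def fit_gmm_em(X, K, n_iters=100, seed=0):
    """EM for a K-component Gaussian mixture on data X of shape (N, d)."""
    rng = np.random.default_rng(seed)
    # Random initial posterior probabilities (each row sums to one), so the
    # first step performed is a maximization step.
    responsibilities = rng.random((X.shape[0], K))
    responsibilities /= responsibilities.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        # Maximization step: re-estimate priors and Gaussian parameters.
        priors, means, covariances = maximization_step(X, responsibilities)
        # Expectation step: re-compute posterior membership probabilities.
        responsibilities = posterior_probabilities(X, priors, means, covariances)
    return priors, means, covariances, responsibilities
```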

To give the equations for the updates required in each step, we introduce some notation:

$K$: the number of clusters/mixtures.
$N$: the number of data vectors, $\mathbf{x}_1, \dots, \mathbf{x}_N$.
$\pi_k^{(t)}$: the prior probability of cluster $k$ at step $t$ in the EM process. These are sometimes called the mixture weights. I.e. $\pi_k^{(t)} = P(Z_i = k)$ under the step-$t$ estimates.
$\boldsymbol{\mu}_k^{(t)}$: the mean parameter of cluster $k$ at step $t$ in the EM process.
$\boldsymbol{\Sigma}_k^{(t)}$: the covariance parameter of cluster $k$ at step $t$ in the EM process.
$\gamma_{ik}^{(t)}$: the posterior probability that $\mathbf{x}_i$ belongs to cluster $k$ at step $t$. I.e. $\gamma_{ik}^{(t)} = P(Z_i = k \mid \mathbf{x}_i ; \pi^{(t)}, \boldsymbol{\mu}^{(t)}, \boldsymbol{\Sigma}^{(t)})$.

The update equations for the maximization step are:

$$\pi_k^{(t+1)} = \frac{1}{N} \sum_{i=1}^{N} \gamma_{ik}^{(t)}$$

$$\boldsymbol{\mu}_k^{(t+1)} = \frac{\sum_{i=1}^{N} \gamma_{ik}^{(t)}\, \mathbf{x}_i}{\sum_{i=1}^{N} \gamma_{ik}^{(t)}}$$

$$\boldsymbol{\Sigma}_k^{(t+1)} = \frac{\sum_{i=1}^{N} \gamma_{ik}^{(t)}\, (\mathbf{x}_i - \boldsymbol{\mu}_k^{(t+1)})(\mathbf{x}_i - \boldsymbol{\mu}_k^{(t+1)})^{\top}}{\sum_{i=1}^{N} \gamma_{ik}^{(t)}}$$

The update equations for the expectation step are:

$$\gamma_{ik}^{(t)} = P(Z_i = k \mid \mathbf{x}_i ; \pi^{(t)}, \boldsymbol{\mu}^{(t)}, \boldsymbol{\Sigma}^{(t)}) = \frac{\pi_k^{(t)}\, \mathcal{N}(\mathbf{x}_i ; \boldsymbol{\mu}_k^{(t)}, \boldsymbol{\Sigma}_k^{(t)})}{\sum_{j=1}^{K} \pi_j^{(t)}\, \mathcal{N}(\mathbf{x}_i ; \boldsymbol{\mu}_j^{(t)}, \boldsymbol{\Sigma}_j^{(t)})}$$

where $\mathcal{N}(\,\cdot\, ; \boldsymbol{\mu}, \boldsymbol{\Sigma})$ denotes the multivariate Gaussian density with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$.

See the aside below about the ‘;’ symbol in probability notation if you are unsure what it means.

The EM algorithm can converge to local optima, and a common response to this is to run the algorithm several times from different initial assignments of posterior probabilities for each datum being generated by each mixture, keeping the best solution found. This can be time consuming, and alternative approaches exist, including annealing techniques.
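
A sketch of that restart strategy (again reusing the hypothetical fit_gmm_em helper from above) is to run EM from several random initializations and keep the run with the highest log-likelihood:

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(X, priors, means, covariances):
    """Log-likelihood of the data under a fitted mixture."""
    densities = np.column_stack([
        priors[k] * multivariate_normal.pdf(X, mean=means[k], cov=covariances[k])
        for k in range(len(priors))])
    return np.log(densities.sum(axis=1)).sum()

def fit_gmm_restarts(X, K, n_restarts=10):
    """Run EM from several random initial responsibility assignments."""
    best, best_ll = None, -np.inf
    for seed in range(n_restarts):
        priors, means, covs, resp = fit_gmm_em(X, K, seed=seed)
        ll = log_likelihood(X, priors, means, covs)
        if ll > best_ll:
            best, best_ll = (priors, means, covs, resp), ll
    return best
```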


Aside: If you are uncertain what the ‘;’ symbol means when used in probability notation, a quick explanation is provided in the Semi-colon in probability notation document available in the downloads section at the end of this article.


