Topic Modeling: Latent Dirichlet Allocation

Topic Modeling

Topic modeling is a form of unsupervised learning that seeks to categorize documents by topic. Typically the topics are treated as latent variables to be discovered by the algorithm. We will look at one of the most common topic modeling algorithms: Latent Dirichlet Allocation (LDA).

LDA (Latent Dirichlet Allocation)

LDA categorizes documents by topic via a generative probabilistic model. The idea is that words are generated from topics, and each document has particular probabilities of using particular topics to generate each of its words. We seek to find which topics given documents are likely to have used to generate their words. To explain this model, we introduce the following definitions:

- The number of documents, $M$

- The number of topics, $K$

- The number of unique words, $V$

- The number of words in the $m^{th}$ document, $N_m$

- The $m^{th}$ document, $d_m$, $m \in \{1, \dots, M\}$

- The $k^{th}$ topic, $t_k$, $k \in \{1, \dots, K\}$

- The topic label of the $n^{th}$ word in the $m^{th}$ document, $z_{m,n}$, $z_{m,n} \in \{t_1, \dots, t_K\}$

- The $n^{th}$ word in the $m^{th}$ document, $w_{m,n}$

The LDA model has six components:

  1. Document Prior: A Dirichlet distribution consisting of $K$ parameters, all equal to a single real hyper-parameter, $\alpha$.
  2. Document Distributions: A categorical distribution, $\theta_m$, for each document, whose $K$ parameters give the probabilities that a particular topic will be used to generate words in that document.
  3. Topic Prior: A Dirichlet distribution consisting of $V$ parameters, all equal to a single real hyper-parameter, $\beta$.
  4. Topic Distributions: A categorical distribution, $\phi_k$, for each topic, whose $V$ parameters give the probabilities that a particular word will be generated from the given topic.
  5. Word-Topic Label: A nominal value, $z_{m,n}$, for each word in each document, specifying the topic ‘used’ by that document to generate that particular word.
  6. Word: A nominal value, $w_{m,n}$, for each word in each document, taking one of the $V$ unique words.

Of these, the words are known and the $\alpha$ and $\beta$ hyper-parameters are specified, as is the number of topics, $K$. What we want is to estimate the values of the parameters of the topic and document distributions. The first tells us what each topic is about, and the second tells us which topics are used in each document.

Typically, the $\alpha$ and $\beta$ hyper-parameters are small (greater than 0 but less than 1). The effect of this is to make each topic give high probability to a small subset of words, and each document give high probability to a small number of topics. See the section on Dirichlet distributions for a deeper discussion of how this works.
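To make the generative story concrete, here is a minimal sketch of the process in Python with NumPy. The corpus sizes, the hyper-parameter value of 0.1, and the variable names are illustrative assumptions, not values from the course material.

```python
import numpy as np

rng = np.random.default_rng(0)

n_docs, n_topics, n_vocab = 3, 2, 4   # M, K, V
doc_lengths = [5, 4, 6]               # N_m for each document
alpha, beta = 0.1, 0.1                # small positive hyper-parameters (assumed values)

# Document prior: each document's topic distribution is drawn from Dirichlet(alpha, ..., alpha).
theta = rng.dirichlet([alpha] * n_topics, size=n_docs)   # shape (M, K)

# Topic prior: each topic's word distribution is drawn from Dirichlet(beta, ..., beta).
phi = rng.dirichlet([beta] * n_vocab, size=n_topics)     # shape (K, V)

corpus = []
for m in range(n_docs):
    doc = []
    for n in range(doc_lengths[m]):
        z = rng.choice(n_topics, p=theta[m])   # topic label z_{m,n} for this word
        w = rng.choice(n_vocab, p=phi[z])      # word w_{m,n} generated from that topic
        doc.append(w)
    corpus.append(doc)

# Because alpha and beta are below 1, each row of theta concentrates its mass on few
# topics and each row of phi concentrates its mass on few words.
print(theta.round(2))
print(phi.round(2))
```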

Metropolis within Gibbs MCMC

Once again, there are a number of ways to attempt to estimate the most probable values of the parameters of interest. Particularly popular are Markov Chain Monte Carlo (MCMC) methods. MCMC methods are sampling techniques that generate new samples from a Markov chain whose stationary distribution is the distribution of interest. Given a number of conditions, as the number of samples approaches infinity, the empirical distribution of the samples is guaranteed to approach the distribution of interest with probability 1.

Concretely, we will examine the conceptually simplest of these: the Metropolis within Gibbs algorithm. This algorithm makes use of candidate functions that generate a new value for an unknown variable in the model, given the currently assigned value. These must be symmetric (though note that the Metropolis-Hastings variant of the algorithm works with non-symmetric candidate functions): the probability of the function generating value Y when the current value is X must be the same as the probability of it generating X when the current value is Y. The candidate functions must also be such that a sequence of applications of them permits movement from any given value to any other value of the variable within a finite number of steps.
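As a rough illustration of the symmetry requirement, both candidate functions sketched below satisfy it: the probability (density) of proposing a value y from a current value x equals that of proposing x from y. The function names are our own, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def propose_uniform(current):
    # Ignores the current value entirely: the proposal density is 1 on [0, 1]
    # regardless of where we start, so it is trivially symmetric.
    return rng.uniform(0.0, 1.0)

def propose_random_walk(current, scale=0.05):
    # Adds zero-mean Gaussian noise: the density of moving x -> y equals
    # the density of moving y -> x, so this proposal is also symmetric.
    return current + rng.normal(0.0, scale)
```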

Given such candidate functions, the basic Metropolis within Gibbs algorithm is:

  1. Assign all unknown variables in the model a random initial value.
  2. Repeat until the desired number of samples has been generated:
  • For each unknown variable in the model:
  • A. Calculate the probability of the model given the currently assigned values. Let this value be $P_{old}$.
  • B. Use the candidate function associated with the variable to generate a new candidate value for that variable.
  • C. Calculate the probability of the model given that this variable is assigned the candidate value and all other variables remain unchanged. Let this value be $P_{new}$.
  • D. Assign the variable the candidate value with probability $\min\left(1, \frac{P_{new}}{P_{old}}\right)$.

Since the initial state is assigned randomly, the first samples generated are discarded. This is known as the burn-in, and its length is a hyper-parameter chosen by the data scientist. The remaining samples are then used to estimate the conditional probability distribution of the unknown variables in the model given the known variables. There are methods for checking that sufficient samples have probably been collected, the most common being to run multiple instances of the algorithm and to continue sampling until they all converge. There are also various ways to improve the algorithm beyond this basic form. Interested students are invited to read the literature.
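A minimal, model-agnostic sketch of this loop might look as follows. It assumes we are given a function that returns the probability of the model under a full assignment of values, plus one symmetric candidate function per unknown variable; the names and the dictionary-based representation are our own assumptions.

```python
import numpy as np

def metropolis_within_gibbs(model_prob, candidates, init, n_samples, burn_in, seed=0):
    """model_prob: function taking a dict of variable values, returning the model probability.
    candidates:   dict mapping each variable name to a symmetric candidate function.
    init:         dict of random initial values for the unknown variables."""
    rng = np.random.default_rng(seed)
    values = dict(init)
    samples = []
    for _ in range(n_samples):
        for var, propose in candidates.items():
            p_old = model_prob(values)                     # step A
            candidate = propose(values[var])               # step B
            proposed = {**values, var: candidate}
            p_new = model_prob(proposed)                   # step C
            if rng.uniform() < min(1.0, p_new / p_old):    # step D
                values = proposed
        samples.append(dict(values))
    return samples[burn_in:]   # discard the burn-in samples
```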

We will look at what this means in practice in the LDA case with a simple example. Let us assume we have two documents:

  • Document 1: “hello world”

  • Document 2: “farewell cruel world”

Let us stipulate that we want two topics, $t_1$ and $t_2$, and that our hyper-parameters are $\alpha$ and $\beta$.

Noting that we will be treating the parameters of the probability distributions as variables, alongside the document words, the known variables of the model are:

  • Document Prior: $\alpha$

  • Topic Prior: $\beta$

  • Unique Words: $\{ \text{hello}, \text{world}, \text{farewell}, \text{cruel} \}$

The unknown variables are:

  • Document 1 Distribution: $\theta_1 = (\theta_{1,1}, \theta_{1,2})$

  • Document 2 Distribution: $\theta_2 = (\theta_{2,1}, \theta_{2,2})$

  • Topic 1 Distribution: $\phi_1 = (\phi_{1,1}, \phi_{1,2}, \phi_{1,3}, \phi_{1,4})$

  • Topic 2 Distribution: $\phi_2 = (\phi_{2,1}, \phi_{2,2}, \phi_{2,3}, \phi_{2,4})$

  • Word Topic Labels: $z_{1,1}, z_{1,2}, z_{2,1}, z_{2,2}, z_{2,3}$

Note that each distribution has one fewer free parameter than it has outcomes, since its probabilities must sum to one: each document distribution has one free parameter and each topic distribution has three.

To begin we assign all unknown variables random values:

  • Document 1 Distribution: $\theta_1$, a random probability vector over the two topics

  • Document 2 Distribution: $\theta_2$, a random probability vector over the two topics

  • Topic 1 Distribution: $\phi_1$, a random probability vector over the four unique words

  • Topic 2 Distribution: $\phi_2$, a random probability vector over the four unique words

  • Word Topic Labels: $z_{1,1}, z_{1,2}, z_{2,1}, z_{2,2}, z_{2,3}$, each set to $t_1$ or $t_2$ at random
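As a sketch, the set-up and random initialization for this toy example might look as follows in Python; the hyper-parameter values and the particular random draws are assumptions made purely so the example can run.

```python
import numpy as np

rng = np.random.default_rng(42)

vocab = ["hello", "world", "farewell", "cruel"]
docs = [["hello", "world"], ["farewell", "cruel", "world"]]
alpha, beta = 0.1, 0.1   # assumed values; the text only requires small positive hyper-parameters

# Unknown variables, initialized at random:
theta = rng.dirichlet([1.0, 1.0], size=2)            # theta_1, theta_2: random points on the simplex
phi = rng.dirichlet([1.0] * len(vocab), size=2)      # phi_1, phi_2: random word distributions
z = [rng.integers(0, 2, size=len(d)) for d in docs]  # a random topic label (0 or 1) for every word
```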

To generate the first sample, we must iterate through all unknown variables, generate a candidate value for each, compare the probabilities of the model with the currently assigned value versus the candidate value, and possibly assign the candidate value according to the acceptance equation in the algorithm. We look at only the first variable: $\theta_1$. Since the probability of all unknown variables except for $z_{1,1}$ and $z_{1,2}$ is conditionally independent of $\theta_1$ given $z_{1,1}$ and $z_{1,2}$, we only need to examine the effect of the value of $\theta_1$ on the probability of these two variables taking their assigned values, together with the prior probability of $\theta_1$ given $\alpha$.

First we calculate $P_{old}$. The value currently assigned to $\theta_1$ is the one drawn at random above. Since it is drawn from the $\mathrm{Dirichlet}(\alpha, \alpha)$ distribution we have:

$$P(\theta_1 \mid \alpha) = \mathrm{Dir}(\theta_1; \alpha, \alpha)$$

(Or, if you like, the $\mathrm{Beta}(\alpha, \alpha)$ density evaluated at $\theta_{1,1}$, since a two-parameter Dirichlet distribution is just a Beta distribution.)

We also have:

$$P(z_{1,1} \mid \theta_1) = \theta_{1, z_{1,1}}$$

$$P(z_{1,2} \mid \theta_1) = \theta_{1, z_{1,2}}$$

that is, the components of $\theta_1$ corresponding to the topics currently assigned to $z_{1,1}$ and $z_{1,2}$. Multiplying them we get: $P_{old} = P(\theta_1 \mid \alpha) \, P(z_{1,1} \mid \theta_1) \, P(z_{1,2} \mid \theta_1)$.

Let us assume our candidate function for $\theta_1$ simply draws the free parameter $\theta'_{1,1}$ from $\mathrm{Uniform}(0, 1)$. This means that the candidate function just samples a value from a uniform distribution between 0 and 1, ignoring the current value. We imagine that the candidate function offers the value $\theta_1'$ as the candidate. We now calculate $P_{new}$ using the same process with the new value:

$$P(\theta_1' \mid \alpha) = \mathrm{Dir}(\theta_1'; \alpha, \alpha)$$

$$P(z_{1,1} \mid \theta_1') = \theta'_{1, z_{1,1}} \qquad P(z_{1,2} \mid \theta_1') = \theta'_{1, z_{1,2}}$$

Multiplying them we get: $P_{new} = P(\theta_1' \mid \alpha) \, P(z_{1,1} \mid \theta_1') \, P(z_{1,2} \mid \theta_1')$.

Accordingly, the probability of replacing the current assignment of $\theta_1$ with the candidate value is $\min\left(1, \frac{P_{new}}{P_{old}}\right)$.
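To attach concrete numbers to this calculation, the sketch below evaluates $P_{old}$, $P_{new}$ and the acceptance probability for $\theta_1$, using assumed values for $\alpha$, for the current topic labels and for the current and candidate values of $\theta_{1,1}$; the helper function is ours and simply multiplies together the three factors discussed above.

```python
import numpy as np
from scipy.stats import beta as beta_dist

alpha = 0.1           # assumed hyper-parameter value
z_labels = [0, 1]     # assumed current topic labels z_{1,1}, z_{1,2} (topics t_1 and t_2)

def model_prob_theta1(theta_11):
    """Product of the factors that depend on theta_1: the Dirichlet(alpha, alpha) prior
    density (a Beta(alpha, alpha) density in the two-topic case) and P(z_{1,n} | theta_1)
    for each word of document 1."""
    theta = np.array([theta_11, 1.0 - theta_11])
    prior = beta_dist.pdf(theta_11, alpha, alpha)
    return prior * np.prod(theta[z_labels])

theta_old = 0.7                                    # assumed current value of theta_{1,1}
theta_new = np.random.default_rng(3).uniform()     # candidate drawn from the uniform proposal

p_old = model_prob_theta1(theta_old)
p_new = model_prob_theta1(theta_new)
accept_prob = min(1.0, p_new / p_old)
print(p_old, p_new, accept_prob)
```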

We will stop there, but we would continue doing the same thing for all other unknown variables to get our first sample. Then we would repeat the whole process many times to get all our samples. Eventually we would use these samples (discarding the burn-in) to estimate the parameters of the document and topic distributions, and we would be done!
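As an illustration of that last estimation step: assuming the retained post-burn-in samples are stored as a list of dictionaries, each holding one sampled value of the document and topic distributions, the parameters could be estimated by simple averaging. The sample format here is an assumption of ours.

```python
import numpy as np

def estimate_parameters(samples):
    # samples: list of dicts, each with a sampled "theta" (2 x 2) and "phi" (2 x 4) array
    theta_hat = np.mean([s["theta"] for s in samples], axis=0)  # estimated document distributions
    phi_hat = np.mean([s["phi"] for s in samples], axis=0)      # estimated topic distributions
    return theta_hat, phi_hat
```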


This article is from the free online course:

Advanced Machine Learning

The Open University