
Clustering: Introduction

In this article, we will introduce common approaches to clustering algorithms: sequential, hierarchical and optimization-based. The sequential and hierarchical approaches do not fit cleanly into the definition of statistical models we gave in the first module of this course. Proximity functions play a central role in all these clustering approaches, and we will begin by defining what these functions are. We will then look at the two approaches individually, explain their basic concepts and give basic algorithms for each. We will also look briefly at graph-based clustering methods that can be used with these techniques.

The optimization-based approach matches our definition of statistical models, and it is no surprise that the dominant tools in such methods are statistical and probabilistic in nature. We will look at some important examples of optimization-based clustering in the following steps.

Proximity Measures

Proximity measures come in two forms: dissimilarity and similarity measures. They quantify the proximity between two vectors, between a vector and a set of vectors, and between two sets of vectors. We begin with the case of proximity between two vectors, and provide as concrete examples the most common family of dissimilarity measures, the weighted $\ell_p$ measures, and two common, and related, similarity measures: the inner product and cosine similarity measures.

Dissimilarity Measures

Let $x = (x_1, \ldots, x_l)$ and $y = (y_1, \ldots, y_l)$ be two vectors. The weighted $\ell_p$ dissimilarity measures are defined as:

$$d_p(x, y) = \left( \sum_{i=1}^{l} w_i \, |x_i - y_i|^p \right)^{1/p}$$

Where $w_i = 1$ for all $i$ these are called unweighted measures. The unweighted $\ell_2$ measure is the Euclidean distance. The unweighted $\ell_1$ measure is the Manhattan distance. The weights $w_i$ are used to make certain directions more important than others when calculating dissimilarity.

Inner Product Similarity Measure

The inner product similarity between two vectors $x$ and $y$ is:

$$s_{\text{inner}}(x, y) = x^T y = \sum_{i=1}^{l} x_i y_i$$

Cosine Similarity Measure

The cosine similarity is the inner product of the two vectors normalized by their lengths, and so depends only on the angle between them:

$$s_{\text{cosine}}(x, y) = \frac{x^T y}{\| x \| \, \| y \|} = \frac{\sum_{i=1}^{l} x_i y_i}{\sqrt{\sum_{i=1}^{l} x_i^2} \, \sqrt{\sum_{i=1}^{l} y_i^2}}$$
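As a concrete illustration, here is a minimal sketch of these vector-to-vector proximity measures in Python with NumPy; the function names are our own, not from the course materials.

```python
import numpy as np

def lp_dissimilarity(x, y, p=2, w=None):
    """Weighted l_p dissimilarity; w=None gives the unweighted measure."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    return np.sum(w * np.abs(x - y) ** p) ** (1.0 / p)

def inner_product_similarity(x, y):
    return float(np.dot(x, y))

def cosine_similarity(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

x, y = [7.5, 8.9], [4.5, 13.1]
print(lp_dissimilarity(x, y, p=2))   # Euclidean distance (about 5.16)
print(lp_dissimilarity(x, y, p=1))   # Manhattan distance (7.2)
print(cosine_similarity(x, y))
```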

Extending proximity measures to sets of vectors

Proximity measures between two vectors are simple to extend to proximity between a vector and a set of vectors, and between two sets of vectors, using min and max (and mean, though this is less common) functions. Let $\rho$ be some proximity measure defined on vectors, and $C_1$ and $C_2$ be two sets of vectors:

$$\rho_{\min}(C_1, C_2) = \min_{x \in C_1, \, y \in C_2} \rho(x, y) \qquad \rho_{\max}(C_1, C_2) = \max_{x \in C_1, \, y \in C_2} \rho(x, y)$$

The vector-to-set case is obtained by treating the single vector $x$ as the one-element set $\{ x \}$.

Alternatively, some representor of the cluster, $m_C$, may be used. A common representor is the mean value of the vectors in the cluster. In which case, we would have:

$$\rho(C_1, C_2) = \rho(m_{C_1}, m_{C_2}), \quad \text{where } m_C = \frac{1}{|C|} \sum_{x \in C} x$$
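The set-based extensions might be sketched as follows, assuming any vector-to-vector proximity measure (here Euclidean distance); the helper names are illustrative only.

```python
import numpy as np

def set_proximity_min(rho, C1, C2):
    """Minimum proximity over all cross-set pairs."""
    return min(rho(x, y) for x in C1 for y in C2)

def set_proximity_max(rho, C1, C2):
    """Maximum proximity over all cross-set pairs."""
    return max(rho(x, y) for x in C1 for y in C2)

def set_proximity_mean_rep(rho, C1, C2):
    """Proximity between the mean vectors (representors) of the two sets."""
    m1 = np.mean(np.asarray(C1, dtype=float), axis=0)
    m2 = np.mean(np.asarray(C2, dtype=float), axis=0)
    return rho(m1, m2)

euclid = lambda x, y: float(np.linalg.norm(np.asarray(x, float) - np.asarray(y, float)))
C1 = [[7.5, 8.9], [6.4, 9.1]]
C2 = [[4.5, 13.1], [2.6, 14.7]]
print(set_proximity_min(euclid, C1, C2))       # closest cross-set pair
print(set_proximity_mean_rep(euclid, C1, C2))  # distance between cluster means
```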

Euclidean Distance and High Dimensionality

It is common when working with a large number of dimensions (i.e. many columns in the data) to avoid dissimilarity measures such as Euclidean distance, and prefer alternatives such as cosine similarity.

This is because such dissimilarity measures fail to be discriminative as the number of dimensions increases. To explain things in a very hand-waving fashion, as the number of dimensions increases, points (from finite samples from some distribution) end up getting farther away from each other. The result is that in high dimensions all points are a long way from all other points, and the differences in distances from a point to other points become a less useful way of distinguishing similar from dissimilar pairs. (In the general case, some researchers who have paid a lot of attention to this phenomenon claim that you can avoid this if you reduce $p$ in the $\ell_p$ measure fast enough as you increase the number of dimensions, so you end up working with very small, fractional values of $p$. Most practitioners just use cosine similarity.)

If this seems odd to you, don’t worry - it seems odd to everyone. It turns out that our intuitions about distances that we have from visualizing the behaviour of sampling from distributions in one, two or three dimensions simply do not serve us very well in higher dimensions.
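A small experiment (our own illustration, not from the article) makes the phenomenon visible: as the dimension grows, the ratio between a point's farthest and nearest neighbour distances shrinks towards 1, so Euclidean distances become less discriminative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
for dim in [2, 10, 100, 1000]:
    X = rng.uniform(size=(n, dim))              # sample points in the unit hypercube
    d = np.linalg.norm(X[1:] - X[0], axis=1)    # Euclidean distances from the first point
    print(f"dim={dim:5d}  max/min distance ratio = {d.max() / d.min():.2f}")
```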

Sequential Clustering

Sequential clustering algorithms are the simplest type of clustering algorithms, and are typically very fast to run. The basic sequential clustering algorithm is:

For some specified threshold, $\Theta$, some dissimilarity measure, $d$ (alternatively, some similarity measure, $s$) and, optionally, some limit on the number of clusters, $q$:

Setup: $m = 1$, $C_m = \{ x_1 \}$

Do:

  • For $i = 2, \ldots, N$:

    • Find $C_k$ such that $d(x_i, C_k) = \min_{1 \le j \le m} d(x_i, C_j)$ (alt. $s(x_i, C_k) = \max_{1 \le j \le m} s(x_i, C_j)$)

      • If $d(x_i, C_k) > \Theta$ (alt. $s(x_i, C_k) < \Theta$) and $m < q$ then set $m = m + 1$, $C_m = \{ x_i \}$

      • Otherwise set $C_k = C_k \cup \{ x_i \}$
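A minimal Python sketch of the sequential scheme above, assuming a Euclidean datum-to-cluster dissimilarity computed against cluster means; names such as `theta` and `q` mirror the description but are otherwise our own.

```python
import numpy as np

def sequential_clustering(X, theta, q=np.inf):
    """Basic sequential clustering: theta is the dissimilarity threshold,
    q an optional limit on the number of clusters."""
    X = np.asarray(X, dtype=float)
    labels = np.empty(len(X), dtype=int)
    means, counts = [X[0].copy()], [1]          # cluster representors (means) and sizes
    labels[0] = 0
    for i in range(1, len(X)):
        d = [np.linalg.norm(X[i] - m) for m in means]
        k = int(np.argmin(d))                   # closest existing cluster
        if d[k] > theta and len(means) < q:     # too far away: start a new cluster
            means.append(X[i].copy())
            counts.append(1)
            labels[i] = len(means) - 1
        else:                                   # otherwise assign to the closest cluster
            counts[k] += 1
            means[k] += (X[i] - means[k]) / counts[k]   # update the running mean
            labels[i] = k
    return labels

X = [[7.5, 8.9], [4.5, 13.1], [6.4, 9.1], [2.6, 14.7], [5.1, 10.2]]
print(sequential_clustering(X, theta=3.0))
```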

Characteristics of this algorithm are:

  • A single clustering is returned

  • The number of clusters present is discovered by the algorithm

  • The result is dependent on the order the data vectors are presented

Hierarchical Clustering

Hierarchical clustering algorithms iteratively combine or divide clusters with use of the chosen proximity measure.

Agglomerative Hierarchical Clustering

  1. Initialize $N$ clusters such that $C_i = \{ x_i \}$, where $x_i$ is the $i$-th data vector, for $i = 1, \ldots, N$.

  2. $m = N$

  3. While $m > 1$:

    1. Find $C_i$, $C_j$ such that $d(C_i, C_j) = \min_{k \neq l} d(C_k, C_l)$ (alt. $s(C_i, C_j) = \max_{k \neq l} s(C_k, C_l)$), $k, l \in \{ 1, \ldots, m \}$, $i \neq j$.

    2. Set $C_i = C_i \cup C_j$, $m = m - 1$ and remove $C_j$ from the set of clusters.
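A sketch of the agglomerative scheme, using the minimum pairwise Euclidean distance (single linkage) as the cluster-to-cluster dissimilarity; a production implementation would typically use a library such as SciPy, but the loop below mirrors the steps listed above.

```python
import numpy as np
from itertools import combinations

def single_link(A, B):
    """Minimum Euclidean distance between any point of A and any point of B."""
    return min(np.linalg.norm(a - b) for a in A for b in B)

def agglomerative(X, cluster_dissimilarity):
    """Return the sequence of merges as (clustering before the merge, merge dissimilarity)."""
    X = np.asarray(X, dtype=float)
    clusters = [[i] for i in range(len(X))]        # start with one cluster per data vector
    history = []
    while len(clusters) > 1:
        # find the pair of clusters with the smallest dissimilarity
        (a, b), d = min(
            (((i, j), cluster_dissimilarity(X[clusters[i]], X[clusters[j]]))
             for i, j in combinations(range(len(clusters)), 2)),
            key=lambda t: t[1])
        history.append(([list(c) for c in clusters], d))
        clusters[a] = clusters[a] + clusters[b]    # merge the chosen pair
        del clusters[b]                            # b > a, so index a is unaffected
    return history

X = [[7.5, 8.9], [4.5, 13.1], [6.4, 9.1], [2.6, 14.7], [5.1, 10.2]]
for clustering, d in agglomerative(X, single_link):
    print(round(d, 2), clustering)
```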

Divisive Hierarchical Clustering

Divisive clustering works analogously to agglomerative clustering, but begins with all data vectors in a single cluster, which is then iteratively divided until all vectors belong to their own cluster.

Let $B(C) = \{ (C^1, C^2) : C^1 \cup C^2 = C, \; C^1 \cap C^2 = \emptyset, \; C^1 \neq \emptyset, \; C^2 \neq \emptyset \}$. In words, $B(C)$ gives all binary splits of $C$, such that neither resulting subset is empty.

  1. Initialize a single cluster such that $C_1 = X$, where $X = \{ x_1, \ldots, x_N \}$ is the full set of data vectors.

  2. $m = 1$

  3. While $m < N$:

    1. Find $C_i$ and a split $(C_i^1, C_i^2) \in B(C_i)$ such that they are the solution to $\max_{j} \max_{(C^1, C^2) \in B(C_j)} d(C^1, C^2)$ where $d$ is a dissimilarity measure (alt. $\min_{j} \min_{(C^1, C^2) \in B(C_j)} s(C^1, C^2)$ where $s$ is a similarity measure).

    2. Set $C_i = C_i^1$, $C_{m+1} = C_i^2$ and $m = m + 1$.
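A sketch of the divisive scheme. Note that enumerating all binary splits is exponential in cluster size, so this is only feasible for very small data sets; taking the minimum cross-distance as the dissimilarity between the two halves of a split is one possible choice, not prescribed by the article.

```python
import numpy as np
from itertools import combinations

def binary_splits(cluster):
    """All splits of a cluster (a list of indices) into two non-empty subsets."""
    first, rest = cluster[0], cluster[1:]
    for r in range(len(rest) + 1):
        for chosen in combinations(rest, r):
            part2 = [i for i in rest if i not in chosen]
            if part2:                                   # neither side may be empty
                yield [first, *chosen], part2

def min_link(A, B):
    """Minimum Euclidean distance between any point of A and any point of B."""
    return min(np.linalg.norm(a - b) for a in A for b in B)

def divisive_step(X, clusters, d):
    """Split the cluster whose best binary split has the largest dissimilarity."""
    best = max(((d(X[p1], X[p2]), i, p1, p2)
                for i, c in enumerate(clusters) if len(c) > 1
                for p1, p2 in binary_splits(c)),
               key=lambda t: t[0])
    diss, i, p1, p2 = best
    return clusters[:i] + [p1, p2] + clusters[i + 1:], diss

X = np.asarray([[7.5, 8.9], [4.5, 13.1], [6.4, 9.1], [2.6, 14.7], [5.1, 10.2]])
clusters = [list(range(len(X)))]
while len(clusters) < len(X):
    clusters, diss = divisive_step(X, clusters, min_link)
    print(round(diss, 2), clusters)
```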

Characteristics of both types of hierarchical algorithms are:

  • A hierarchy of clusterings is returned

  • The final clustering is not chosen by the algorithm

Dendrograms

If we record the proximity values whenever clusters are merged or split in the above hierarchical algorithms, we can form a tree structure known as a dendrogram. A dendrogram gives an overview of the different sets of clusters existing at different levels of proximity. An example is given below.

[Figure: Dendrogram]

The above dendrogram has a property called monotonicity, meaning that each cluster is formed at a higher dissimilarity level than any of its components. Not all clustering algorithms will produce monotonic dendrograms.
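For example, SciPy's hierarchical clustering tools can compute an agglomerative hierarchy and draw its dendrogram; the sketch below is illustrative and is not the source of the figure above.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

X = np.array([[7.5, 8.9], [4.5, 13.1], [6.4, 9.1], [2.6, 14.7], [5.1, 10.2]])

# Single-linkage agglomerative clustering; each row of Z records one merge and its level
Z = linkage(X, method='single', metric='euclidean')

dendrogram(Z, labels=[1, 2, 3, 4, 5])
plt.ylabel('Dissimilarity (Euclidean distance)')
plt.show()
```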

Selecting a clustering from the hierarchy

Since hierarchical clustering algorithms do not provide a single clustering but rather a hierarchy of possible clusterings, a choice is required about which member of the hierarchy should be selected.

Visual or analytic analysis of dendrograms can be used to make this decision. In this case, an important concept is the lifetime of a cluster: the difference between the dissimilarity level at which the cluster is merged with another and the level at which it was formed. We would seek to use the clusters present at a level of dissimilarity such that all clusters existing at that level have long lifetimes.

Another option is to measure the self-proximity, $\rho_{\text{self}}(C)$, of clusters, making use of set vs set proximity measures. For example, we might choose to define self-similarity as:

$$\rho_{\text{self}}(C) = \max_{x, y \in C} d_2(x, y)$$

Where we remember that $d_2$ is Euclidean distance. We would then have to specify some threshold value, $\Theta$, for self-similarity such that we take the clustering at level $t$ in the hierarchy if at level $t + 1$ there exists some cluster, $C$, such that $\rho_{\text{self}}(C) > \Theta$. In this and the following paragraph we treat the first layer in the hierarchy as the clustering where all vectors are given their own cluster, and the last as that where all vectors are assigned to the same cluster.

A popular choice is to try and read the self-similarity threshold from the data. For example, we might choose the largest layer, with clusters $C_1, \ldots, C_m$, such that the following condition is fulfilled:

$$d(C_i, C_j) \geq \max \left( \rho_{\text{self}}(C_i), \rho_{\text{self}}(C_j) \right) \quad \text{for all } i \neq j$$

In words, the last layer where the dissimilarity of each pair of clusters is greater than or equal to the self-similarity of each of the clusters in the pair.
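In code, cutting a SciPy-produced hierarchy at a chosen dissimilarity level, and inspecting the merge levels (whose gaps correspond to cluster lifetimes), might look like this sketch:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[7.5, 8.9], [4.5, 13.1], [6.4, 9.1], [2.6, 14.7], [5.1, 10.2]])
Z = linkage(X, method='single', metric='euclidean')

# Merge levels are increasing for a monotonic linkage; a large gap between consecutive
# levels indicates clusters with long lifetimes, suggesting a good level to cut at.
print(Z[:, 2])

# Cut the hierarchy at a dissimilarity level of 3.0 to obtain a single clustering
labels = fcluster(Z, t=3.0, criterion='distance')
print(labels)   # cluster label for each of the five data vectors
```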

Graph Clustering

To look at basic graph clustering, we need to introduce a number of concepts. Firstly, a graph, $G = (V, E)$, consists of a set of nodes, $V$, and a set of edges, $E$. Graphs can be directed or undirected depending on whether the edges between nodes are directed from one node to the other, or not. In directed graphs, the edges are ordered pairs of nodes and the edge is directed from the first node to the second. In undirected graphs, they are sets of two nodes.

The threshold graph of a data set is the undirected graph that results from associating a node with each data vector, with edges between any two (non-identical) nodes whose associated data vectors have dissimilarity below some threshold (alternatively, similarity above some threshold). The threshold graph is an unweighted graph, by which we mean its edges have no associated weight values.

Take the following data set:

| Datum | $x_1$ | $x_2$ |
|-------|-------|-------|
| 1     | 7.5   | 8.9   |
| 2     | 4.5   | 13.1  |
| 3     | 6.4   | 9.1   |
| 4     | 2.6   | 14.7  |
| 5     | 5.1   | 10.2  |

Using Euclidean distance as our dissimilarity measure, the associated proximity matrix (to 2 decimal places) is:

$$\begin{pmatrix} 0 & 5.16 & 1.12 & 7.59 & 2.73 \\ 5.16 & 0 & 4.43 & 2.48 & 2.96 \\ 1.12 & 4.43 & 0 & 6.77 & 1.70 \\ 7.59 & 2.48 & 6.77 & 0 & 5.15 \\ 2.73 & 2.96 & 1.70 & 5.15 & 0 \end{pmatrix}$$

Given a threshold of 3, the resulting threshold graph is:

[Figure: Threshold graph]

This can also be represented by the adjacency matrix:

$$\begin{pmatrix} 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 \end{pmatrix}$$

A proximity graph is a threshold graph whose edges are weighted by the proximity of the associated data vectors. The proximity graph of the above data, with a threshold of 3, is:

[Figure: Proximity graph]

This can also be represented by the (weighted) adjacency matrix:

$$\begin{pmatrix} 0 & 0 & 1.12 & 0 & 2.73 \\ 0 & 0 & 0 & 2.48 & 2.96 \\ 1.12 & 0 & 0 & 0 & 1.70 \\ 0 & 2.48 & 0 & 0 & 0 \\ 2.73 & 2.96 & 1.70 & 0 & 0 \end{pmatrix}$$
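The proximity matrix and both adjacency matrices can be reproduced from the data table with a few lines of NumPy; this is a sketch, using the threshold of 3 as above.

```python
import numpy as np

X = np.array([[7.5, 8.9], [4.5, 13.1], [6.4, 9.1], [2.6, 14.7], [5.1, 10.2]])

# Pairwise Euclidean proximity (dissimilarity) matrix
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

threshold = 3.0
below = (D < threshold) & ~np.eye(len(X), dtype=bool)   # no self-loops

A = below.astype(int)        # adjacency matrix of the (unweighted) threshold graph
W = np.where(below, D, 0.0)  # weighted adjacency matrix of the proximity graph

print(np.round(D, 2))
print(A)
print(np.round(W, 2))
```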

A subgraph of a graph $G = (V, E)$ is another graph, $G' = (V', E')$, such that $V' \subseteq V$ and $E' \subseteq E$, where $E'$ are the edges in $E$ that connect a pair of nodes both of which are in $V'$. In proceeding, we will assume that all subgraphs are such that $E'$ contains every such edge (i.e. subgraphs are induced by their node sets).

A subgraph, $G' = (V', E')$, is maximally connected if all pairs of nodes in $V'$ are connected by an edge in $E'$.

We are now able to explain how basic graph clustering proceeds as extensions of the basic sequential and hierarchical clustering algorithms explained above. We first decide on some property, $P$, of subgraphs (examples will be given below), and use this as an additional constraint in the sequential or hierarchical clustering algorithms.

In the sequential algorithm, a datum is added to a cluster so long as the proximity measure satisfies some threshold. In basic sequential graph clustering, we would also check that the cluster resulting from the addition of the new datum has a corresponding subgraph that is either maximally connected or satisfies some property, $P$.

In the agglomerative hierarchical clustering algorithm, we choose which clusters to merge based on the proximity between current clusters. In basic hierarchical graph clustering, we do the same but consider only pairs of clusters such that the cluster resulting from their merger has a corresponding subgraph that is either maximally connected or satisfies some property, $P$.

Basic examples of properties of subgraphs that can be included in $P$ include:

  • Node Degree: The node degree of a subgraph is the largest integer k such that every node in the subgraph has at least k incident edges.

  • Edge Connectivity: The edge connectivity of a subgraph is the largest integer k such that every pair of nodes in the subgraph is connected by at least k independent paths, where two paths are considered independent if they have no edges in common.

  • Node Connectivity: The node connectivity of a subgraph is the largest integer k such that every pair of nodes in the subgraph is connected by at least k independent paths, where two paths are considered independent if they have no nodes in common (excluding start and finish nodes).

Obviously, $P$ might include logically complex combinations of such properties, such as that a subgraph should have node degree of 4 or edge connectivity of 3.
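A sketch of how such subgraph checks might look for a candidate cluster, assuming the 0/1 adjacency matrix of the threshold graph above; the helper names are our own, and only the "all pairs connected" condition and the node-degree property are shown.

```python
import numpy as np

def induced_adjacency(A, nodes):
    """Adjacency matrix of the subgraph induced by the given node indices."""
    idx = np.asarray(nodes)
    return A[np.ix_(idx, idx)]

def is_complete(A_sub):
    """True if every pair of nodes in the subgraph is joined by an edge."""
    n = len(A_sub)
    return bool(np.all(A_sub + np.eye(n, dtype=int) >= 1))

def node_degree(A_sub):
    """Largest k such that every node in the subgraph has at least k incident edges."""
    return int(A_sub.sum(axis=1).min())

# Candidate cluster {1, 3, 5} from the threshold graph above (0-based indices 0, 2, 4)
A = np.array([[0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1],
              [1, 0, 0, 0, 1],
              [0, 1, 0, 0, 0],
              [1, 1, 1, 0, 0]])
sub = induced_adjacency(A, [0, 2, 4])
print(is_complete(sub), node_degree(sub))   # data 1, 3 and 5 form a fully connected subgraph
```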

Optimization-based methods

Optimization-based clustering methods specify a loss function. This loss function will take as arguments the training data and cluster assignment, and the aim will be to find the cluster assignment that minimizes the loss.

For example, the clusters may be identified with probability distributions taken to have generated the training data. The loss function is then the negative log likelihood of the training data given the clusters. We minimize this by finding the set of generative probability distributions (their parameter values, and perhaps the number of distributions) that make the data the most likely. Data vectors can be identified as ‘softly’ belonging to particular clusters in the sense of having different probabilities of being generated by the different distributions. We will see how this works in the next few steps.
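As a concrete example (ours, not the article's), scikit-learn's `GaussianMixture` fits a set of Gaussian generative distributions by locally maximizing the likelihood, and `predict_proba` returns the soft cluster memberships described above:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.array([[7.5, 8.9], [4.5, 13.1], [6.4, 9.1], [2.6, 14.7], [5.1, 10.2]])

# Two Gaussian clusters; fitting maximizes the (log-)likelihood of the data
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

print(gmm.predict(X))          # hard assignments
print(gmm.predict_proba(X))    # soft memberships: probability of each cluster per datum
print(-gmm.score(X) * len(X))  # negative log likelihood of the training data
```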

It should be noted that this form of optimization-based clustering is not the only one. As well as thinking of clusters as generative distributions and a datum's membership of a cluster as the probability that it was generated by the corresponding distribution, other optimization-based methods exist. For example, fuzzy clustering algorithms are typically optimization-based.


This article is from the free online course Advanced Machine Learning, by The Open University.
