
Uncertainty: understanding the two types

In this article, Professor David Brayshaw explains the difference between aleatory and epistemic uncertainty.

Dealing with uncertainty is unavoidable when using climate information, particularly when you’re applying it in a highly localised and business-specific way. It’s a big and complex topic and a full discussion is well beyond the scope of this course. Over the next few Steps we will, however, introduce the two types of uncertainty involved, discuss the use of climate model ensembles and show how to interpret climate uncertainty in your impact models.

Aleatory vs. epistemic uncertainty

It’s helpful to recognise that there are two types of uncertainty and to understand the differences between them.

Aleatory uncertainty refers to an inherent and irreducible randomness in a process or outcome. A classic example is the roll of a die: the occurrence of numbers 1-6 is random and, assuming the die is ‘fair’, all numbers occur with equal frequency. Taking this example further, consider using a series of die rolls to estimate the ‘average’ value on the face of a fair die. The true answer is clearly 3.5 = (1 + 2 + 3 + 4 + 5 + 6) / 6.

Illustration of 8 dice with the numbers 1, 3, 4, 2, 6, 6, 4, 2 face-up, giving an average roll of 3.5.

Applying this example to a model simulation, each individual roll is equivalent to a new datapoint. Each datapoint will have some error (the value of the individual roll will be either above or below our ‘true’ value of 3.5) but the errors will be unbiased – each roll is equally likely to be above or below the true value by equal amounts. If an average is taken over a sufficient number of trials with our model, the effects of these errors will tend to cancel out and the experimental average will tend toward the true value of 3.5.
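The averaging argument above can be sketched in a few lines of Python (an illustrative addition, not part of the original article; the function name is ours):

```python
import random

random.seed(42)  # fixed seed so repeated runs give the same result

def sample_mean_of_fair_die(n_rolls: int) -> float:
    """Average face value over n_rolls rolls of a fair six-sided die."""
    return sum(random.randint(1, 6) for _ in range(n_rolls)) / n_rolls

# Each individual roll errs above or below the true value of 3.5,
# but the errors are unbiased, so the sample mean converges on 3.5
# as the number of rolls grows.
for n in (10, 1_000, 100_000):
    print(f"{n:>7} rolls: mean = {sample_mean_of_fair_die(n):.3f}")
```

With only 10 rolls the estimate can be well off 3.5; by 100,000 rolls it typically agrees to two decimal places, which is the aleatory uncertainty being averaged away.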

In contrast, epistemic uncertainty refers to a lack of knowledge. This uncertainty could, in principle, be reduced by further investigation but – crucially in the context of climate information – the errors it produces cannot be assumed to be random and unbiased. There is no guarantee that averaging over a large set of individual estimates gives a good estimate of the underlying true value, because all the estimates may be biased in the same way. To continue the dice analogy: if we lack knowledge about the die’s true behaviour and construct a ‘model’ of the real die containing a bias (perhaps the model die rolls more 1–3s than 4–6s) or a deficiency (perhaps the model die has one face too few, showing only the scores 1–5), then no matter how many times this model die is rolled, the corresponding estimate of the truth will remain biased.

An illustration of epistemic uncertainty: we attempted to construct a model of our die but didn’t know that it had 6 sides, so the ‘model’ die shows only the numbers 1–5. After many trials we recover a biased estimate of the average (mean), with a score of 3 rather than the true 3.5.
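The same point can be demonstrated numerically (a hypothetical sketch we have added; the variable names are ours): however many times the flawed five-sided ‘model’ die is rolled, its average converges to 3, not to the true value of 3.5.

```python
import random

random.seed(0)  # fixed seed for reproducibility

def sample_mean(faces: list[int], n_rolls: int) -> float:
    """Average over n_rolls rolls of a die with the given faces."""
    return sum(random.choice(faces) for _ in range(n_rolls)) / n_rolls

true_die = [1, 2, 3, 4, 5, 6]   # the real process
model_die = [1, 2, 3, 4, 5]     # our flawed 'model': one face missing

# More rolls shrink the aleatory scatter, but the epistemic bias
# remains: the model's mean settles near 3.0, not the true 3.5.
print("true die: ", sample_mean(true_die, 100_000))
print("model die:", sample_mean(model_die, 100_000))
```

The gap between the two printed means is the epistemic error: no amount of extra rolling of the model die closes it, because the deficiency lies in the model itself, not in the sampling.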

Aleatory and epistemic uncertainties are fundamentally different in nature and require different approaches to address. There are well-developed statistical techniques for tackling aleatory uncertainty (such as Monte Carlo methods), but handling epistemic uncertainty in climate information remains a major challenge.

In the next Step you’ll discover some of the implications for producing high quality climate information for business.

© University of Reading
This article is from the free online course Climate Intelligence: Using Climate Data to Improve Business Decision-Making, available on FutureLearn.
