
# Thresholding in Fiji

An overview of the auto-thresholding algorithms available in Fiji

In the previous article, we introduced the technique of binary thresholding, describing a classical approach, Otsu thresholding, and showing how to do this using Fiji. In this article we look at some of the other thresholding algorithms available in Fiji and briefly discuss their applications and assumptions.

## Auto-thresholding in Fiji

As we have seen, it's easy to threshold an image using Fiji. First, convert the image to grayscale (if it is not already) using Image -> Type -> 8-bit, then select Image -> Adjust -> Threshold. After that, it is just a matter of selecting the thresholding method and options that best fit your image data and the type of analysis required.
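The same two steps (grayscale conversion, then a cut at some intensity) can also be expressed outside the GUI. Below is a minimal NumPy sketch on a synthetic image; the channel-averaging and the threshold value of 90 are illustrative assumptions, not Fiji's exact internals.

```python
import numpy as np

# A tiny synthetic RGB image: dark background, bright square "object".
rgb = np.zeros((64, 64, 3), dtype=np.uint8)
rgb[16:48, 16:48] = [40, 200, 60]  # greenish square

# Step 1: convert to 8-bit grayscale (simple channel average here;
# Fiji's conversion uses a mean of the channels in a similar spirit).
gray = rgb.mean(axis=2).astype(np.uint8)

# Step 2: apply a threshold (a hypothetical fixed value of 90).
# Pixels above the threshold become foreground (white), the rest background.
binary = np.where(gray > 90, 255, 0).astype(np.uint8)

print(binary[32, 32], binary[0, 0])  # prints "255 0": object vs background
```

Everything an auto-thresholding method adds to this picture is the choice of that single cut value.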

We’ve already looked at Otsu’s method, which usually works well when your image has a clear bimodal intensity distribution, such as similarly coloured objects on a uniform background. But what about the other thresholding algorithms in Fiji? You can find a list of them, with descriptions, linked below. We will show the results of each on an example image and describe how a few of the methods work.
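As a quick recap of how Otsu’s method places its threshold, here is a toy NumPy implementation that scans every possible cut point and keeps the one maximising the between-class variance. This is a sketch for illustration, not Fiji's actual code.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold that maximises between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class weights
        if w0 == 0 or w1 == 0:
            continue  # one class empty: no valid split here
        mu0 = (np.arange(t) * prob[:t]).sum() / w0       # background mean
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1  # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal test image: background around 50, object around 200.
img = np.full((64, 64), 50, dtype=np.uint8)
img[20:40, 20:40] = 200
t = otsu_threshold(img)
print(t)  # a cut between the two peaks
```

On a cleanly bimodal image like this one the method finds a threshold between the two peaks, which is exactly the situation where Otsu performs well.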

## First Example

Take a look at the images below, showing (left to right) an RGB colour image of a leaf, the grayscale version of the same image, and the histogram showing the distribution of pixel intensities.

Remember that binary thresholding selects a value that divides the image into two subsets of pixels, so all any thresholding algorithm does is calculate where to place this dividing line for a given image, i.e. which intensity level to use as the decision point. You can think of this as placing a dividing line on the image histogram; different algorithms will place this line in different places, producing different segmentations.
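In code, this is the whole story: once some algorithm has chosen a cut point t, the segmentation itself is a single comparison. A minimal sketch (the value t = 128 here is an arbitrary assumption standing in for any algorithm's output):

```python
import numpy as np

# A toy grayscale image with three dark and three bright pixels.
gray = np.array([[10, 20, 200],
                 [220, 15, 240]], dtype=np.uint8)

t = 128  # hypothetical threshold chosen by some algorithm
foreground = gray > t  # everything reduces to this one comparison

print(foreground.sum(), (~foreground).sum())  # prints "3 3"
```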

The reason we have different algorithms is that they assume the image data is distributed in different ways. For example, some algorithms assume a certain percentage of the image is foreground, or that the intensity histogram is bimodal (i.e. has two peaks).

The figure below shows the results of each of the algorithms available in Fiji at the time of writing. Note that the box for ‘Dark Background’ was selected so that the object of interest (the leaf in this case) appears white on a black background. Some methods select all the pixels within the leaf, but also select areas that are the shadow of the leaf on the background. Other methods pick out the outline of the leaf clearly, but miss out parts of the stem, veins, and the brightest parts of the leaf.

The simplest methods use a basic statistic, such as the mean pixel intensity (Mean), or select a threshold so that 50% of the pixels fall in each category (Percentile). The ImageJ help files provide helpful overviews of all the implemented approaches.
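These two simple statistics are easy to write down directly. A small sketch, assuming (as above) that Percentile is run with its 50% setting, which makes it the median intensity:

```python
import numpy as np

gray = np.array([0, 10, 20, 200, 210, 250], dtype=np.uint8)

# Mean: threshold at the mean pixel intensity.
t_mean = gray.mean()

# Percentile: threshold so a fixed fraction of pixels (here 50%)
# falls on each side, i.e. the median intensity.
t_percentile = np.percentile(gray, 50)

print(t_mean, t_percentile)  # prints "115.0 110.0"
```

Note how the two statistics land in slightly different places even on the same six pixels; on real images with skewed histograms the gap can be much larger.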

## How to choose a thresholding algorithm

This is a difficult question to answer, and it depends both on your image data and on what you are trying to measure. In the example image above, while some methods are clearly less suited to the problem, with others it is not so clear cut. Is it important that the outline of the leaf is captured exactly? Or is it more important to ensure all pixels within the interior of the leaf are classified as such (i.e. no speckles of background pixels in the foreground region)? This will depend on the purpose of your experiment. If you have a large data set with many images, it is always worth trying a few methods on a few different images to ensure the thresholding method you pick is suited to your analysis. Care must also be taken because changes in, for example, lighting or camera orientation can violate the assumptions of the algorithm in use, causing thresholding to fail.

## Is binary thresholding always suitable?

Have a look at the images, histogram and thresholding results shown below. Can you explain why binary thresholding is probably not the best method for capturing all the pixels of the “object” shown in the image?