
Hyperspectral data

A brief overview of hyperspectral image data and its uses

In much the same way that we can extend 2D images into 3D or volumetric data, thereby capturing more information about 3D structure, we can also increase the amount of information we store about colour.

As we saw earlier, most images we work with are either greyscale, containing only information about brightness, or RGB, containing three values representing the amount of red, green and blue light at each point in the scene.

Using specialist hyperspectral imaging spectrometers, we can record much more detailed information about colour. Instead of measuring the amount of light at just three points on the spectrum, such devices allow us to record the whole spectrum – a complete rainbow, if you will – across a particular range. Therefore, for each pixel, we can have a plot showing us the complete reflectance properties across part of the spectrum – see the figure below as an example.
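
In practice, a hyperspectral image is often handled as a three-dimensional "cube": two spatial dimensions plus one spectral dimension, so that each pixel holds a full reflectance curve. The sketch below shows this layout using placeholder data; the array sizes, wavelength range and NumPy representation are illustrative assumptions, not details from this course.

```python
import numpy as np

# A minimal sketch: a hyperspectral image held as a cube of shape
# (rows, columns, bands). Random values stand in for a real capture;
# the 400-700nm range and 100 bands are illustrative only.
rows, cols, bands = 480, 640, 100
wavelengths = np.linspace(400, 700, bands)   # nm, one value per band
cube = np.random.rand(rows, cols, bands)     # placeholder reflectance values

# The spectrum at a single pixel is simply the vector of values across
# all bands at that (row, column) position - this is what a per-pixel
# reflectance plot displays.
spectrum = cube[240, 320, :]
print(wavelengths.shape, spectrum.shape)     # (100,) (100,)
```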

Hyperspectral imaging devices work across a specified subsection of the electromagnetic spectrum. Some work in what is referred to as visible light (around 400-700nm in wavelength). Some devices record light beyond what we can see, for example into the near infrared part of the spectrum (NIR, 700-1000nm). This extra data can be used to learn more about the scene being imaged. For example, a variety of vegetative indices can be easily calculated (as described in a later article), and, depending on the wavelengths captured, even physical properties of the objects in the image can be inferred.
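
As one concrete example, the widely used Normalised Difference Vegetation Index (NDVI) combines a red band and a near-infrared band. The sketch below assumes the cube layout from the previous snippet; the nearest-band selection and the 670nm/800nm defaults are common illustrative choices, not values specified in this course.

```python
import numpy as np

def ndvi(cube, wavelengths, red_nm=670.0, nir_nm=800.0):
    """NDVI per pixel from a (rows, cols, bands) reflectance cube.

    The band closest to each requested wavelength is used; the default
    wavelengths are typical choices and are assumptions here.
    """
    red = cube[:, :, np.argmin(np.abs(wavelengths - red_nm))]
    nir = cube[:, :, np.argmin(np.abs(wavelengths - nir_nm))]
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

# Placeholder data covering 400-1000nm so both red and NIR bands exist.
wavelengths = np.linspace(400, 1000, 120)
cube = np.random.rand(240, 320, 120)
vegetation_map = ndvi(cube, wavelengths)      # values roughly in [-1, 1]
```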

There are two main downsides to hyperspectral imaging – beyond the cost of the imaging spectrometer itself, of course. First, the extra data comes at a price: hyperspectral images are large, potentially running to gigabytes per image. The useful signal – the particular combination of colours which tells you something interesting about the object – can be buried in a huge amount of uninformative data, so one challenge is determining how to analyse the image to extract the useful information. Visualisation of the data can also be challenging; one method is to allow the user to select pixels and view the spectral plot at that point. Another is to display the image as a stack of greyscale slices, allowing the user to select which wavelength is currently displayed as a simple greyscale intensity image.
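
To illustrate the "stack of greyscale slices" idea, the sketch below displays one band of a placeholder cube as an ordinary intensity image; in an interactive viewer the band index would be chosen by the user. The data, band index and wavelength range are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder cube standing in for a real capture, shape (rows, cols, bands).
cube = np.random.rand(240, 320, 100)
wavelengths = np.linspace(400, 700, 100)     # nm, illustrative range

band = 42                                    # band index chosen for illustration
plt.imshow(cube[:, :, band], cmap="gray")    # one wavelength shown as greyscale
plt.title(f"Band {band}: {wavelengths[band]:.0f} nm")
plt.colorbar(label="reflectance")
plt.show()
```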

The second downside is the acquisition process itself, which is often slow. Some hyperspectral imagers work like scanners; they must be moved across a scene to capture information a line at a time. To do this, either the device must be mounted on a vehicle (such as an aircraft or tractor) and carried over the scene, or robotics must be used to scan the device across plants. The capture process also requires careful consideration of lighting. Objects will only reflect light that is presented to them – if particular bulbs are ‘dark’ in certain areas of the spectrum, this will leave gaps in the measured spectral response. In the extreme case, a white surface lit only by a blue bulb will reflect no red light at all, because none reaches it.
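
To make the line-at-a-time idea concrete, the sketch below stacks simulated line frames into a full cube. Here, read_line_frame is a hypothetical stand-in for whatever the instrument's driver provides, not a real API, and the dimensions are illustrative.

```python
import numpy as np

# A sketch of line-scan ("push-broom") acquisition: each frame covers one
# spatial line and every band; moving the device across the scene and
# stacking the frames builds the complete cube.
def read_line_frame(cols=640, bands=100):
    # Hypothetical placeholder for a device/driver call.
    return np.random.rand(cols, bands)        # one line of the scene

lines = [read_line_frame() for _ in range(480)]   # one frame per scan step
cube = np.stack(lines, axis=0)                    # shape (rows, cols, bands)
print(cube.shape)                                 # (480, 640, 100)
```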

This article is from the free online course Introduction to Image Analysis for Plant Phenotyping.
