What is a digital image?

What is a digital image composed of? How is the data arranged and stored?
An image of colourful plant leaves. The right-hand side is good quality; the left-hand side is low resolution and individual pixels can be seen

No doubt you will have encountered digital images on countless occasions, probably also taken and used digital images in your work or elsewhere, and possibly even done some post-processing of an image’s size or appearance using some piece of computer software or other. But exactly what data is stored in an image file, and what format does it take?

You may be aware that digital images are made up of a mosaic or grid of individual pixels, which may take a range of different colours or, for a black-and-white or grayscale image, levels of grey. The more pixels an image has, the higher its resolution; the fewer pixels it has, the lower the resolution. If the resolution is so low that individual pixels can be clearly seen and the image becomes distorted, the image is said to be pixelated.

So how are the colour or grayscale values of pixels in a digital image represented? A little bit on the physics and biology of how we perceive colour is useful here. The human eye can see colour due to the presence of specialised cells known as cone cells in the retina. There are three types which, roughly speaking, detect light in the blue, green and red ranges of the electromagnetic spectrum (labelled S for short, M for medium and L for long in the image below). The lack of one or more of these cone-cell types is the cause of colour-blindness in some individuals.

A plot of the human visual response of the Short/Blue, Medium/Green and Long/Red cone types versus light wavelength

By mixing the relative levels of light of just the three colours red, green and blue, the full spectrum of colours can be produced. For example, mixing equal amounts of red and green produces yellow, red and blue produces magenta, green and blue produces cyan, and equal amounts of red, green and blue produces white light (Figure 2). Representation of colour using differing amounts of light in the red (R), green (G) and blue (B) channels is known as RGB-colour.

In general, computers simply represent RGB colour values using a list of three numbers giving the level of each colour. The range of possible values is important here. Usually, colour images have so-called 8-bit colour depth, meaning the number for each channel can take any value between 0 and 255. The higher the number, the higher the intensity of light of that colour. So, for example, a pure red pixel can be represented by the numbers (255,0,0), and a yellow pixel by (255,255,0). Of course, most pixels are a mix of different values for each channel, and representing colours in this way allows for 256 × 256 × 256 possible combinations: over 16 million unique colours!
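The mixing rules and the combination count above can be sketched in a few lines of Python, representing each pixel as an (R, G, B) tuple of 8-bit values:

```python
# Pure single-channel colours as (R, G, B) tuples of 8-bit values.
red = (255, 0, 0)
green = (0, 255, 0)
blue = (0, 0, 255)

# Mixing equal, full-intensity amounts of two or three channels:
yellow = (255, 255, 0)   # red + green
magenta = (255, 0, 255)  # red + blue
cyan = (0, 255, 255)     # green + blue
white = (255, 255, 255)  # red + green + blue

# With 256 possible levels per channel, the number of unique colours is:
n_colours = 256 ** 3
print(n_colours)  # 16777216, i.e. over 16 million
```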

  • The 8-bit name is a reference to the amount of memory, using binary numbers, that a pixel takes up.
  • Binary represents numbers as ones and zeros, and 8-bit refers to a number that is eight binary digits long.
  • A three-digit number in our usual base-10 or decimal system can take 10³ = 1000 different values (0 to 999).
  • An 8-bit binary number can take 2⁸ = 256 different values.
  • A 16-bit binary number can take 2¹⁶ = 65536 values.
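The bit-depth arithmetic above is easy to verify directly in Python, where an n-digit number in base b can take b**n distinct values:

```python
# An n-bit binary number can take 2**n distinct values.
print(10 ** 3)  # 1000 values for a three-digit decimal number (0 to 999)
print(2 ** 8)   # 256 values for an 8-bit number (0 to 255)
print(2 ** 16)  # 65536 values for a 16-bit number
```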

While colour pixels are represented by three numbers, grayscale pixels are represented with just one. For an 8-bit grayscale image this number is also between 0 and 255, with 0 appearing black and 255 white. For images where a larger range of light intensities is needed, a 16-bit depth may be used, giving a possible 65536 unique values. However, this extra range comes at the cost of much greater file size.
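The file-size cost is simply the extra bytes per pixel. A minimal sketch using Python's standard-library array module (the image size and values here are made up for illustration):

```python
from array import array

# The same 10,000-pixel grayscale image (e.g. 100 x 100), stored as
# 8-bit ('B') versus 16-bit ('H') unsigned integers.
pixels = [0] * 10_000

img_8bit = array('B', pixels)   # one byte per pixel
img_16bit = array('H', pixels)  # typically two bytes per pixel

print(img_8bit.itemsize * len(img_8bit))    # 10000 bytes
print(img_16bit.itemsize * len(img_16bit))  # 20000 bytes: double the storage
```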

So now we know how to represent colours using pixels, all we need to do is arrange those pixels into ordered rows and columns to make up an image. A computer does this using a data structure called an array, where the data in each individual pixel can be accessed by referencing its row and column position. For more on this see the Python practical examples.
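The row-and-column lookup described above can be sketched with nested Python lists standing in for an image array (the course's practicals use proper array libraries, but the indexing idea is the same). Each entry is an (R, G, B) pixel, and image[row][col] retrieves one:

```python
# A tiny 2 x 2 "image" as nested lists of (R, G, B) pixels.
image = [
    [(255, 0, 0), (0, 255, 0)],      # row 0: red, green
    [(0, 0, 255), (255, 255, 255)],  # row 1: blue, white
]

# Access a pixel by its row and column position.
pixel = image[1][0]  # row 1, column 0
print(pixel)         # (0, 0, 255), i.e. blue
```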

An image of colourful plant leaves showing individual pixels, overlain with a diagram showing how the image data is stored as RGB colour channels with X and Y coordinates.

A good way to think of an array is to think of a table in a spreadsheet. You have a number of rows and columns into which you can put data, and to get that data back you just need to provide a reference to both the row and column positions. In fact, it is even possible to turn a digital image into a spreadsheet using Python, as we will see next week.

Summary

  • Digital images are made up of pixels arranged in rows and columns, stored in computers as arrays
  • Each pixel in a colour image consists of three numbers based on the intensity of Red, Green and Blue light
  • Grayscale images have just a single number representing the intensity of light, from black to white
  • Usually images are 8-bit, meaning pixel values are limited to values between 0 and 255
  • 16-bit images can store a much larger range of number values, but this comes at the cost of image file size
This article is from the free online

Introduction to Image Analysis for Plant Phenotyping

Created by
FutureLearn - Learning For Life
