
Range of binary numbers


In this step you will explore how the range of values that a binary number can represent depends on the number of bits available.

With denary numbers, each extra digit multiplies the range of values by 10, because denary is base 10. For example, when there is one digit available in denary, the range of values is 0–9, so there are 10 different values.

Increasing the number of digits by one multiplies the range by 10, from 0–9 (10 values) to 0–99 (100 values). Adding a third digit multiplies the range by 10 again, to 0–999 (1,000 values).
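The pattern above can be checked with a short sketch in Python (the function name `denary_values` is just an illustrative choice, not from the article):

```python
# Number of distinct values for a given number of denary digits:
# each extra digit multiplies the count by 10, because denary is base 10.
def denary_values(digits):
    return 10 ** digits

print(denary_values(1))  # 10 values: 0-9
print(denary_values(2))  # 100 values: 0-99
print(denary_values(3))  # 1000 values: 0-999
```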

Increasing the range of binary numbers

Since binary is base 2, each additional binary digit (bit) multiplies the range of numbers by 2.

For example, the range of values of a 5-bit binary number is 0–31, giving 32 different values. The smallest value is represented as five 0s:

16 8 4 2 1
0 0 0 0 0

and the largest number is represented as five 1s:

16 8 4 2 1
1 1 1 1 1

which in denary is 16 + 8 + 4 + 2 + 1 = 31

If the number of bits increases by one, from five to six, the range doubles from 0–31 (32 values) to 0–63 (64 values):

32 16 8 4 2 1
1 1 1 1 1 1

which in denary is 32 + 16 + 8 + 4 + 2 + 1 = 63
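The place-value sums above can be verified directly: Python's built-in `int` can read a string of bits as a base-2 number.

```python
# Reading bit patterns as base-2 numbers checks the place-value sums:
print(int("11111", 2))   # 16 + 8 + 4 + 2 + 1 = 31
print(int("111111", 2))  # 32 + 16 + 8 + 4 + 2 + 1 = 63
```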

What do you think the range would be for an 8 bit number?

128 64 32 16 8 4 2 1
               

The maximum value that can be represented by an 8-bit number is 255, so the range is 0–255 (256 values):

128 64 32 16 8 4 2 1
1 1 1 1 1 1 1 1

You can work out the number of values quickly by calculating 2^n, where n is the number of bits available; for example, 2^8 = 256 values. The range of values is 0 to 2^n − 1; for example, 0 to 2^8 − 1 = 0–255.
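This rule translates directly into code. The helper names below (`value_count`, `value_range`) are illustrative choices, not part of any standard:

```python
# For an n-bit unsigned binary number:
def value_count(n):
    return 2 ** n  # 2^n distinct values

def value_range(n):
    return 0, 2 ** n - 1  # smallest and largest representable values

print(value_count(8))  # 256
print(value_range(8))  # (0, 255)
```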

Why is the range important?

Computers follow certain rules based on the number of bits available for a binary number. This is key to how computers communicate and interpret binary numbers to represent things digitally, from text to images to sound.

Character sets

The American Standard Code for Information Interchange (ASCII) was developed to create an international standard for encoding the Latin alphabet, and was adopted in 1963 so that information could be exchanged between computers consistently.

The ASCII character set uses 7 bits, allowing for 128 different characters. These represent the lower-case and upper-case letters of the English alphabet, numbers, symbols, and some control commands.

Moving to 8-bit computing technology meant there was one extra bit to be used. With this extra bit, Extended ASCII encodes up to 256 characters, which is enough for most European languages.
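Python's built-in `ord` returns a character's code, so you can confirm that every ASCII character fits in 7 bits:

```python
# Every ASCII character has a code below 128, so it fits in 7 bits;
# Extended ASCII uses all 8 bits for codes 0-255.
for ch in "Az09!":
    code = ord(ch)
    print(ch, code, format(code, "07b"))  # character, code, 7-bit pattern
    assert code < 128  # 2^7 = 128 possible 7-bit values
```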

Unicode was developed to solve the problem of representing characters from languages across the world. The Unicode encoding method UTF-8 uses between 8 and 32 bits per character. This allows for over 1 million characters to be represented, covering the characters you would find in many languages.

RGB colours

Another example is the RGB (red, green, blue) system that is often used for computer screens and coloured lighting. Each colour channel is represented by 8 bits and is measured on a scale from 0 (“none of this colour”) to 255 (“the most intense colour possible”). This means there are 256 shades of each colour of light (256 reds, 256 greens, and 256 blues).

A very large variety of colours can be represented in this way. Because red, green, and blue can each be shown at different intensities, these colours can, when mixed together, create over 16 million unique shades:

256 reds × 256 greens × 256 blues = 16,777,216 shades of colour
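The same multiplication, computed from the number of bits:

```python
# Each RGB channel uses 8 bits, giving 2^8 = 256 intensity levels.
levels = 2 ** 8
total_colours = levels ** 3  # red x green x blue combinations
print(total_colours)  # 16777216
```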

The number of colours is linked directly to the number of available bits.

What’s the range?

Try to work out the range of values for the following numbers of bits and post your answers in the comments:

  • 10
  • 32
  • 64

In the next step you will look at binary shifts, and how shifting a binary number left or right affects the value.

This article is from the free online

Understanding Maths and Logic in Computer Science

Created by
FutureLearn - Learning For Life
