A four-wheeled robot vehicle with off-road tyres looks out across our Field Robotics site

The principles of sensing

Let’s start looking at sensing by asking two simple questions: where are you, and how do you know? The first question is relatively simple to answer: as I write this, I’m sitting in front of my laptop travelling to London, and I know this because I travelled to the station and boarded the correct train for my ticket. I’m able to understand my environment and make decisions based on very abstract concepts that I observe. I can observe what’s happening around me and use my senses to orientate myself in the environment.

For a robot, the process is very similar, but unlike us, it doesn’t have eyes, ears or a brain in the same sense that we do. The first step of the process is to build an understanding of what is around it. Whilst this sounds easy to us, it is a persistent problem for robots. If a robot is driving down a corridor and doesn’t sense that there is a person standing in the way, it will simply hit them – how could it do anything else? It didn’t know they were there!

The first step in sensing involves transforming the world into something that a computer can understand. At the very lowest level, computers only use 0s and 1s; all computation simply involves shifting these two values around. Any representation of the world must therefore, at the simplest level, be built from these values for a computer to make sense of it. The act of sensing can thus be thought of as transforming the real world into a series of values, or samples, that represent a particular quantity. An easy example is temperature, which can be measured using a thermocouple. This converts temperature proportionally into a voltage difference, which can then be measured and converted into a binary representation.
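To make this concrete, here is a minimal sketch of that conversion in Python, assuming a hypothetical linear sensor that outputs 10 mV per degree Celsius and a 10-bit analogue-to-digital converter (ADC) with a 5 V reference. A real thermocouple produces much smaller voltages and needs amplification, but the quantisation step is the same idea.

```python
def adc_read(voltage, v_ref=5.0, bits=10):
    """Quantise an analogue voltage into an n-bit integer,
    as a microcontroller's ADC would."""
    levels = 2 ** bits
    code = int(voltage / v_ref * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the valid range

def code_to_temperature(code, v_ref=5.0, bits=10, volts_per_degree=0.01):
    """Map an ADC code back to degrees, assuming a (hypothetical)
    sensor that outputs 10 mV per degree Celsius."""
    voltage = code / (2 ** bits - 1) * v_ref
    return voltage / volts_per_degree

reading = adc_read(0.22)              # 0.22 V from the sensor...
print(reading)                        # ...becomes the integer 45...
print(code_to_temperature(reading))   # ...which maps back to ~22 °C
```

Notice that the conversion loses information: every voltage between two quantisation levels produces the same integer, so the computer’s view of the temperature is only ever an approximation.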

However, for a robot to understand the world, the situation is more complex. The world is continuous, meaning that changes happen continually. A computer processes information by sampling it, and between these samples things can change without being registered. Consider a time-lapse CCTV (or security) camera which takes an image every second. If this were to capture a person walking across the view, that person would appear to ‘jump’ between frames, and if they changed their speed, especially randomly and with large variations, they would be very difficult to follow. It is therefore important that the rate at which sampling is undertaken is appropriate for the maximum speed at which we expect the world to change. In fact, the sampling rate must be at least twice the maximum frequency that we need to observe (this is the Nyquist–Shannon sampling theorem). This ensures that we capture enough information to understand the signal, even when variations are happening at their quickest.
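We can see the sampling theorem in action with a short sketch: sample a sine wave both above and below twice its frequency, and ask a Fourier transform which frequency it sees. The specific numbers here (a 5 Hz signal, sampled at 50 Hz and 6 Hz) are illustrative choices, not from the article.

```python
import numpy as np

def dominant_frequency(signal_hz, sample_rate_hz, duration_s=2.0):
    """Sample a sine wave and return the strongest frequency an FFT finds."""
    t = np.arange(0, duration_s, 1.0 / sample_rate_hz)
    samples = np.sin(2 * np.pi * signal_hz * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum)]

# Sampled well above the Nyquist rate (2 x 5 = 10 Hz), the 5 Hz signal
# is recovered correctly...
print(dominant_frequency(5, 50))   # ~5.0 Hz
# ...but sampled below it, the signal 'aliases' and appears as a
# slower signal that was never really there.
print(dominant_frequency(5, 6))    # ~1.0 Hz
```

The under-sampled case is exactly the CCTV problem above: the fast-moving person appears to do something slower and stranger than what actually happened.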

Therefore, by sensing, we sample the environment and obtain a snapshot of the world at that moment. As we cannot sample everything about the world, this is not a complete model: it only represents the local environment, and probably isn’t a complete representation even of that.

Let’s return to our robot, stranded somewhere in the world – what process can it use to understand where it is? In the outdoor world, the Global Positioning System (GPS) is the most common and easiest technique to use. By measuring the time delay of signals from a collection of satellites whose positions in orbit are precisely known, and using trilateration, the receiver’s position can be determined to an accuracy of a few metres. This is enough to give an initial position; in the case of a car, it is enough to determine a road name and therefore something about the surroundings. However, it isn’t sufficient on its own for navigation, as the horizontal accuracy is typically only around 3.5 m. In the worst case, our robot could think it is several metres from where we hope it would be, which could have major consequences if we wished it to cross even a moderately wide footbridge. GPS also provides no information about the local environment, so we couldn’t avoid a person walking across the bridge.
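To illustrate the geometry, here is a simplified, two-dimensional sketch of trilateration: given known ‘satellite’ (anchor) positions and measured distances, it linearises the range equations and solves a small least-squares problem. A real GPS receiver works in three dimensions and must also solve for its own clock error, so treat this only as the core idea; the anchor positions and noise level are made-up values.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2D position from known anchor positions and measured
    distances, by subtracting the first range equation from the others
    to obtain a linear least-squares problem."""
    (x0, y0), r0 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return solution

# Three anchors and noisy distances to an unknown point near (3, 4).
anchors = [(0, 0), (10, 0), (0, 10)]
true_pos = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) + np.random.normal(0, 0.1)
          for a in anchors]
print(trilaterate(anchors, ranges))  # close to (3, 4)
```

The small noise added to each range is what limits the final accuracy – the few-metre uncertainty of GPS comes from exactly this kind of error in the measured signal delays.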

Therefore we need more detailed information about the surroundings. Typically this comes from devices like lidar (Light Detection and Ranging). Lidar is an object-detection method similar to radar (Radio Detection and Ranging), but it uses light instead of radio signals in order to better detect small objects. Radar transmits a radio-frequency beam which is reflected when it comes into contact with an object; by measuring the reflections, objects can be detected. This doesn’t scale well to small objects, so it isn’t much use to our robots. Lidar uses the same principle but measures reflected light from a laser, which has a much smaller wavelength (or higher frequency) and so can detect smaller objects. This provides fine-grained detection of the nearby world, giving knowledge of obstacles that may be in the robot’s way and enabling local navigation around them.
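Here is a minimal sketch of how a robot might use a planar lidar scan: convert each pulse’s round-trip time to a range, turn the (bearing, range) pairs into x/y points in the robot’s frame, and flag anything inside a corridor directly ahead. The corridor width, look-ahead distance and scan values are illustrative assumptions, not from the article.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def time_of_flight_to_range(round_trip_seconds):
    """A lidar times how long a laser pulse takes to reflect back;
    the one-way distance is half the round trip at the speed of light."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def scan_to_points(angles_deg, ranges_m):
    """Convert a planar scan of (bearing, range) pairs into x/y points
    in the robot's frame, with x pointing straight ahead."""
    return [(r * math.cos(math.radians(a)), r * math.sin(math.radians(a)))
            for a, r in zip(angles_deg, ranges_m)]

def obstacle_ahead(points, corridor_half_width=0.5, max_lookahead=2.0):
    """Flag any return inside a corridor directly in front of the robot
    -- a very simple local obstacle check."""
    return any(0.0 < x < max_lookahead and abs(y) < corridor_half_width
               for x, y in points)

# A pulse returning after ~66.7 nanoseconds hit something ~10 m away.
print(round(time_of_flight_to_range(66.7e-9), 2))  # ~10.0

scan = scan_to_points([-30, 0, 30], [4.0, 1.2, 4.0])
print(obstacle_ahead(scan))  # True: a return 1.2 m dead ahead
```

This is the information GPS cannot give us: a person stepping onto the footbridge shows up as a cluster of close returns, and the robot can stop or steer around them.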

In the next step, we’ll look at the different kinds of sensors that have been fitted to a drone so that it can build a picture of its surroundings.
