
The principles of sensing

In this article, Jonathan Aiken explores the principles of sensing.
A four-wheeled robot vehicle with off-road tyres looks out across our Field Robotics site
© The University of Sheffield

Sensing is one of the key pillars of robotics, leading to perception of the world, then cognition and action. Robots use sensors to acquire data. But how do they do this, and how do they process the data? This step introduces some of these important principles of sensing.

Why do we need sensors?

Let’s start looking at sensing by asking two simple questions: Where are you? And how do you know where you are?

These questions are relatively simple for a person to answer: as I write this, I’m sitting in front of my laptop on a train to London, and I know this because I travelled to the station and boarded the train that matched my ticket.

I’m able to understand my environment and make decisions based on very abstract concepts that I observe. I can observe what’s happening around me, and use my senses to orient myself within the environment.

For a robot, the process is very similar. The first step is to acquire raw data from sensors. The second step is to organise and understand that data, using perception. Whilst this problem sounds easy to us, it is often challenging for robots.

Computers are the robot brain, and they’re digital

The first step in sensing involves transforming the world into something that the brain of the robot, the computer, will understand.

Computers are digital, meaning that at the very lowest level they only use 0s and 1s to represent data. All operations simply involve shifting these two values around. Any representation of the world, at the very simplest level, must be built from these values for a computer to make sense of it.

Therefore, the act of sensing can be thought of as transforming the real world into a discrete sequence of values, or samples, that represent a particular quantity (e.g. images, audio, velocity, temperature etc.).

In the process of sensing, we sample the environment and provide a snapshot of the world at that moment in time. As we cannot sample everything about the world, this is not a complete model. It only represents the local environment, and probably isn’t even a complete representation of that.
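As a rough sketch of this idea, the snippet below samples a continuous signal at a fixed rate and quantises each reading into one of a finite number of digital levels. The function names and the 8-bit, ±1 range are illustrative choices, not something specified in this article:

```python
import math

def sample_signal(f, duration_s, rate_hz, levels=256, v_min=-1.0, v_max=1.0):
    """Sample a continuous signal f(t) at a fixed rate and quantise
    each reading to one of a finite number of digital levels."""
    n = int(duration_s * rate_hz)
    samples = []
    for i in range(n):
        t = i / rate_hz                    # time of this sample
        v = min(max(f(t), v_min), v_max)   # clip to the sensor's range
        # map the continuous value onto one of `levels` discrete steps
        q = round((v - v_min) / (v_max - v_min) * (levels - 1))
        samples.append(q)
    return samples

# A 1 Hz sine "signal" sampled at 8 Hz with 8-bit quantisation
readings = sample_signal(math.sin, duration_s=1.0, rate_hz=8)
print(readings)
```

Note the two separate losses here: we only look at the signal at discrete moments (sampling), and each reading is rounded to the nearest representable level (quantisation). Both are reasons why a robot’s digital snapshot of the world is never a complete model.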

Sensing for a stranded robot

Imagine a robot stranded somewhere in the world. What sensing process can the robot use to understand where it is?

GPS. In the outdoor world, the Global Positioning System (GPS) is the most common and easiest-to-use technique for estimating position. By measuring the time delay of signals from a collection of satellites in known orbits around the globe, and using trilateration, a position can be determined to an accuracy of a few metres.
GPS is sufficient to give an approximate location (to an accuracy of about 3.5 metres). This is useful, but could be problematic, e.g. if we wished the robot to cross even a moderately wide footbridge. It also does not provide any information about the local environment, so we could not avoid a person walking across the bridge.
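To make the idea concrete, here is a minimal 2D sketch: given the known positions of some beacons, and ranges to each one derived from signal time delays (distance = speed of light × delay), we can solve for our own position. Real GPS works in 3D and uses a fourth satellite to correct the receiver’s clock error; all names below are hypothetical.

```python
def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Recover the (x, y) position that lies at distance r1 from p1,
    r2 from p2 and r3 from p3, by intersecting the three range circles."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting pairs of circle equations removes the squared terms,
    # leaving two linear equations a*x + b*y = c
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Beacons at known positions; all three measured ranges happen to
# place us at (5, 5)
pos = trilaterate_2d((0, 0), 50**0.5, (10, 0), 50**0.5, (0, 10), 50**0.5)
print(pos)
```

In practice the measured ranges are noisy, so the circles never intersect at a single point and the position is found by a least-squares fit rather than an exact solve.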
Therefore we need more detailed information about the surroundings. Typically this is gathered using devices like RADAR (Radio Detection and Ranging) or, even better, LIDAR (Light Detection and Ranging).
RADAR. Radar transmits a radio frequency beam which is reflected when it comes into contact with an object. By measuring the reflections, objects can be detected.
Because radio waves have a relatively long wavelength, this system doesn’t detect small objects well, so it doesn’t solve the navigation problem for our robot.
LIDAR. Lidar is an object-detection method similar to radar, but it uses light instead of radio signals. Lidar measures laser light reflected from objects; because this light has a much smaller wavelength than radio signals, lidar can detect smaller objects than radar.
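Both radar and lidar ultimately turn a time measurement into a distance: a pulse travels out to the object and back, so the range is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name is hypothetical):

```python
C = 299_792_458.0  # speed of light in metres per second

def time_of_flight_range(round_trip_s):
    """Convert the round-trip time of a reflected pulse into a range.
    The pulse travels out and back, hence the division by 2."""
    return C * round_trip_s / 2

# A reflection arriving 200 nanoseconds after the pulse was emitted
print(time_of_flight_range(200e-9))
```

A 200 ns round trip corresponds to a range of roughly 30 metres, which gives a feel for how precisely these sensors must measure time.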
So, to solve the problem of our stranded robot, we can use a combination of sensing: GPS for approximate localisation in the world, radar for detecting large objects nearby like a building, and lidar to detect smaller objects, like people. These sensors can be combined using sensor fusion algorithms to build a complete picture of the world.
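As a toy illustration of sensor fusion, the sketch below combines two noisy estimates of the same quantity by weighting each one by its inverse variance, so the more certain sensor counts for more. This is the core idea behind the update step of filters widely used in robotics; real fusion algorithms are considerably more involved, and the numbers here are invented for illustration.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates
    of the same quantity."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # fused estimate is more certain than either input
    return fused, fused_var

# GPS says 12.0 m but is noisy (variance 9.0); lidar says 10.0 m (variance 1.0)
est, var = fuse(12.0, 9.0, 10.0, 1.0)
print(est, var)
```

The fused estimate lands much closer to the lidar reading than to the GPS one, and its variance is smaller than either sensor’s alone, which is exactly why combining sensors pays off.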


What factors do you think could impact the reliability of sensors in the real world?
This article is from the free online course Building a Future with Robots.
