How can robots make responsible decisions?

Professor Sandor Veres outlines some of the challenges of designing robots to make responsible decisions. Let's explore.
A wheeled robot with a mounted arm
© Sheffield Robotics

Before acting, robots need to plan and make decisions, which is a key part of cognition. But how do robots plan actions that will achieve their goals, whilst guaranteeing the safety of those around them?

In this article, we’ll look at some of the ways that robots can be programmed to make responsible decisions.

The benefit of foresight

As humans, we have the ability to study a situation and make sure we understand its dynamics before making a responsible decision. Understanding environmental dynamics means we have the foresight to predict the consequences of our actions and the actions of others, as well as events in nature.

If, whilst driving, we notice a roadblock up ahead, we will start to slow down so that we can come to a gentle stop. This is an example of foresight, which is essential for making responsible decisions in complex environments.

Unfortunately for robots, foresight is not so simple. It can be built in part, as we have seen already this week, by interpreting the environment. But the robot needs a sufficiently detailed and complete model of the environment, including rapidly updated positions of nearby vehicles and pedestrians.
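
As a rough illustration of how such foresight can be computed, here is a minimal sketch in Python (the names and numbers are our own inventions, not from any particular robotics library): the robot predicts the future positions of tracked objects with a constant-velocity model and checks whether its planned motion keeps a safe distance.

    # Minimal foresight sketch: constant-velocity prediction of tracked objects.
    # All names and numbers are illustrative, not from a real robotics stack.

    from dataclasses import dataclass

    @dataclass
    class Track:
        x: float   # position along the road (metres)
        v: float   # speed (metres per second)

    def predict(track: Track, dt: float) -> float:
        """Predict a tracked object's position dt seconds ahead (constant velocity)."""
        return track.x + track.v * dt

    def safe_to_proceed(robot: Track, others: list[Track],
                        horizon: float = 3.0, min_gap: float = 5.0) -> bool:
        """Check the robot keeps at least min_gap metres from every object
        at each half-second step over the prediction horizon."""
        t = 0.0
        while t <= horizon:
            robot_x = predict(robot, t)
            if any(abs(predict(o, t) - robot_x) < min_gap for o in others):
                return False
            t += 0.5
        return True

    robot = Track(x=0.0, v=10.0)           # our vehicle, travelling at 10 m/s
    roadblock = Track(x=25.0, v=0.0)       # stationary obstacle ahead
    print(safe_to_proceed(robot, [roadblock]))   # False: time to start braking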

Predicting the consequences of planned actions

The consequences of a robot’s actions can vary. An action may result in:

  • a physical change of the environment
  • a change in the robot’s location
  • legal consequences (such as a drone not being allowed to fly over a specific area)
  • social consequences (such as causing a nuisance to neighbours)

There are two computational approaches for predicting the consequences of a robot’s actions (both are sketched in the example after this list):

  1. We can physically simulate the robot’s actions in the environment to predict the future physical state of the world.
  2. We can use logic to infer legal and social consequences in terms of breaking rules of behaviour.
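
To make the two approaches concrete, here is a toy sketch, assuming a drone that moves along a single line: a physics step predicts the future state (approach 1), and a simple logical check flags a rule violation such as entering a no-fly zone (approach 2). The zone boundaries and all names are invented for illustration.

    # Toy sketch of the two approaches: (1) simulate physics, (2) check rules.
    # The no-fly zone and all numbers are invented for illustration.

    def simulate(position: float, velocity: float, dt: float) -> float:
        """Approach 1: predict the physical state after dt seconds."""
        return position + velocity * dt

    def breaks_rules(position: float, no_fly=(100.0, 150.0)) -> list[str]:
        """Approach 2: infer rule violations at a given state by logic."""
        violations = []
        if no_fly[0] <= position <= no_fly[1]:
            violations.append("entered a no-fly zone")
        return violations

    # Predict the consequences of flying forward at 20 m/s for 6 seconds.
    future_position = simulate(position=0.0, velocity=20.0, dt=6.0)   # 120 m
    print(breaks_rules(future_position))   # ['entered a no-fly zone']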

Planning while keeping to rules

If concrete actions have not been planned in advance, the robot can use mathematical optimisation methods to decide how to act.

Mathematical optimisation is a technique for achieving the best possible result subject to given restrictions, called constraints.

This type of planning can go beyond computing the consequences: it can choose the action whose consequences give the best outcome.
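
As a rough sketch of the idea, with invented numbers and a simple search standing in for a real optimiser: the robot can enumerate candidate braking actions, keep only those that satisfy the constraint of stopping before a roadblock, and pick the gentlest one.

    # Illustrative constrained optimisation: choose the gentlest braking rate
    # that still stops the vehicle before a roadblock. Numbers are made up.

    v = 10.0   # current speed (m/s)
    d = 40.0   # distance to the roadblock (m)

    def stopping_distance(decel: float) -> float:
        """Distance covered while braking uniformly at `decel` m/s^2."""
        return v ** 2 / (2 * decel)

    # Candidate actions: deceleration rates from gentle to harsh.
    candidates = [0.5 + 0.1 * i for i in range(80)]   # 0.5 .. 8.4 m/s^2

    # Feasible set: actions satisfying the constraint (stop before the block).
    feasible = [a for a in candidates if stopping_distance(a) <= d]

    # Objective: the best outcome here is the most comfortable (smallest) braking.
    best = min(feasible)
    print(f"brake at {best:.2f} m/s^2, stopping in {stopping_distance(best):.1f} m")
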
Game theory is a type of planning where each robot is considered a player in a game and receives rewards dependent on the actions of the whole robotic team. We’ll be looking at game theory in more detail in Week 3, when we investigate how robots can work together.
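
As a small taste of what is to come, here is a toy sketch with an invented reward function: each robot’s reward depends on the joint choice of the team, and the best joint action can be found by comparing the rewards.

    # Tiny game-theory sketch: two robots each choose a room to search.
    # Rewards depend on the joint choice; all numbers are invented.

    rooms = ["room A", "room B"]

    def team_reward(choice_1: str, choice_2: str) -> int:
        """Both robots score higher when they split up (cover more ground)."""
        return 5 if choice_1 != choice_2 else 2

    # Enumerate every joint action and pick the one with the highest team reward.
    joint_actions = [(a, b) for a in rooms for b in rooms]
    best = max(joint_actions, key=lambda ab: team_reward(*ab))
    print(best, "->", team_reward(*best))   # ('room A', 'room B') -> 5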

Using a well-defined set of primitive actions

Planning in continuous time over long periods can be computationally heavy. Planning can be simplified if the robot performs only a limited set of well-defined actions.

Primitive actions can reduce planning complexity by separating one long, complicated task into a set of smaller, simpler discrete sections. These sections are like building blocks that can be chained together to form complex behaviour.

Example: Consider how you navigate through a building. “If I move straight along this corridor, turn left and go through the first door on the right, then I will be in the kitchen”.

The primitive actions in this example are:

  1. “going straight”,
  2. “turning left”,
  3. “going through a door”.

These are much simpler to sequence together as a plan than breaking the problem down into a large number of individual walking steps. Note that the primitive actions have sufficient detail to define and accomplish the task but are not overly detailed.

The associated discrete abstractions of perception are “recognising the end of the corridor”, “finding the first door on the right” and “being in the kitchen”.
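
A minimal sketch of this idea in Python: each primitive action is a named building block, and the plan to reach the kitchen is simply the sequence in which the blocks are chained. (The bodies only print what a real robot controller would do.)

    # Sketch of planning with primitive actions: a plan is just a sequence of
    # named building blocks. Action names follow the corridor example above.

    def go_straight():
        print("going straight until the end of the corridor is recognised")

    def turn_left():
        print("turning left")

    def go_through_door():
        print("going through the first door on the right")

    # The plan to reach the kitchen, expressed as a chain of primitives.
    plan = [go_straight, turn_left, go_through_door]

    for action in plan:
        action()   # execute each building block in turn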

Symbolic planning

Symbolic planning uses a rapidly updated model of the environment to decide on the most appropriate action for the robot.

Symbolic planning also needs to include rules of behaviour and take into account the functional, safety and societal consequences of actions taken by the robot.

For instance, robots searching a building need to take into account the damage their search may cause, their own safety in the case of a burning building, and the level of disturbance they may cause to related human activity.

All this can be based on symbolic computations about the timing of events (temporal logic) and about the knowledge and intentions of others (epistemic logic).
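
To give a flavour of such symbolic computations, here is a toy sketch of a temporal rule with invented event names: the planner checks, over a sequence of events, that the robot always announces itself before entering a room.

    # Sketch of a symbolic, temporal-logic-style check: verify that a rule about
    # the timing of events holds over a recorded sequence of robot actions.
    # The rule and event names are invented for illustration.

    def always_announce_before_entering(trace: list[str]) -> bool:
        """Temporal rule: every 'enter_room' must be preceded by an 'announce',
        with each entry needing its own announcement."""
        announced = False
        for event in trace:
            if event == "announce":
                announced = True
            elif event == "enter_room":
                if not announced:
                    return False   # rule broken: entered without announcing
                announced = False  # each entry needs a fresh announcement
        return True

    good_trace = ["announce", "enter_room", "announce", "enter_room"]
    bad_trace = ["announce", "enter_room", "enter_room"]
    print(always_announce_before_entering(good_trace))   # True
    print(always_announce_before_entering(bad_trace))    # False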

© The University of Sheffield
This article is from the free online course Building a Future with Robots on FutureLearn.
