
The challenges of making responsible decisions

Owen explains why we need to design robots that are able to make responsible decisions.
If you’ve sent a robot out into the real world where there are people around, potentially also people who don’t really know anything about the robot - a great example of that is a driverless car, driving out in a city, where you’ve got pedestrians and things like that - the people expect the robot to behave in a certain way, so they don’t expect to be run over by it. They expect it to behave quite similarly to how a person would behave. In other words, in a way that is responsible. A human driving a car doesn’t just try to run people over, because they have responsibilities. And you expect the robot to behave in the same sort of way.
But that’s actually quite challenging for a robot to do, because a robot will do what it’s programmed to do. And so if you don’t program it in a way that prepares it to make these responsible decisions, it won’t make them. It will drive into things because it doesn’t know they’re there, or it doesn’t know how the environment’s going to change, or anything like that. So it can be very challenging in the real world, where the environment is very dynamic.
A great example of responsible decisions in the real world, if you like, that humans deal with every day is, if you’re driving down the motorway, and the car in front of you brakes suddenly, you’ve got a couple of choices you can make there. You can swerve, you can brake or you can do various things. Now, if you swerve out into another lane, you might then end up in front of another car, which may cause a bigger accident than if you just brake and stay in line. That is a very challenging set of thoughts that goes through the human brain.
And trying to do some of that stuff in robots is really quite challenging, because of the process that you have to go through there. You first of all have to sense what’s going on in the environment and work out what that means for the future. So, you’ve detected that the car in front is stopping. If you don’t do anything, you’re going to hit it, so you have to take some action. But then, rather than just taking any old action, you have to think through, what happens if I stop? Or what happens if I swerve? So what are the consequences of any action that I might take in the future?
And then, you kind of need to weigh those up to say which of these is the best action for me to take, the most responsible action for me to take, to not only minimise damage to myself, but minimise damage to other people in the environment. The human brain’s very good at doing this. We’ve sort of honed this skill of being responsible decision-makers. But robots are only just really starting to learn how to do this, and it’s down to us to program them in such a way that this kind of behaviour is inbuilt.
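The sense → predict → weigh-up → act process Owen describes can be sketched in a few lines of code. This is a deliberately toy illustration, not a real driving controller: every name here (`Outcome`, `predict_outcome`, `choose_action`) and every number is a made-up assumption for the sake of the example, and a real system would use far richer models of the world.

```python
# Toy sketch of responsible decision-making: predict the consequences of
# each possible action, then pick the one that minimises expected harm,
# weighing harm to others more heavily than harm to the robot itself.
from dataclasses import dataclass


@dataclass
class Outcome:
    harm_to_self: float    # estimated damage to the robot (0..1)
    harm_to_others: float  # estimated damage to other road users (0..1)


def predict_outcome(action: str, gap_m: float, closing_speed_ms: float) -> Outcome:
    """Toy forward model: estimate what each action leads to, given the
    gap to the braking car ahead and how fast we are closing it."""
    if action == "brake":
        # Braking is only risky if the gap is small relative to our speed.
        collision_risk = max(0.0, 1.0 - gap_m / (closing_speed_ms * 2.0 + 1e-6))
        return Outcome(harm_to_self=collision_risk,
                       harm_to_others=0.1 * collision_risk)
    if action == "swerve":
        # Swerving avoids the car ahead but may put us in front of another car.
        return Outcome(harm_to_self=0.2, harm_to_others=0.4)
    # Doing nothing means we hit the braking car.
    return Outcome(harm_to_self=0.9, harm_to_others=0.5)


def total_cost(o: Outcome, others_weight: float = 2.0) -> float:
    # A "responsible" robot weighs harm to others more than harm to itself.
    return o.harm_to_self + others_weight * o.harm_to_others


def choose_action(gap_m: float, closing_speed_ms: float) -> str:
    """Weigh up the predicted consequences and pick the least harmful action."""
    actions = ["brake", "swerve", "do_nothing"]
    return min(actions,
               key=lambda a: total_cost(predict_outcome(a, gap_m, closing_speed_ms)))


# With a comfortable gap, braking is clearly safest; with almost no gap
# and a high closing speed, the model prefers to swerve.
print(choose_action(30.0, 5.0))   # plenty of room to brake
print(choose_action(1.0, 20.0))   # braking would not avoid a collision
```

The key point the example tries to capture is that the robot never just reacts: it simulates each candidate action, scores the consequences, and only then commits - and the notion of "responsibility" lives entirely in how the cost function is written.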

So far, we have seen that in order for a robot to be autonomous it first needs to be able to sense its environment and then to respond to it. But how does a robot decide how to respond?

What do you think responsibility means for a robot?
This article is from the free online course Building a Future with Robots, created by FutureLearn.
