
This content is taken from The University of Sheffield's online course, Building a Future with Robots. Join the course to learn more.

[0:05] If you send a robot out into the real world where there are people around, potentially people who know nothing about the robot (a great example is a driverless car out in a city, with pedestrians and so on), those people expect the robot to behave in a certain way: they don't expect to be run over by it. They expect it to behave much as a person would, in other words, in a way that is responsible. A human driving a car doesn't just try to run people over, because they have responsibilities, and you expect the robot to behave in the same sort of way.

[0:37] But that is actually quite challenging for a robot, because a robot will do what it is programmed to do. If you don't program it to make these responsible decisions, it won't make them. It will drive into things because it doesn't know they are there, or because it doesn't know how the environment is going to change. So the real world, where the environment is very dynamic, can be very challenging.

[1:01] A great example of responsible decision-making that humans deal with every day: if you're driving down the motorway and the car in front of you brakes suddenly, you have a couple of choices. You can swerve, you can brake, or you can do various other things. Now, if you swerve out into another lane, you might end up in front of another car, which may cause a bigger accident than if you simply brake and stay in your lane. That is a very challenging set of thoughts that goes through the human brain.

[1:29] Trying to do some of this in robots is really quite challenging, because of the process you have to go through. First, you have to sense what is going on in the environment and work out what that means for the future. So you've detected that the car in front is stopping; if you do nothing, you're going to hit it, so you have to take some action. But rather than taking just any action, you have to think it through: what happens if I stop? What happens if I swerve? What are the consequences of any action I might take?

[1:59] Then you need to weigh those up, to decide which is the best and most responsible action to take: the one that minimises damage not only to yourself, but to other people in the environment. The human brain is very good at doing this; we have honed the skill of being responsible decision-makers. But robots are only just starting to learn how to do it, and it is down to us to program them in such a way that this kind of behaviour is built in.
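The sense, predict, weigh-up loop Owen describes can be sketched in a few lines of code. This is a minimal illustrative example, not part of the course or of any real autonomous-driving system: the function names, the toy "harm scores", and the motorway scenario are all invented here to make the idea concrete.

```python
# Illustrative sketch of responsible decision-making:
# predict the outcome of each candidate action, then pick the one
# that minimises combined harm to the robot itself and to others.
# All names and numbers below are hypothetical.

def choose_responsible_action(state, actions, predict, harm):
    """Return the action whose predicted outcome has the lowest total harm."""
    best_action, best_cost = None, float("inf")
    for action in actions:
        outcome = predict(state, action)              # what happens if I do this?
        cost = harm(outcome, "self") + harm(outcome, "others")
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action

# Toy motorway scenario: the car ahead brakes suddenly.
# Invented harm scores for each predicted outcome (higher = worse).
outcomes = {
    "brake":  {"self": 1, "others": 0},   # mild risk of being rear-ended
    "swerve": {"self": 2, "others": 5},   # may pull in front of another car
    "nothing": {"self": 9, "others": 3},  # collision with the car ahead
}

action = choose_responsible_action(
    state=None,
    actions=outcomes.keys(),
    predict=lambda state, a: outcomes[a],
    harm=lambda outcome, who: outcome[who],
)
print(action)  # "brake": lowest combined harm (1) beats swerving (7) or doing nothing (12)
```

Note that the hard part in a real robot is not this final comparison but everything feeding into it: sensing the environment reliably and predicting outcomes in a world that keeps changing.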

The challenges of making responsible decisions

So far, we have seen that in order for a robot to be autonomous it first needs to be able to sense its environment and then to respond to it. But how does a robot decide how to respond?

In this video, Owen explains why we need to design robots that are able to make responsible decisions.


What do you think responsibility means for a robot?
