
Humans making responsible decisions

Noel Sharkey talks about some of the ethical considerations that humans have to make regarding using robots.
Of course, one of the hopes for the future is autonomous cars, autonomous vehicles. And there’s a lot of talk about them saving a lot of lives on the road. I tend to believe that, in the long run, that will be the case. Google is the leading manufacturer of autonomous cars. And the thing about the Google car is that the new one drives at 25 miles an hour and has no steering wheel at all. So blind people can get into it, whatever. It’s quite an amazing thing if you think about it. But the state of California has now made them put a steering wheel on it, because they’re concerned about how it will operate without one.
The law is that you have to sit with your hands above the steering wheel while the car drives itself, ready to grab it if there’s an emergency. Now, it’s hard enough to concentrate for two or three hours on the motorway, never mind sitting there when you’re not in control at all. People are going to drift away. They’re going to start looking at their phone - people already look at their phone when they’re driving - start looking at their phone, reading. Suddenly, an emergency comes up, and they’re not going to be able to handle it. Google is starting to realise this. This is what happens with autopilots in planes.
Suddenly, you ask the pilot to get involved, and the plane crashes because they’re not ready for it. What happens when one of these cars gets in an accident? There are some really crazy things being said at the moment in the United States. I mean, there are about ten car manufacturers with these, and they’re kind of on the brink at the moment. But the idea is that in an accident the car will work on a kind of ethics called consequentialist ethics, where you try to minimise the number of people harmed. So the car comes into an accident, searches around.
There’s one guy standing over there on his bicycle, there’s a group of children over at the other side, and there’s a group of old people in a bus. The car then decides to hit the guy who’s standing alone. And I can’t bear this, because the idea is that - what has happened now is that - the car has legally become a weapon, because it’s targeting a particular individual. And I don’t know what they’re going to do about that at all, I have no idea. You could make it random, but that’s not a good solution either. But then there are a lot of worries about accountability. Google are the only people in California who insure their car.
They have to be totally liable, they’ve said. And they’re not being allowed to sell them either under new laws. They can only lease them. But they have got to be completely liable for any accidents, because what happens if, for instance, a car is parking. It’s got sensors on it, and a child steps in front of the car. A truck comes past with a lot of mud on it, throws the mud over the sensors as it’s driving past, and the car hits the child and kills them. Who’s responsible? Is it the truck driver? Is it the manufacturer? Is it the driver, who should have kept the sensors clean? So there’s a whole string of problems there, and it’s not settled yet.
One of the big concerns for me at the moment is the use of robots in the military. And you have lots of different kinds. You’ve got bomb disposal robots, which are very good for protecting our soldiers. And you have robots that are remote controlled, such as the drones that are flying around, although they are what you call semi-autonomous, so they fly themselves to a certain extent. But when you’re targeting, you fly them down and find a target. The worrying bit, really, for me at the moment, is that several countries are developing autonomous weapon systems - that’s robot weapons that can go out on their own, find their own targets, and kill them without any human supervision.
The UK have the Taranis intercontinental combat aircraft made by BAE Systems, and that’s been tested extensively in Australia. That can find its own targets and kill them. The United States is the biggest player. They have the X-47B, which is also a fully autonomous aircraft. They have ground vehicles, large truck-like ones that can be armed. They have autonomous submarines now and gunboats. Israel’s another big player. They’re hoping that within the next five years they can have a full army, navy and air force that are all autonomous to go out and fight for them. China are getting really big on it, and Russia are pushing it very hard. They’re actually trying to make their super tank autonomous at the moment.
Of course, from military robots it’s spreading into the civilian world, and that’s an even bigger concern. So we’ve been talking to the Human Rights Council at the UN about that, because police have been using robots for some time. And it’s not all bad, of course, there’s some very good uses. Bomb disposal where I’m from, in Northern Ireland, was very useful. But also, SWAT teams in the United States have been using robots for some time. They are bomb disposal robots, but they’re using them, for instance, in a hostage situation. They can send a phone in, rather than send an officer in and risk their lives. They can send in pizzas. They can film what’s going on in there.
And quite often, people will surrender when a robot comes in, having stood off the police. They just see a robot and think, ‘Oh, my god, I’ll put my hands up’ and leave. And then you’ve got drones, which we’re using extensively in this country - little ones - about this size. I’ve flown them. They’re really sophisticated. And you just press a button saying ‘Take Off,’ and you point on the map where you want it to go, make a little circle with your finger, and it will hover round there. And again, our police say that it could be used for dangerous crime, for terrorism, for finding lost children - all these sob stories - and they’re right.
And also, they talk about protecting our borders. But when The Guardian, for instance, made a Freedom of Information request to get the transcripts from a meeting between four of our police forces, the Home Office and BAE Systems, they found that they really had a long list of things like flyposting, fly-tipping and anti-social behaviour. So it’s a fishing expedition. And the one thing that I push to the police all the time in debates is: let’s make sure that it doesn’t intrude on everybody’s privacy, by requiring a warrant signed by a judge. So you go to a judge for each use and sign it out, like we do for weapons, and use it for that case-by-case purpose.
There’s a company in the United States called Chaos Moon [sic: Chaotic Moon], who are developing robots that are fully autonomous. One hovers around your property, and if someone comes in it says ‘Go away,’ and if they don’t go away it tasers them and keeps them there with the alarm going off until someone comes. Which of course - tasers can be lethal; they are to a lot of people. They demonstrated this on the office intern, but on the video they show him getting a full medical examination first. And of course, anybody coming onto your property won’t have had one, so if I have a bad heart, I’ve had it.
There’s a company in South Africa called Desert Storm [sic: Desert Wolf], who made them for 25 mining companies, and they fire pepper spray and plastic balls. And that’s for breaking up protests by the miners. It was so popular that they’ve had to open two new factories, one in Oman and one in Brazil. So they’re selling them abroad to law enforcement agencies - they won’t say who. We know that China and Turkey have bought them. North Carolina, in the United States, passed a bill in September saying they would allow the police to arm their drones with tasers and fire plastic bullets. So it’s coming, and it could come here.
If, for instance, we have a situation like Northern Ireland again, they’d be so useful that they would be used all the time. So, the thing I want to stop is them being armed and used against the population, because less-than-lethal weapons are actually lethal. They’re weapons.

We’ve talked about the importance of designing a robot to make responsible decisions, but what about making responsible decisions when designing robots?

In this video, Professor Noel Sharkey talks about some important ethical considerations for developing autonomous robots.

Noel is Emeritus Professor of Robotics and Artificial Intelligence at The University of Sheffield, co-founder of the International Committee for Robot Arms Control and co-founder of Responsible Robotics.


How do you think we can ensure a safe and ethical future with robots?
This article is from the free online course Building a Future with Robots, created by FutureLearn.
