
An introduction to safety assurance and ethics

Computer-controlled cars are almost here. Many production cars now include software features to assist human drivers: your car may be able to park itself, keep to a lane, or adjust its speed in response to traffic and road conditions. Fully autonomous driving takes this a stage further: the vision is for a vehicle to drive itself on public roads, without intervention from a human driver, interacting with other road users such as other vehicles (some with human drivers), pedestrians, cyclists and animals.

This vision is fast becoming reality. In 2020, Waymo began offering robotaxi rides to the public in Phoenix, Arizona. Tesla has said that it will offer a “full self-driving” subscription service to private car owners in 2021. At least two trials of public robotaxis have been launched in China. In May 2021, the UK Government announced legislation to allow the use of automated lane-keeping functionality in certain circumstances on UK motorways; it is expected that further autonomous functionalities will be introduced gradually.
Even if the technology works perfectly, there are several other challenges that we need to address before driverless vehicles can be accepted as a normal part of our lives. We need to think about safety - for the people in the car and for other road users. Have you ever stopped to think about what it means to say that a vehicle - or any other computer-based system - is “safe”? How can we convince the people who make decisions about whether self-driving vehicles can be allowed on the roads - lawmakers, governments, certification bodies - that the technology is fit for purpose? This is especially challenging for AI-based systems, where it is not always possible to be completely sure how the software is making decisions.
How can we convince the general public - who will be passengers in the vehicles or will share the road with them - that the vehicles are “safe” enough to trust with their lives? Safety is not the only challenge, though. Driverless vehicles are potentially a “disruptive innovation” that could have a huge effect on our society. There will certainly be some advantages. For example, older people and those with disabilities will find it easier to live independently and explore the world. However, it’s also worth thinking about the social changes that will be required for us to live alongside driverless vehicles in our towns and cities.
There are implications here not just for infrastructure and planning to support new models of transport, but for the way we work, for energy consumption, for our freedom to come and go as we please and potentially even for our minds and bodies. The decision to adopt these technologies raises ethical challenges which apply to many computer-based and AI systems. It’s something we all have to think about.


Dr Jenn Chubb talks about the safety assurance and ethics issues that we face.

This article is from the free online course Intelligent Systems: An Introduction to Deep Learning and Autonomous Systems.

Created by
FutureLearn - Learning For Life
