
End-to-end autonomous driving

What rules do self-driving cars need in order to navigate in highly complex real-world road environments? Watch Dr Will Smith explain more.
6.7
We’ve seen some examples of using simple rules to navigate in very simple environments. But what about in the highly complex real-world road environments that a self-driving car must navigate? Here, we must deal with other moving vehicles, pedestrians, bikes, multiple lanes, road signs, traffic lights and so on. Performance must be reliable whether it’s day or night, sunny or rainy, foggy or snowy. Trying to deal with such complexity using hand-engineered rules quickly becomes impossible. We’ve already seen how deep convolutional neural networks can be used to predict a value or class from an input image.
43.6
Because we learn the mapping from image to desired output using real data, our system learns robustness to the sorts of clutter, noise and complexity that we encounter in the real world. We’ll now see how this approach can be powerful enough to provide basic self-driving capability in real-world environments. The idea is actually very simple. We feed into our network frames from a video camera that provides a driver’s-eye view of the road. The network will learn to predict the correct steering controls for the current situation. This might be as simple as the angle through which the steering wheel should be rotated and how much the brake or accelerator should be applied.
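As a concrete illustration, here is a minimal sketch of such a network in PyTorch (an assumption; the video does not prescribe a framework or architecture). The layer sizes loosely follow NVIDIA's PilotNet and are illustrative only; the two outputs are the steering angle and a combined accelerator/brake value.

import torch
import torch.nn as nn

class DrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers extract visual features from the camera frame
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        # Fully connected head regresses the two driving controls
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 2),   # outputs: [steering angle, accelerator/brake]
        )

    def forward(self, x):
        # x: batch of camera frames, shape (N, 3, H, W), e.g. 66x200 as in PilotNet
        return self.head(self.features(x))

Calling DrivingNet()(torch.rand(1, 3, 66, 200)) returns a 1x2 tensor of controls; nn.LazyLinear infers the flattened feature size on the first forward pass, so the sketch works for any reasonable input resolution.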
84.1
Then, to drive, we simply connect the neural network to the camera and driving control interface and allow it to map the video camera feed into steering controls. We call this end-to-end autonomous driving, since the neural network is learning the entire process, from one end (input images) to the other end (driving controls), with no hand-engineered systems in between.
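In code, the driving loop is just a forward pass per frame. The sketch below assumes the DrivingNet model above; camera.read_frame and car.apply_controls are hypothetical placeholders standing in for the real camera and vehicle interfaces, not an actual API.

import torch

@torch.no_grad()  # inference only; no gradients needed while driving
def drive(model, camera, car):
    model.eval()
    while True:
        frame = camera.read_frame()                 # HxWx3 uint8 image (hypothetical API)
        x = torch.from_numpy(frame).float().permute(2, 0, 1) / 255.0
        steering, accel = model(x.unsqueeze(0))[0]  # map one frame to two controls
        car.apply_controls(steering.item(), accel.item())  # hypothetical API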
110.4
The first and most obvious question is: where do we get the training data to train such a system? Actually, this is, in principle, straightforward. We equip a car with a camera and some sensors to measure the state of the steering controls. Then, we have humans drive the car and store images along with the driving controls that the human applied when each image was taken. These form training data pairs for our system. Surprisingly, such a simple approach actually works, at least to a limited extent. A car trained like this can stay in its lane, take corners, slow down at intersections and cope with a fairly wide range of environmental conditions. However, it cannot provide reliable driving over longer time periods.
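Training on those (image, controls) pairs is then ordinary supervised regression. A minimal sketch, again assuming PyTorch; DrivingDataset is a hypothetical Dataset that yields a frame tensor together with the [steering, accelerator] values the human applied at that moment.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model, dataset, epochs=10):
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()                  # regress towards the human's controls
    for _ in range(epochs):
        for frames, controls in loader:
            pred = model(frames)            # predicted [steering, accel] per frame
            loss = loss_fn(pred, controls)  # penalise deviation from the human driver
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()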
153.8
There are many limitations to a fully end-to-end approach. A single camera may not provide enough information about the environment, and processing each frame independently can lead to an erratic driving style. Finally, the system has no goal: we have not provided a destination, and the system cannot form a route plan using a map.
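To see why frame-independent predictions feel erratic, note that each steering output is free to jump arbitrarily from one frame to the next. One common mitigation (not something the video prescribes) is to smooth successive predictions, for example with an exponential moving average:

def smooth(predictions, alpha=0.2):
    """Exponential moving average over a stream of steering predictions."""
    state = None
    for p in predictions:
        # Blend each new prediction with the running state; smaller alpha = smoother
        state = p if state is None else alpha * p + (1 - alpha) * state
        yield state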

This article is from the free online course Intelligent Systems: An Introduction to Deep Learning and Autonomous Systems, created by FutureLearn.
