Deep Learning Basics: Part 1

14.5
Deep learning basics. Today we hear about deep learning everywhere, so let's see what deep learning is. Deep learning is based on artificial neural networks, which are inspired by biological neural networks. Here is a picture of a neuron. A neuron is a cell that carries electrical impulses.
35.6
Each neuron consists of three parts: a cell body, dendrites, and a single axon. Dendrites are the branches of a neuron that receive signals from other neurons and pass them into the cell body. The axon, which can be over a meter long in humans, passes electrical signals on to other neurons. If the signals received are strong enough to reach the action potential, the neuron is activated. Inspired by the biological neuron, Frank Rosenblatt developed the first prototype of an artificial neuron, the perceptron, in 1957. The perceptron uses a weighted sum to model the dendrites and a threshold to model the action potential. Computers were scarce in those days, so Dr. Rosenblatt designed a hardware device to implement the function.
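
To make this concrete, here is a minimal sketch of a perceptron in Python (the weights and threshold are hand-picked for illustration, not Rosenblatt's original values): inputs are combined in a weighted sum, and the neuron fires only if that sum reaches the threshold.

```python
# A minimal perceptron: a weighted sum of inputs plus a hard threshold.
# The weights and threshold below are illustrative, hand-picked values.

def perceptron(inputs, weights, threshold):
    """Return 1 if the weighted sum reaches the threshold, else 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# Example: a 2-input perceptron wired to compute logical AND.
weights, threshold = [1.0, 1.0], 1.5
for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", perceptron(pair, weights, threshold))  # fires only for (1, 1)
```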
89.2
Although the idea of the perceptron is very similar to the neurons in today's deep learning, Rosenblatt did not develop a mechanism to train multi-layer neural networks. In 1969, Marvin Minsky, co-founder of the MIT AI Lab, published a book called Perceptrons and concluded that neural networks were a dead end. He argued that a perceptron cannot learn even the simple Boolean function XOR, because XOR is not linearly separable. This publication contributed to the first AI winter, and neural networks were widely rejected by major machine learning conferences. The winter for neural networks continued for more than a decade. The hero who came to the rescue was Geoffrey Hinton, who showed that XOR can be learnt by multi-layer perceptrons trained with backpropagation.
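
Minsky's objection, and the multi-layer fix, can both be seen in a few lines. No single perceptron can compute XOR, because no single line separates its outputs, but a hand-wired two-layer network can, using XOR(a, b) = AND(OR(a, b), NAND(a, b)). The weights below are chosen by hand for illustration, not learned by backpropagation.

```python
# XOR from two layers of threshold units (hand-wired, illustrative weights).

def step(weighted_sum, threshold):
    return 1 if weighted_sum >= threshold else 0

def xor(a, b):
    h_or = step(a + b, 0.5)          # hidden unit 1: OR(a, b)
    h_nand = step(-a - b, -1.5)      # hidden unit 2: NAND(a, b)
    return step(h_or + h_nand, 1.5)  # output unit: AND of the two

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), "->", xor(a, b))  # prints 0, 1, 1, 0
```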
148
Although the idea had been conceived by other researchers before, it was Hinton's paper that clearly addressed the problems raised by Minsky. How does backpropagation work? Let me first introduce how a neural network works. As we know,
165.4
there are two stages in machine learning algorithms: training and inference. For inference, we make predictions by computing activations with the learned parameters, starting from the input layer. This process is called the forward pass. During training, the predicted output is compared with the label to calculate the error. Then we propagate the error back to the neurons and adjust the weights. This process is called the backward pass. So what is the magical math formula used for backpropagation?
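
Before answering that, here is the two-pass loop in miniature for a single linear neuron with a squared-error loss (all numbers are made up for illustration):

```python
# Forward and backward pass for one linear neuron, y_pred = w * x,
# with loss 0.5 * (y_pred - label)**2 (illustrative numbers throughout).

x, label = 2.0, 10.0   # one training example
w, lr = 1.0, 0.1       # initial weight and learning rate

for step in range(5):
    y_pred = w * x              # forward pass: compute the prediction
    error = y_pred - label      # compare prediction with the label
    grad = error * x            # backward pass: dLoss/dw
    w -= lr * grad              # adjust the weight against the gradient
    print(f"step {step}: prediction = {y_pred:.2f}, w = {w:.2f}")
```

After a few steps, w approaches 5.0, where the prediction matches the label.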
201.6
It turns out to be age-old calculus: the chain rule. Of course, running backpropagation on modern neural networks requires extra machinery, such as building computational graphs. Fortunately, open-source deep learning frameworks like Caffe, TensorFlow, PyTorch, and CNTK do this work for us, so we don't need to worry about the details. Gradient descent is the most widely used learning algorithm. It is a first-order iterative optimization algorithm for minimizing a function. To find a local minimum of the loss function, we take steps proportional to the negative of the gradient at the current point. The procedure is similar to finding the deepest point in a valley.
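
As a sketch of the valley analogy, here is gradient descent on a simple convex function, L(w) = (w - 3)^2 (an illustrative stand-in for a real loss); the update rule is w ← w − η · L′(w):

```python
# Gradient descent on L(w) = (w - 3)**2, whose minimum is at w = 3.
# The gradient is L'(w) = 2 * (w - 3).

def grad(w):
    return 2 * (w - 3)

w, eta = 0.0, 0.1        # starting point and learning rate
for _ in range(25):
    w -= eta * grad(w)   # step proportional to the NEGATIVE gradient
print(round(w, 3))       # about 3.0: the bottom of the "valley"
```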
254
On the other hand, the algorithm that finds a local maximum of a function by following the positive gradient is called gradient ascent. In this example, the cost function is simple and the global minimum can easily be found; such a function is called convex. However, in complex, high-dimensional spaces, gradient descent is not guaranteed to find the global minimum. The good news is that researchers have found many local minima that are almost as good as the global minimum, so we do not necessarily need to search for the global minimum.
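
Gradient ascent is the same sketch with the sign flipped: maximizing f(w) = −(w − 3)^2 by stepping along the positive gradient climbs to the same point the descent above settled into.

```python
# Gradient ascent on f(w) = -(w - 3)**2, whose maximum is at w = 3.
# f'(w) = -2 * (w - 3); ascent steps WITH the gradient, not against it.

def grad(w):
    return -2 * (w - 3)

w, eta = 0.0, 0.1
for _ in range(25):
    w += eta * grad(w)   # note the +=: climb the gradient
print(round(w, 3))       # about 3.0: the top of the "hill"
```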

In this video, Prof. Lai explains the concept of deep learning.

Deep learning is based on artificial neural networks, which are inspired by biological neural networks. Prof. Lai first introduces the historical background of deep learning.

Then he explains several topics: How does backpropagation work? What is the chain rule? And what is gradient descent?

This article is from the free online course Applications of AI Technology.
