This content is taken from Taipei Medical University's online course, Applications of AI Technology. Join the course to learn more.

Skip to 0 minutes and 14 seconds
The convolutional neural network (CNN) is a fundamental neural network architecture, and it has become more and more important in modern deep learning. In this class I am going to talk about several important CNN architectures and current developments in this field. The first CNN was proposed by Yann LeCun back in the 1990s. The network shown here is LeNet-5, which has two convolutional layers, two subsampling layers, two fully-connected layers, and one output layer. Later CNN architectures are largely based on this design, with different subsampling strategies, activation functions, or neuron connections. LeCun created a hand-written digit dataset as a benchmark to evaluate the performance of his convolutional neural network.
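The convolution-then-subsampling pattern that LeNet-5 introduced can be sketched in a few lines of plain Python. This is a minimal illustration, not the actual LeNet-5 (the layer sizes and the toy filter values here are made up for demonstration):

```python
# Sketch of LeNet-5's basic building blocks: a "valid" 2-D convolution
# followed by 2x2 average pooling (LeNet-5's subsampling step).
# Sizes and filter values are illustrative, not the real LeNet-5 ones.

def conv2d_valid(image, kernel):
    """2-D 'valid' convolution (cross-correlation, as in most CNN libraries)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(s)
        out.append(row)
    return out

def avg_pool2(fmap):
    """2x2 average pooling: each output value summarizes a 2x2 region."""
    return [[(fmap[i][j] + fmap[i][j + 1] + fmap[i + 1][j] + fmap[i + 1][j + 1]) / 4
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A 6x6 checkerboard test image and a toy 2x2 filter.
image = [[float((i + j) % 2) for j in range(6)] for i in range(6)]
kernel = [[1.0, 0.0], [0.0, 1.0]]

feature_map = conv2d_valid(image, kernel)   # 5x5 feature map
pooled = avg_pool2(feature_map)             # 2x2 subsampled map
```

Each convolution shrinks the spatial size slightly (6×6 → 5×5 here), and each subsampling step halves it; LeNet-5 stacks two of these conv/subsample stages before its fully-connected layers.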

Skip to 1 minute and 4 seconds
This benchmark is called MNIST. MNIST has become the “Hello World” of deep learning: almost all deep learning tutorials use MNIST as the first example to show how to build and train a CNN. Let’s see an example of recognizing digits with a CNN. Here is a figure from François Chollet’s book “Deep Learning with Python.” Mr. Chollet is the author of Keras. The figure shows the filter responses to an input image. In a CNN, the filters of the first layer capture the basic shapes of digits, such as lines or corners. The filters of the second layer capture more complicated and abstract features.
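The idea that a first-layer filter "captures a line" can be made concrete with a tiny numeric example. Below, a hand-picked vertical-edge filter (the values are hypothetical, chosen for illustration) gives a strong response on a patch containing a vertical stroke edge and zero response on a featureless patch:

```python
# Toy illustration of a first-layer CNN filter responding to a basic shape.
# The filter values below are hypothetical, Sobel-like edge weights.

def response(patch, kernel):
    """Dot product of a 3x3 patch with a 3x3 filter (one filter application)."""
    return sum(patch[i][j] * kernel[i][j] for i in range(3) for j in range(3))

vertical_edge = [[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]]

stroke = [[0, 1, 1]] * 3   # dark-to-bright transition: a vertical stroke edge
flat   = [[1, 1, 1]] * 3   # uniform patch: no structure

print(response(stroke, vertical_edge))  # 3: strong response on the edge
print(response(flat, vertical_edge))    # 0: no response on a flat region
```

During training a CNN learns filter values like these on its own; the feature maps in Chollet's figure are exactly such response maps, computed at every position of the input digit.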

Skip to 1 minute and 54 seconds
In short, the filters of the first several layers extract basic image structures, while the filters of deeper layers capture high-level features. Before talking about more CNN architectures, let me first introduce the ImageNet dataset, because ImageNet has played an important role in CNN history. ImageNet is a large-scale image database created by Prof. Fei-Fei Li and her group. There are 20,000 categories and 14 million hand-annotated images in this database. Fei-Fei Li selected 1,000 categories from ImageNet and held the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2010. The participating teams evaluated their algorithms on the given dataset and competed to achieve higher accuracy on several visual recognition tasks. ILSVRC stimulated the advancement of the technology.

Skip to 2 minutes and 57 seconds
In 2012, Geoffrey Hinton’s students joined the challenge and proposed a CNN, which is now known as AlexNet. AlexNet won the challenge by a large margin and spurred the deep learning boom. Here are the error rates of previous ILSVRC winners. The Y-axis is the error rate, so lower is better. The winners of 2010 and 2011 used algorithms based on the Support Vector Machine (SVM), which is considered shallow in contrast to deep learning. We can see that AlexNet was around 38% better than the previous shallow algorithms. In 2015, the ResNet proposed by Microsoft Research exceeded human-level accuracy. Here is the architecture of AlexNet, which consists of 5 convolutional layers and 3 fully-connected layers. AlexNet has 60 million parameters and 650,000 neurons.
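The 60-million-parameter figure can be sanity-checked with the standard parameter-count formulas. A convolutional layer with C_out filters of size K×K×C_in has (K·K·C_in + 1)·C_out parameters (the +1 is the bias), and a fully-connected layer mapping N_in units to N_out has (N_in + 1)·N_out. Using AlexNet's published first-layer shape (96 filters of 11×11 over 3 channels) as an example:

```python
# Parameter counting for CNN layers, applied to AlexNet-style shapes.

def conv_params(k, c_in, c_out):
    """Parameters in a conv layer: (K*K*C_in weights + 1 bias) per filter."""
    return (k * k * c_in + 1) * c_out

def fc_params(n_in, n_out):
    """Parameters in a fully-connected layer: one weight per input, plus bias."""
    return (n_in + 1) * n_out

print(conv_params(11, 3, 96))    # 34944: AlexNet's first conv layer
print(fc_params(4096, 4096))     # 16781312: one 4096->4096 FC layer
```

Even one 4096-to-4096 fully-connected layer holds roughly 16.8 million parameters, which shows why the fully-connected layers account for the bulk of AlexNet's 60 million total.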

Skip to 4 minutes and 10 seconds
The team used a simple but effective method called “dropout” to prevent overfitting. Dropout forces a network to generalize better by randomly dropping neurons (and their connections) during training. The model was trained on two GPUs because the RAM of a single GPU at that time was not large enough to hold all the model parameters in memory.
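A minimal sketch of how dropout works is below. This shows the modern "inverted" variant, where each activation is zeroed with probability p during training and the survivors are scaled by 1/(1−p) so the expected activation is unchanged (the original AlexNet paper instead rescaled at test time, but the effect is equivalent):

```python
import random

def dropout(activations, p, rng):
    """Inverted dropout: zero each unit with probability p, scale survivors
    by 1/(1-p) so the expected value of each activation is preserved."""
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)          # fixed seed so the sketch is reproducible
acts = [1.0] * 10               # ten unit activations, all 1.0
dropped = dropout(acts, p=0.5, rng=rng)
# Each entry is now either 0.0 (dropped) or 2.0 (kept and rescaled).
```

Because a different random subset of neurons is silenced on every training step, no single neuron can rely on specific co-activated partners, which is what pushes the network toward more robust, redundant features.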

Convolutional Neural Networks (CNNs)

So, why should a data scientist learn deep learning algorithms now? What do neural networks offer that traditional machine learning algorithms don’t?

The different types of neural networks in deep learning, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and artificial neural networks (ANNs), are changing the way we interact with the world. We already covered ANNs in the first week. Next, we will look at CNNs and RNNs in this activity.

These different types of neural networks are at the core of the deep learning revolution, powering famous applications such as unmanned aerial vehicles, self-driving cars, and speech recognition.

Prof. Lai will explain convolutional neural networks (CNNs) first. The CNN is a fundamental neural network architecture that has become more and more important in modern deep learning. He will talk about several important CNN architectures and current developments in this field. Then, Prof. Lai will introduce ImageNet, which played an important role in CNN history.
