The deep learning renaissance

Ian Witten discusses the striking recent success of deep networks of multilayer perceptrons on various large-scale learning tasks.

In the last couple of videos I’ve been a bit negative about perceptrons and multilayer perceptrons – and the preceding quiz hasn’t exactly made a good case for them!

But …

In recent years, so-called “deep learning” approaches to machine learning have been successfully applied to computer vision, natural language processing, and speech recognition. Deep learning involves neural networks with many layers. Ideally, the lower layers learn low-level features (like lines and edges in an image), intermediate layers learn how to put them together (like arms and legs, noses and tails, wheels and faces), and upper layers learn how to combine these into objects (is it a car? a cat? a cow?). The key is to use “high-capacity” models – that is, ones with many parameters – arranged in several layers. Multilayer neural networks are a prime example.
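
To make the layered picture concrete, here is a minimal sketch – plain Java with made-up layer sizes and random weights, not tied to any particular toolkit – of a forward pass through a stack of fully connected layers, each feeding its output to the next:

```java
import java.util.Random;

// Minimal sketch of a multilayer ("deep") network's forward pass.
// Each layer multiplies its input by a weight matrix and applies a
// nonlinearity; stacking many such layers gives the model its depth.
public class TinyDeepNet {
    static double[] layer(double[] in, double[][] w) {
        double[] out = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            double sum = 0;
            for (int j = 0; j < in.length; j++) sum += w[i][j] * in[j];
            out[i] = Math.max(0, sum);           // ReLU nonlinearity
        }
        return out;
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        int[] sizes = {784, 128, 64, 10};        // hypothetical layer widths
        double[] x = new double[sizes[0]];       // e.g. a flattened image
        for (int i = 0; i < x.length; i++) x[i] = rnd.nextDouble();

        for (int l = 1; l < sizes.length; l++) { // pass through each layer in turn
            double[][] w = new double[sizes[l]][sizes[l - 1]];
            for (double[] row : w)
                for (int j = 0; j < row.length; j++) row[j] = rnd.nextGaussian() * 0.01;
            x = layer(x, w);
        }
        System.out.println("Output units: " + x.length);
    }
}
```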

A critical ingredient is the use of far larger quantities of data than have previously been available. In former times, data mining researchers paid scant attention to images and speech signals. Of course, the world abounds with signal data – but it is generally unlabeled. Recently, large collections of labeled data have been created, stimulating the application of deep learning techniques to tasks such as image recognition, face verification, speech recognition, and language modeling.

Images

The Large Scale Visual Recognition Challenge is to classify images obtained from sources such as Flickr into 1000 object categories. The training set contains 1.2M images, which were hand-labeled(!) according to the presence or absence of an object belonging to these categories. A random subset of 50,000 images (50 per class) was used as the validation set and 100,000 images (100 per class) as the test set. The test images are hidden from participants in the challenge and available only to the judges.

The percentage of times the target label is not one of the 5 highest-probability predictions is called the “top-5 error”. Classical machine learning methods struggle to get below 25% – far worse than human performance, which has been measured at 5.1% for the same test set. The 2015 challenge was won by a team from Microsoft Research Asia who achieved 3.6%, substantially outperforming humans. They used a multilayer architecture with 152 layers! (Don’t try setting this up in Weka.)
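For the curious, the top-5 error itself is simple to compute once a model produces class probabilities. Here is a minimal sketch in plain Java (toy scores, hypothetical class count) that checks whether the true label appears among the five highest-scoring predictions:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch: count a prediction as correct if the true class index is among
// the 5 classes with the highest predicted probability ("top-5").
public class TopFiveError {
    static boolean inTopFive(double[] probs, int trueClass) {
        Integer[] idx = new Integer[probs.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(i -> -probs[i])); // sort by descending probability
        for (int k = 0; k < 5 && k < idx.length; k++)
            if (idx[k] == trueClass) return true;
        return false;
    }

    public static void main(String[] args) {
        double[] probs = {0.02, 0.40, 0.05, 0.30, 0.10, 0.08, 0.05}; // toy scores
        System.out.println(inTopFive(probs, 3));  // true: class 3 ranks second
    }
}
```

The top-5 error over a test set is then simply the fraction of instances for which this check fails.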

Faces

An important special case of object recognition, face recognition has been the subject of intense research for decades—and deep learning networks have transformed the field. If you have used photo apps that identify faces, you have probably been surprised (dismayed?) at how good they are. Controlled experiments on face verification, where the question is whether two mugshots belong to the same person or not, have yielded better-than-human performance.

More data gives even better results. Every time one of its 2 billion users uploads a photo to Facebook and tags someone, that user is helping to train the face recognition algorithm – which, according to the company, identifies a person correctly 98% of the time. (According to the FBI, its own face recognition technology places the correct person in a list of the top 50 candidates only 85% of the time.)

It is worth noting that this raises controversial ethical issues (apart from the fact that every user works for Facebook, for free). Federal governments deploy face verification technology in the fight against international terrorism; airports use it to reduce lineups at immigration. The application of face recognition in widespread video surveillance has a profound effect on the balance between security and privacy, and other civil liberties. At the individual level, stalkers exploit end-user web services for face recognition.

Language

Researchers at Google have created language models based on single-hidden-layer networks trained on vast amounts of data – 30 billion words. One model trains a neural network to predict an upcoming word given its context. Another predicts nearby words within a certain distance before and after the source word. Here the number of “classes” equals the vocabulary size, which ranges from 10^5 to 10^9 terms, so the output layer is decomposed into a binary tree – for a V-word vocabulary it is then necessary to evaluate only log2(V) rather than V output nodes.
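
The binary-tree trick works roughly as follows: each word occupies a leaf of a (roughly balanced) binary tree, and its probability is the product of binary decisions taken along the path from the root to that leaf, so only about log2(V) internal nodes need be evaluated rather than V outputs. Here is a toy sketch in plain Java – a complete binary tree and random weights, purely for illustration:

```java
import java.util.Random;

// Sketch of hierarchical output decomposition: the probability of a word is
// the product of sigmoid decisions along the root-to-leaf path of a binary
// tree over the vocabulary, so only ~log2(V) internal nodes are evaluated.
public class HierarchicalSoftmaxSketch {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // Probability of the word at leaf 'word' in a complete binary tree with
    // V leaves, given a context vector and per-internal-node weight vectors.
    static double wordProbability(int word, int V, double[] context, double[][] nodeWeights) {
        double p = 1.0;
        int node = word + V;               // leaves occupy heap indices V..2V-1
        while (node > 1) {                 // walk up to the root (index 1)
            int parent = node / 2;
            double z = dot(context, nodeWeights[parent]);
            // a left child takes sigmoid(z), a right child takes 1 - sigmoid(z)
            p *= (node % 2 == 0) ? sigmoid(z) : 1.0 - sigmoid(z);
            node = parent;
        }
        return p;
    }

    public static void main(String[] args) {
        int V = 8, dim = 5;                // toy vocabulary and embedding size
        Random rnd = new Random(1);
        double[] context = new double[dim];
        double[][] nodeWeights = new double[2 * V][dim];
        for (int i = 0; i < dim; i++) context[i] = rnd.nextGaussian();
        for (double[] w : nodeWeights)
            for (int i = 0; i < dim; i++) w[i] = rnd.nextGaussian();

        double total = 0;
        for (int w = 0; w < V; w++) total += wordProbability(w, V, context, nodeWeights);
        System.out.println("Probabilities sum to " + total);  // ~1.0 by construction
    }
}
```

Because every internal node splits its probability mass between its two children, the leaf probabilities sum to one by construction.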

Many research groups are mining massive quantities of text data in order to learn as much as possible from scratch, replacing features that have previously been hand engineered by ones that are learned automatically. Large neural networks are being applied to tasks ranging from sentiment classification and translation to dialog and question answering. Google uses deep learning techniques to learn how to translate languages from scratch, based on voluminous data.

Deep learning with Weka

It’s not feasible to do these things with Weka’s multilayer perceptron classifier. But there is

… good news ☺

The wekaDeeplearning4j package provides Weka classifiers and filters that invoke the popular Deeplearning4j Java library. It allows you to do advanced image recognition with deep networks involving “convolutional” layers, and to experiment with language models like those mentioned above. It even allows you to deploy your network on specialist hardware, namely graphics processing units (GPUs), which can yield execution speeds that are orders of magnitude faster than standard implementations.
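
As a rough illustration of what this looks like in code, the sketch below cross-validates such a classifier through Weka’s ordinary Java API. The class name Dl4jMlpClassifier (used here with default options) is my reading of the package’s documentation rather than something shown in this article, and iris.arff is merely a placeholder dataset:

```java
import java.util.Random;
import weka.classifiers.AbstractClassifier;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Sketch: cross-validate a deep-learning classifier from the
// wekaDeeplearning4j package using Weka's standard Java API.
// The classifier class name below is an assumption based on the package
// documentation, and "iris.arff" is just a placeholder dataset.
public class WekaDeepLearningSketch {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Instantiate the package's classifier by name, with default options.
        Classifier deep = AbstractClassifier.forName(
                "weka.classifiers.functions.Dl4jMlpClassifier", new String[0]);

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(deep, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```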

… and bad news ☹

Although no programming is needed to use the classifiers and filters, they have many configuration options. You need some understanding of deep learning, which involves specialist (difficult!) mathematics, and a great deal of trial and error in order to get good results. To introduce the subject of deep learning and show how to do it with Weka would take a whole course. Deep learning with Weka? Maybe.

Want to learn more?

I hate to self-advertise, but a recent book by my friends and me includes copious material on deep learning. Although pretty mathematical, by the standards of the field it’s a fairly accessible explanation of the area: Data Mining: Practical Machine Learning Tools and Techniques (4th edition), by Ian Witten, Eibe Frank, Mark Hall, and Christopher Pal.

Note: this book is not required reading for this course.

This article is from the free online course More Data Mining with Weka.
