Multilayer perceptrons

Multilayer perceptrons are networks of perceptrons. Ian Witten explains how they can implement arbitrary decision boundaries using "hidden layers".
In the last lesson, we looked at the basic Perceptron algorithm, and now we’re going to look at the Multilayer Perceptron. Multilayer Perceptrons are simply networks of Perceptrons, networks of linear classifiers. They have an input layer, some hidden layers perhaps, and an output layer. If we just look at the picture on the lower left, the green nodes are input nodes. This is actually for the numeric weather data. Although you probably can’t read the labels, the top one is “outlook=sunny”; underneath is “outlook=overcast”; then “outlook=rainy”; and then we have “temperature”, “humidity” and “windy” for the remaining nodes. This is the numeric weather data, so “outlook” is the only nominal variable, and that’s been made into three binary attributes.
These two [yellow] nodes are the output nodes, for “play” and “don’t play” respectively. Each of those two yellow nodes performs a weighted sum, and each of the connections has a weight. If we look at the more complicated picture to the right, we’ve got some red nodes here. These are three hidden layers, with a different number of neurons/nodes in each. Each node performs a weighted sum of its inputs and thresholds the result, just like in the regular, basic Perceptron. But in the basic Perceptron, you looked to see whether the result was greater than zero or less than zero. In Multilayer Perceptrons, instead of using that hard-edged function, people use what’s called a “sigmoid” function.
I’ve drawn a few sigmoid functions on the slide up in the top right. You can see that as they become more extreme, they approach the step function, which corresponds to the hard-edged threshold used in the basic Perceptron. But here we’re going to use a smooth, continuous sigmoid function. Actually, there is a theoretical property that the network will converge if the sigmoid function is differentiable. That’s kind of important. Anyway, that’s by the by. These nodes are often called “neurons”, the red nodes and the yellow nodes. These are not to be confused with the neurons that you have in your head. The big questions are how many layers, and how many nodes in each?
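To make the difference concrete, here’s a minimal sketch of the two activation functions; the `steepness` parameter is just for illustration (the standard logistic sigmoid has steepness 1), and shows how the sigmoid approaches the step function as it becomes more extreme:

```python
import math

def step(x):
    # Hard-edged threshold used by the basic Perceptron.
    return 1.0 if x > 0 else 0.0

def sigmoid(x, steepness=1.0):
    # Smooth, differentiable alternative; as steepness grows,
    # the curve approaches the step function.
    return 1.0 / (1.0 + math.exp(-steepness * x))

for s in (1, 5, 50):
    print(sigmoid(0.5, s))  # creeps toward step(0.5) == 1.0
```

The smoothness is what matters: the sigmoid has a well-defined derivative everywhere, which gradient-based training relies on, whereas the step function does not.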
We know for the input layer, we’re going to have one for each attribute, and the attributes are numeric or binary. For the output layer, we’re going to have one for each class. How many hidden layers? Well, that’s up to you. If you have zero hidden layers, that’s the standard Perceptron algorithm. That’s suitable if the data is linearly separable.
There are theoretical results: with one hidden layer, that’s suitable for a single, convex region of the decision space; two hidden layers are enough to generate arbitrary decision boundaries. However, people don’t necessarily use two hidden layers, because that really increases the number of connections – that’s the number of weights that would have to be learned.
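To see why hidden layers matter, consider XOR, which no single perceptron can represent because it isn’t linearly separable, but which one hidden layer of threshold units handles easily. The weights below are hand-chosen for illustration, not learned:

```python
def step(x):
    return 1 if x > 0 else 0

def xor_mlp(x1, x2):
    # Hidden layer: two threshold units with hand-chosen weights.
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)   # fires for x1 OR x2
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)   # fires for x1 AND x2
    # Output: OR but not AND, i.e. exclusive-or.
    return step(1.0 * h1 - 1.0 * h2 - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_mlp(a, b))
```

In practice the weights are learned rather than hand-wired, but the structure is the point: the hidden units carve the input space into pieces that the output unit can then combine.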
The next big question is: how big should the hidden layers be? Their sizes are usually chosen somewhere between the sizes of the input and output layers. A common heuristic – Weka’s default – is to use the mean of the number of input and output nodes. What are these weights? Well, they’re learned. They’re learned from the training set, by iteratively minimizing the error using the steepest descent method, with the gradient determined by the backpropagation algorithm. We’re not going to talk about backpropagation here. The change in weight is computed by multiplying the gradient by a constant called the “learning rate” and adding the previous change in weight multiplied by another parameter called “momentum”.
So Wnext (the next weight vector) is W + ΔW, where ΔW is minus the learning rate times the gradient (minus because we want to go downhill) plus momentum times the previous weight change. Multilayer Perceptrons can get excellent results, but they often involve a lot of experimentation with the number and size of the hidden layers and the value of the learning rate and momentum parameters. Let’s take a look in Weka. I’m going to use the numeric weather data. Over here, I’ve got it open. I’m going to go to Classify and find MultilayerPerceptron in the functions category. Here it is, and let’s just run it. We get 79%. I want to show you the network we used.
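The update rule just stated can be sketched like this (0.3 and 0.2 are Weka’s default learning rate and momentum; the single-weight example is purely illustrative):

```python
def update_weights(weights, gradients, prev_deltas,
                   learning_rate=0.3, momentum=0.2):
    # Delta-W = -(learning rate) * gradient + momentum * previous delta;
    # the minus sign moves each weight downhill on the error surface.
    new_weights, new_deltas = [], []
    for w, g, prev in zip(weights, gradients, prev_deltas):
        delta = -learning_rate * g + momentum * prev
        new_weights.append(w + delta)
        new_deltas.append(delta)
    return new_weights, new_deltas

# One weight 0.5, gradient 2.0, previous delta 0.1:
w, d = update_weights([0.5], [2.0], [0.1])
print(w, d)  # delta ≈ -0.58, updated weight ≈ -0.08
```

The momentum term keeps the update moving in roughly the same direction as the previous step, which smooths out oscillations in steepest descent.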
Let me just switch on the GUI, the graphical user interface. Now when I run it, I get a picture of the network. That is Weka’s default network. These are the input nodes that we looked at before, the green ones. Weka has chosen 4 neurons for the hidden layer – the average of the number of input and output nodes. There are 2 output neurons.
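Those 4 hidden neurons come straight from the heuristic mentioned earlier; a minimal sketch (assuming integer division matches Weka’s rounding):

```python
def default_hidden_size(n_inputs, n_outputs):
    # Mean of the input and output node counts (Weka's 'a' heuristic).
    return (n_inputs + n_outputs) // 2

# Numeric weather data: 6 inputs (3 binary outlook attributes plus
# temperature, humidity, windy) and 2 output classes.
print(default_hidden_size(6, 2))  # 4
```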
Going back to the slide: when I tried IBk, I also got 79% on this data set. J48 and so on did worse. However, it’s just a toy problem, so those results aren’t really indicative. On real problems Multilayer Perceptrons often do quite well, but they’re slow.
There are a number of parameters: the number of hidden layers and the size of the hidden layers; the learning rate and momentum. The algorithm makes multiple passes through the data, and training continues until the error on the validation set consistently increases – that is, we start going uphill – or the training time (the maximum number of epochs allowed) is exceeded. Going back to Weka, I’m going to configure this to use 3 hidden layers with 5, 10, and 20 neurons. Look at this! You can see the three hidden layers with 5, 10, and 20 neurons – an awful lot of weights here. We’ve got the learning rate, and we can change the momentum.
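The stopping criterion just described could be sketched like this (the `patience` threshold and the error values are illustrative assumptions, not Weka’s exact rule):

```python
def train_until_stall(validation_errors, patience=3, max_epochs=500):
    # Stop when the validation error has risen for `patience` consecutive
    # epochs ("going uphill"), or when max_epochs is reached.
    best = float("inf")
    rises = 0
    for epoch, err in enumerate(validation_errors, start=1):
        if err < best:
            best, rises = err, 0
        else:
            rises += 1
            if rises >= patience:
                return epoch          # validation error keeps climbing
        if epoch >= max_epochs:
            return epoch              # out of training time
    return len(validation_errors)

# Error bottoms out at epoch 3, then rises for 3 epochs in a row:
print(train_until_stall([0.5, 0.4, 0.3, 0.35, 0.37, 0.4, 0.45]))  # 6
```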
We’ve got the maximum number of epochs. We can just run that. Also, in Weka, you can create your own network structure. You can add new nodes, add connections, and delete nodes and so on. I’m going to go back to Weka, and I’m just going to use the default number of hidden layers. I’ve now got my 4 neurons in the 1 hidden layer. I’m going to add another hidden layer. If I click empty space, I create a neuron. It’s yellow, which means it’s selected. I’m going to deselect it by clicking empty space, and create another couple. With this one here, I’m going to connect it up to this.
If I click these, they connect the selected neuron – that is, the yellow one – to the one I click. Then I can deselect it and select this one and make connections here. You can see it’s pretty quick to add connections. I’ve added another hidden layer. Well, I need to do some things with the output here, but you can get the idea from this. We can click to select a node and right-click an empty space to deselect. We can create and delete nodes by clicking in empty space to create and right-clicking to delete. We can create and delete connections, and we can set parameters in this interface too. Are they any good?
Well, I tried the Experimenter with 6 datasets, and I used 9 algorithms. MultilayerPerceptron gave me the best results on 2 of the 6 datasets.
The other wins were: SMO won on another 2 datasets; J48 and IBk won on 1 dataset each. When I say “win”, I mean beat all the other methods. MultilayerPerceptron was not too bad, but in fact it was between 10 and 2000 times slower than other methods, which is a bit of a disadvantage. Here’s the summary. Multilayer Perceptrons can implement arbitrary decision boundaries given two or more hidden layers, providing you’ve got enough neurons in the hidden layers, and providing they’re trained properly. Training is done by backpropagation, which is an iterative algorithm based on gradient descent. In practice, you get quite good performance, but Multilayer Perceptrons are extremely slow. I’m still not a fan of Multilayer Perceptrons; I’m sorry about that.
They might be a lot more impressive on more complex datasets; I don’t know. But for me, configuring Multilayer Perceptrons involves too much messing around.

Multilayer perceptrons are networks of perceptrons, networks of linear classifiers. In fact, they can implement arbitrary decision boundaries using “hidden layers”. Weka has a graphical interface that lets you create your own network structure with as many perceptrons and connections as you like. A quick test showed that a multilayer perceptron with one hidden layer gave better results than other methods on two out of six data sets – not too bad. But it was 10–2000 times slower than other methods, which is a bit of a disadvantage.

This article is from the free online

More Data Mining with Weka

Created by
FutureLearn - Learning For Life
