Skip to 0 minutes and 1 second
Right. So now we're going to look at some example exercises for neural networks. In this course we're going to be looking at very simple neural networks, networks with a single hidden layer. We're not going to be looking at deep learning networks, but in the articles we give you all the theory you need to understand deep learning in detail. Now, the implementation we're going to be using in these exercises is a very simple neural network library called nnet. Let's have a look at this now in the code. The model constructor is the function nnet in the nnet library.
Skip to 0 minutes and 46 seconds
And we're going to be using a series of models here. The formula gives us the target variable, the variable we want to estimate, and then the feature variables, the variables we will use to estimate it. Here we've only got one, wind, but we could have others, wind plus rainfall, for example. We just have a single input feature in this example. We give the dataset where these variables, ozone and wind, can be found. Size specifies the number of hidden nodes. Now, there's only a single hidden layer; this simple neural network implementation only gives you the option of having a single hidden layer.
Skip to 1 minute and 35 seconds
So the only thing you can specify is how many nodes are in it. And because we're going to be doing regression problems, we set linout = TRUE. When we do classification problems, we would set linout = FALSE.
Skip to 1 minute and 55 seconds
So let's run this.
Skip to 2 minutes and 5 seconds
Here we go. This is the sort of output we're going to see. This is, of course, the gradient descent optimisation problem being solved, trying to find the best values for the weights in the neural network to minimise the loss function. And you see that the gradient descent converged to a local optimum after 40-something iterations. The predict function for this neural network implementation is just the basic predict: you give it the neural network model, and then you give the data that you want to use. That data, of course, needs to have columns that correspond to the feature variables that you specified in the formula when creating the model; here, wind. And in this case, we've built our model.
Skip to 3 minutes and 0 seconds
We can now see what it would do. It says, OK, if we had a wind of one, then this model expects we would have an ozone amount of 121. Wind level of 10, ozone level of 31. Wind level seven, ozone level 54. Wind level 16, ozone level 22.
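The workflow discussed so far can be sketched as follows. Note that the use of the built-in airquality dataset and the choice of 3 hidden nodes are assumptions for illustration; the exercise file may use different data and settings.

```r
# Minimal sketch of fitting and predicting with nnet (assumed data/settings)
library(nnet)

# Drop rows with missing values before fitting
aq <- na.omit(airquality[, c("Ozone", "Wind")])

# Formula: target ~ features; size = number of hidden nodes;
# linout = TRUE because this is a regression problem
model <- nnet(Ozone ~ Wind, data = aq, size = 3, linout = TRUE)

# New data must contain columns matching the feature variables in the formula
predict(model, data.frame(Wind = c(1, 10, 7, 16)))
```

The exact predictions will vary from run to run, for the reasons discussed next.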
Skip to 3 minutes and 22 seconds
Now, we know that when we start the training algorithm for neural networks, there's an element of nondeterminism. The initial weight values are randomly specified. So if you run the same function twice to create a neural network, say we run this again, we may well come up with a slightly different model, because our initial weights would have been different. They would have been randomly generated to be small values close to zero, but they would have been different, and they would have given us a different starting position on the loss surface that is then optimised by the gradient descent algorithm. So if we run this again, we see we get slightly different values.
Skip to 4 minutes and 22 seconds
Last time, for example, for a wind value of 16, we got an estimate of 22 ozone. Now we get an estimate of 23.
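If you need reproducible results despite this nondeterminism, one standard approach is to fix R's random number generator seed before training. This is a general R facility, not something specific to nnet; the data and settings below are assumed for illustration.

```r
library(nnet)
aq <- na.omit(airquality[, c("Ozone", "Wind")])  # assumed example data

# Fixing the RNG seed makes the random initial weights, and hence the
# fitted model, identical across runs.
set.seed(42)
m1 <- nnet(Ozone ~ Wind, data = aq, size = 3, linout = TRUE)
set.seed(42)
m2 <- nnet(Ozone ~ Wind, data = aq, size = 3, linout = TRUE)

all.equal(m1$wts, m2$wts)  # the two runs produce the same weights
```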
Skip to 4 minutes and 36 seconds
For a bit of fun, we can actually replicate the entire inner workings of such a simple neural network. The activation function in the nnet package is the sigmoid function, so we'll just create that function. We then have to look at the model to find the weights that were found. Here we go: these are the model weights. This is the entire summary output; if you were to read the documentation of the nnet package, you would be able to understand what this means. But these are the weight values.
Skip to 5 minutes and 11 seconds
And so I can actually reproduce entirely what the neural network is doing just using manual sigmoid functions and the weight values of the network we trained. So we see my manual implementation: a wind value of 17 gives an estimate of 23.08. Likewise, if we were to use our model with a wind input of 17, we'd get exactly the same answer. So it's possible to entirely replicate what's going on with simple sigmoid functions and the same weights, lovely. But of course, the hard part is not doing that. The hard part is training the weights, learning the weights from the training data using gradient descent, using backpropagation to calculate all the partial derivatives.
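A sketch of that manual replication, assuming a two-hidden-node network fitted on the airquality data (the video's network may differ in size). The ordering of model$wts used here, each hidden unit's bias then its input weight, followed by the output unit's bias and hidden-to-output weights, is how nnet stores the weights of a single-hidden-layer network.

```r
library(nnet)
aq <- na.omit(airquality[, c("Ozone", "Wind")])  # assumed example data

set.seed(1)
model <- nnet(Ozone ~ Wind, data = aq, size = 2, linout = TRUE)

# The logistic (sigmoid) activation used in nnet's hidden layer
sigmoid <- function(x) 1 / (1 + exp(-x))

# Manual forward pass for a single Wind value, using the fitted weights.
# Weight order for a 1-2-1 net: b->h1, i1->h1, b->h2, i1->h2, b->o, h1->o, h2->o
manual_predict <- function(wts, wind) {
  h1 <- sigmoid(wts[1] + wts[2] * wind)
  h2 <- sigmoid(wts[3] + wts[4] * wind)
  wts[5] + wts[6] * h1 + wts[7] * h2  # linear output, since linout = TRUE
}

manual_predict(model$wts, 17)
predict(model, data.frame(Wind = 17))  # matches the manual forward pass
```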
Skip to 6 minutes and 0 seconds
And while I could do that, it would take a lot longer than the few seconds that replicating the output did. So we're not going to do it. Let's move on to the actual sample exercises for artificial neural networks.
Simple ANNs in R
An overview of using simple ANNs in R as an introduction for the following two exercises. The associated code is in the Tidbits Ann.R file.
We will be using the simple nnet package, which allows us to easily create shallow neural network models (where shallow means having a single hidden layer). We look at the basic syntax for creating such models and for using them to predict values of target variables for new data. The process involved is discussed so as to tie what we are seeing to the theory we have covered.
For fun, we also replicate the inner workings of the learnt model using manual sigmoid functions, a linear output, and the fitted weights.
Note that the nnet R package is used in this exercise. You will need to have it installed on your system. You can install packages using the install.packages function in R.
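For example:

```r
# One-time installation of the nnet package, then load it in each session
install.packages("nnet")
library(nnet)
```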
© Dr Michael Ashcroft