[0:01] What we're going to do here now is look at a couple of examples of working with learning graphs. Now, as I discussed in the article, learning graphs are quite fiddly to work with. So before we get into the exercises, we'll have a look at an idealised learning graph and how to interpret it. Then we will go through the two learning graph exercises. Again, you can try to replicate the code. I'll go through the code line by line, explaining what's being done.

[0:35] So, let's generate a nice-looking synthetic learning graph. As explained in the article, what we're seeing here is a sequence of models: the first in the sequence is trained on very little data, and each subsequent model is trained on more and more data. The blue line is their in-sample error; the red line, their out-of-sample error. Now this is what you'd like to see from a learning graph. The dashed yellow region shows the room for improvement: the distance between the two lines. And the black horizontal line is the approximate expected performance with infinite data. We don't really know where it will be, but it is probably about midway between the two lines.
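
As a rough illustration, here is a minimal R sketch of how such an idealised learning graph might be generated and plotted. The curve shapes, the asymptote value, and all variable names are assumptions made for illustration; they are not taken from the course's Tidbits Learning Graphs.R file.

    n.train <- seq(100, 10000, by = 100)            # training-set sizes
    asymptote <- 5                                  # hypothetical error with infinite data
    in.sample  <- asymptote - 40 / sqrt(n.train)    # in-sample error rises towards the asymptote
    out.sample <- asymptote + 60 / sqrt(n.train)    # out-of-sample error falls towards it
    plot(n.train, out.sample, type = "l", col = "red",
         ylim = range(in.sample, out.sample),
         xlab = "Training-set size", ylab = "Mean squared error")
    lines(n.train, in.sample, col = "blue")
    abline(h = asymptote, col = "black")            # approximate performance with infinite data
    segments(max(n.train), tail(in.sample, 1),      # dashed marker for the remaining
             max(n.train), tail(out.sample, 1),     # room for improvement at full data
             col = "goldenrod", lty = 2)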

[1:31] Now, we can also take the slope of the lines. For example, the slope of the red line at the full amount of training data is negative 0.02. So if we were to add an additional 100 instances of training data, this learning graph is saying we can probably expect to see a reduction in mean squared error of about 2. Unfortunately, though, learning graphs can often be much, much more difficult to interpret than this lovely idealised one. They can in fact jump around all over the place. The red line and the blue line can cross. We can get things like this. So this is a bad learning graph. But it's also the sort of thing you're very, very likely to see.
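
To make the arithmetic concrete: a slope of negative 0.02 (error per extra training row) times 100 extra rows gives an expected change of about negative 2 in mean squared error. One simple way to estimate that slope is to fit a straight line to the last few points of the out-of-sample curve. This sketch assumes the synthetic n.train and out.sample vectors from the example above.

    k <- 10                                         # use the last 10 points of the curve
    idx <- (length(n.train) - k + 1):length(n.train)
    fit <- lm(out.sample[idx] ~ n.train[idx])       # straight-line fit near full data
    slope <- unname(coef(fit)[2])                   # error change per extra training row
    slope * 100                                     # expected change in MSE from 100 more rows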

[2:19] It's a very noisy, difficult-to-interpret learning graph. The in-sample and out-of-sample error lines are not smooth. They're jumping around. They're even criss-crossing over each other. Now, it's very difficult to make much out of a learning graph that looks like this. Fortunately, though, such graphs typically become much easier to understand if you smooth them. Smoothing simply means taking the average of n consecutive data items.

[2:53] So if we've built a whole sequence of models from different amounts of training data, instead of plotting a point for each one of those models, we plot the average of the first six, then the average of models two to seven, then models three to eight, then models four to nine, et cetera. To get some idea of what this does to a learning graph: it goes from this sort of unintelligible thing to something much nicer. And here, we can really hope to interpret what's going on. Yes, it's still a bit odd: the in-sample error peaks and then starts to go down. But it is something we can work with.
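
Here is a minimal sketch of this sort of moving-average smoothing in base R, assuming a vector of per-model errors as above; the noise added here is purely for illustration.

    set.seed(2)
    window <- 6                                     # average each run of 6 consecutive models
    noisy <- out.sample + rnorm(length(out.sample), sd = 0.5)  # illustrative noisy curve
    smoothed <- as.numeric(stats::filter(noisy, rep(1 / window, window), sides = 1))
    # (the first window - 1 values of smoothed are NA, before the window fills up)
    plot(n.train, noisy, type = "l", col = "grey",
         xlab = "Training-set size", ylab = "Mean squared error")
    lines(n.train, smoothed, col = "red")           # the smoothed curve is far easier to read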

[3:36] We can see that there was a significant reduction in the distance between the red line and the blue line at about this size. And from there on, they stay approximately the same distance apart, though both slope downwards a little. So if you were thinking about building a model with only 10,000 rows of training data, this learning graph would say no: it would probably be much better to increase the amount of training data. But the amount of improvement you're going to get after about 14,500 rows is much smaller. And so you might decide, OK, I'll take 14,500 rows of training data rather than 10,000.

[4:24] But the important thing is, when you see a very noisy, unintelligible learning graph, always try to smooth it before you give up on obtaining information from it. Typically, it can be massaged into a nice graph that you can get information from. Now, for the next two exercises, there are a couple of points I'm going to make that you can think about while we're looking at them. In exercise one, when you look at the picture, you might see (and this depends a little on the randomness involved in selecting the training data) that the in-sample error is not monotonically increasing, but rather decreasing.

[5:10] Well, as I've emphasised in this little clip, real-life learning graphs are noisy and a long way from the theoretical ideal. And this is just an example of that. For exercise two, rather than a point to note, I've got a few questions you might want to think about. Firstly, which models appear to have significant room for improvement? Secondly, which models look to have better optimal performance? And thirdly, which models would you keep working with? Those are three questions you can think about while we're doing exercise two, and then we'll have a quiz on them after the exercise.

Learning Graphs

A discussion of working with learning graphs. We discuss an idealised learning graph and what information we can obtain from it. We then see how difficult learning graphs can be to interpret, and how we can try to make such cases usable via smoothing. We also introduce points and questions related to the following two exercises on learning graphs. The associated code is found in the Tidbits Learning Graphs.R file.

This video is from the free online course: Advanced Machine Learning (The Open University).