Exploring the Experimenter

Ian Witten shows how to use the Experimenter to find the performance of classification algorithms on datasets.
10.8
Hello! Welcome back to New Zealand for another few minutes of More Data Mining with Weka. By the way, I’d just like to thank all of those who did the first course for their nice comments and feedback. You know, the University of Waikato is just a little university on the far side of the world, but they listen. They listen when they hear feedback, and they’ve listened to you. As you can see, they’ve put me in a bigger office with more books and bigger plants. This has been great. They really appreciate the positive feedback that we’ve had from you for the previous course. Thank you very much indeed. Today we’re going to look at the Experimenter.
50.1
As you know, there are five interfaces to Weka: the Explorer, which we looked at in the last course; the Experimenter; and three more. We’re going to look at the Experimenter today, and in the next lesson as well. It’s used for things like determining the mean and standard deviation of the performance of a classification algorithm on a dataset, which you actually did manually in the previous course. It’s easy to run several algorithms on several datasets, and you can find out whether one classifier is better than another on a particular dataset and whether the difference is statistically significant or not.
83.8
You can check the effect of different parameter settings for an algorithm, and you can express the results of these tests as an ARFF file, so you can do data mining on the results of data mining experiments, if you like. In the Experimenter, the computation sometimes takes days or even weeks, and it can be distributed over several computers, like all of the computers in a lab.
108.4
When you invoke the Experimenter, you get three panels: the Setup panel, the Run panel, and the Analyse panel. Before we go to those, let me just refresh your memory. This is a slide from Data Mining with Weka, where we talked about the training set and the test set. A basic assumption of machine learning is that these are independent sets produced by independent sampling from an infinite population. We took a dataset, segment-challenge, and a learning algorithm J48, and we used a percentage split method of evaluation. We evaluated it and got a certain figure for the accuracy. Then we repeated that with different random number seeds, and in fact we got ten different figures for the accuracy.
151.5
From those we manually computed the sample mean and the variance, and hence the standard deviation. Also, while we’re at it, let me just remind you about cross-validation. In Data Mining with Weka, we looked at this technique of 10-fold cross-validation, which involved dividing the dataset into ten parts, holding out each part in turn, and averaging the results of the ten runs. Let’s get into the Experimenter. If I just go here and click Experimenter, I get the Setup panel. I’m going to start a new experiment. I’m just going to note that we’ve got 10-fold cross-validation by default, and we’re repeating the experiment ten times by default. I’m going to add a dataset. I’m going to add the segment-challenge dataset, which is here.
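Just to make that manual computation concrete, here is a minimal Java sketch that computes the sample mean and standard deviation the way we did by hand in the first course. The ten accuracy figures are invented for illustration, not actual results:

    public class AccuracyStats {
        public static void main(String[] args) {
            // Ten illustrative accuracy figures (percent correct), one per run.
            double[] acc = {95.7, 95.0, 96.1, 94.8, 95.5, 95.9, 95.2, 96.0, 95.4, 95.8};
            double sum = 0;
            for (double a : acc) sum += a;
            double mean = sum / acc.length;
            double sumSq = 0;
            for (double a : acc) sumSq += (a - mean) * (a - mean);
            // The sample variance divides by n - 1, not n.
            double stdDev = Math.sqrt(sumSq / (acc.length - 1));
            System.out.printf("mean = %.2f, std dev = %.2f%n", mean, stdDev);
        }
    }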
203.8
I’m going to add a machine learning algorithm. I’m going to use J48. You’ve seen this kind of menu before many, many times; it’s the same as in the Explorer. If I just select J48 and click OK, then I’ve got this dataset and this learning algorithm. Well, let’s just run it. So off I go to the Run panel and click Start. It’s running. You can see at the bottom here, it’s doing the 5th, 6th, 7th, 8th, 9th, 10th run, because we repeated the whole thing ten times. We repeated 10-fold cross-validation ten times. Now, if I go to the Analyse panel, it doesn’t show anything. I need to analyze the results of the experiment I just did. Click Experiment.
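Incidentally, what the Experimenter is doing here can also be sketched with Weka’s Java API: ten repetitions of 10-fold cross-validation, each with a different random seed. This is a minimal sketch, assuming segment-challenge.arff is in the working directory and weka.jar is on the classpath:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;

    public class RepeatedCrossValidation {
        public static void main(String[] args) throws Exception {
            Instances data = new Instances(
                new BufferedReader(new FileReader("segment-challenge.arff")));
            data.setClassIndex(data.numAttributes() - 1);  // class is the last attribute
            // Ten repetitions of 10-fold cross-validation with seeds 1..10,
            // mirroring the Experimenter's defaults.
            for (int seed = 1; seed <= 10; seed++) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(new J48(), data, 10, new Random(seed));
                System.out.printf("run %d: %.4f%% correct%n", seed, eval.pctCorrect());
            }
        }
    }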
250.4
And I need to perform the test. You can see here that it’s showing for a dataset called “segment” that we’ve got an average of 95.71% correct using this J48 algorithm. We wanted to look at the standard deviation. If I click Show std. deviations, and perform the test again, then I get the standard deviation. We’ve effectively done what we did rather more laboriously in the first course by doing ten individual runs. Over on the slide here, this summarizes what we’ve done. In the Setup panel, we set things up. In the Run panel, we just clicked Start; and in the Analyse panel, we clicked Experiment, and we selected Show std. deviations and performed the test.
296.2
Now, what about those detailed results of the individual runs? I’m going to go back to the Setup panel here. I’m going to write the results to a CSV file. I think I’ll just do a percentage split. I’ll do 90% training, 10% test. I’ve got my dataset and my machine learning method, so I’ll just go and run.
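For reference, a single 90%/10% percentage-split run corresponds roughly to the following sketch in Weka’s Java API; the file name is assumed, and the Experimenter repeats this with a different random seed on each of its ten runs:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;

    public class PercentageSplit {
        public static void main(String[] args) throws Exception {
            Instances data = new Instances(
                new BufferedReader(new FileReader("segment-challenge.arff")));
            data.setClassIndex(data.numAttributes() - 1);
            data.randomize(new Random(1));  // one run; the Experimenter varies the seed
            int trainSize = (int) Math.round(data.numInstances() * 0.9);
            Instances train = new Instances(data, 0, trainSize);
            Instances test = new Instances(data, trainSize, data.numInstances() - trainSize);
            J48 tree = new J48();
            tree.buildClassifier(train);
            Evaluation eval = new Evaluation(train);
            eval.evaluateModel(tree, test);
            System.out.printf("%.4f%% correct%n", eval.pctCorrect());
        }
    }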
325.9
If I look at the CSV file that’s been produced, well, here it is. We repeated the experiment 10 times, and these are the 10 different runs. For each of these 10 runs, we’ve got a lot of information. The information that we’re really looking for here is Percent_correct: the percent correct for each of those 10 separate runs. We’ve got all sorts of other stuff here, including, for example, the user time, the elapsed time, and lots and lots of other things. Maybe you should take a look at those yourself. That’s given us the detailed results for each of the 10 runs. I’m going to do 10-fold cross-validation now.
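If you want to process those detailed results outside Weka, a sketch like the following pulls the Percent_correct column out of the CSV file. The file name is assumed, and the naive comma split assumes no quoted commas inside fields:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.Arrays;

    public class ReadResults {
        public static void main(String[] args) throws Exception {
            try (BufferedReader in = new BufferedReader(new FileReader("results.csv"))) {
                // Find the Percent_correct column from the header row.
                String[] header = in.readLine().split(",");
                int col = Arrays.asList(header).indexOf("Percent_correct");
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line.split(",")[col]);  // one value per row
                }
            }
        }
    }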
377.9
Those were the 10 repetitions of a single percentage split. If I do 10-fold cross-validation instead, write the result into a file, and run it again, it takes a little bit longer, because it’s doing cross-validation each time. Now it’s finished, and if we look at the resulting file, we get something that’s very similar but much bigger. We repeated the whole thing 10 times; we repeated 10-fold cross-validation 10 times. Here are the 10 folds of the first run, here are the 10 folds of the second run, and so on. I’ve got the same results as I had before along here.
425.5
So I’ve got a very detailed account of what was done in that experiment. Just coming back to the slides here. To get detailed results, we went back to the Setup panel, selected CSV file, and put in a file name for the results. This is the file that we got with percentage split. Then we did the same thing for the cross-validation experiment, and we got a larger results spreadsheet. Let’s just review the Experimenter. We’ve got three panels. In the Setup panel, you can open an experiment, and you can save an experiment, but what we usually do is start a new experiment. We normally start by clicking here. There’s an Advanced mode.
468.5
We’re not going to talk about the Advanced mode here; we’re going to continue to use the simple mode of the Experimenter. You can set a file name for the results if you want, either an ARFF file or a CSV file or, in fact, a database file. You can do either a cross-validation or a percentage split. Actually, you can preserve the order in percentage split. The reason for that is that there’s no way of specifying a separate test file in the Experimenter. To do that, you would kind of glue the training set and test set together and preserve the order and specify the appropriate percentage so that those last instances were used as the test set.
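As a worked example of that trick, suppose the training set has 900 instances and the separate test set has 100: you append the test instances after the training instances, select Preserve order, and set the split to 90%, so the last 100 instances serve as the test set. Here is a minimal sketch of the gluing step; the file names are hypothetical, and both files must share an identical attribute header:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import weka.core.Instances;

    public class GlueTrainAndTest {
        public static void main(String[] args) throws Exception {
            Instances train = new Instances(new BufferedReader(new FileReader("train.arff")));
            Instances test = new Instances(new BufferedReader(new FileReader("test.arff")));
            Instances combined = new Instances(train);  // copy of the training set
            for (int i = 0; i < test.numInstances(); i++) {
                combined.add(test.instance(i));  // append test instances at the end
            }
            // With Preserve order selected, use this percentage in the Experimenter
            // so the appended instances become the test set.
            double pct = 100.0 * train.numInstances() / combined.numInstances();
            System.out.printf("split percentage = %.1f%%%n", pct);
        }
    }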
508.4
Normally, if we’re not doing that, we just randomize things for the percentage split. We’ve got the number of repetitions: we repeated the whole thing 10 times, but we could have repeated it 100 times. Here we can add new datasets, and we can delete datasets that we’ve added. Here we can add more learning algorithms into the learning algorithms list. That’s the Setup panel. Then there’s the Run panel. You don’t do much in the Run panel except click Start and monitor for errors here. There were zero errors in the 3 runs I did.
545.3
Then, in the Analyse panel, you can load results from a file or a database, but what we normally want to do is click Experiment here to get the results from the experiment we’ve just done. There are many options, and we’re going to be looking at some of these options as we go through this course. That’s the Experimenter.

Learn how to use Weka’s Experimenter interface. It makes it easy to run different algorithms on different datasets and compare the results – whether one classifier is better than another on a particular dataset and whether the difference is statistically significant or not. The results of these tests can be output as an ARFF file – so you can do data mining on the results of data mining experiments!
