I’m sorry, but you can’t really use data mining to foretell the future. ☹️
What we’re going to look at is a particular kind of data called “time series data”. This involves an attribute or attributes, typically numeric, whose value evolves over time. Each instance has a time-stamp, and the time-stamps may or may not be spaced at regular intervals.
We often use linear regression to extrapolate the attribute’s value to future instances. You may be asking: isn’t straight-line extrapolation a bit mundane? And you’ll probably be surprised (as I was) to learn that there are simple ways of augmenting the dataset that allow linear regression to model cyclic phenomena. Augmenting datasets manually is a frustrating experience, so we’ll learn how to use Weka’s time series forecasting package, which automates this and other related functions. During the week you’ll analyze historical airline passenger data and wine sales. (Unfortunately you do not get to drink the wine.)
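To see why this augmentation trick works, here is a minimal sketch (not Weka's actual implementation, just an illustration in NumPy): a series with a known yearly cycle is fitted twice with ordinary least squares, once on the time step alone and once with sine and cosine columns added. The period and coefficients are made-up example values.

```python
import numpy as np

# Hypothetical monthly series: a linear trend plus a yearly cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(120)  # 10 years of monthly time steps
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)

# Plain linear regression on time captures only the trend...
X_lin = np.column_stack([np.ones_like(t), t])
coef_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)
resid_lin = y - X_lin @ coef_lin

# ...but augmenting the dataset with sine/cosine columns of the known
# period lets the *same* linear model capture the cycle as well.
X_aug = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t / 12),
    np.cos(2 * np.pi * t / 12),
])
coef_aug, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
resid_aug = y - X_aug @ coef_aug

print(resid_lin.std(), resid_aug.std())  # augmented residuals are far smaller
```

The model is still linear in its coefficients; the cyclic behaviour comes entirely from the extra columns, which is exactly the kind of augmentation the forecasting package automates.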
This week our end-of-week example of a data mining application is about inferring properties of soil samples from infrared data. We’ll also look at some general challenges for data mining applications.
At the end of this week you will be able to explain the role of “lagged variables” in time-series analysis. You’ll be experienced in the use of Weka’s time series forecasting package, and be able to work with data that varies on an hourly, daily, weekly, monthly, and yearly basis. You’ll understand that the standard holdout and cross-validation methods simply do not work for time series, and know what to do about it. And you’ll be able to explain what “overlay data” is and how valuable it can be. Oh yes, and you’ll have even more experience of that perennial problem of overfitting, and how to detect it.
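As a preview of two of those ideas, here is a small sketch (my own NumPy illustration, with a made-up `make_lagged` helper, not Weka code): lagged variables turn a plain series into a supervised dataset, and evaluation must split that dataset chronologically rather than at random.

```python
import numpy as np

def make_lagged(series, n_lags):
    """Turn a 1-D series into a supervised dataset whose features are
    the previous n_lags values (the "lagged variables")."""
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

series = np.arange(10, dtype=float)
X, y = make_lagged(series, n_lags=3)
# First row: features [0, 1, 2] predict target 3, and so on.

# Evaluation must respect time order: train on the past, test on the
# future.  A random holdout or cross-validation fold would leak future
# values into the training set via the lagged features.
split = int(0.8 * len(y))
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]
```

Any standard learner, including linear regression, can then be trained on `X_train` and evaluated on the strictly later `X_test`.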