Summary for Week One
So we’ve come to the end of the first week of Advanced Machine Learning! We hope you are enjoying it.
Besides welcoming you to the course, this week has concentrated on background concepts and theory that we will need as we proceed to look at particular algorithms and methodologies in the remainder of the course. Here is a reminder of what you should have understood from this week's materials:
1. The Data Science Workflow
You should know what the basic data science workflow looks like, and what steps are the core responsibilities of a data scientist.
2. Types of Learning
You should know the four major types of learning - supervised, unsupervised, semi-supervised and reinforcement learning. You should know the problems each type of learning seeks to solve and the data it uses to solve them. We will, of course, be looking at techniques from these different types of learning in most of the remainder of the course.
3. Linearity, Non-Linearity and Feature Transformations
You should know the difference between linear and non-linear models, and understand the basic recipe of using non-linear feature transformations to turn a linear modeling approach into a non-linear one.
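The recipe above can be sketched numerically. In this illustrative example (the data and features are my own assumptions, not from the course materials), a purely linear least-squares fit recovers a quadratic target exactly once the input is expanded into polynomial features - the model stays linear in its parameters, but becomes non-linear in the input:

```python
import numpy as np

# Hypothetical example: fit y = x^2 with a linear model by first
# expanding x into the polynomial features [1, x, x^2].
x = np.linspace(-1, 1, 50)
y = x**2  # a non-linear target

# Feature transformation: phi(x) = [1, x, x^2]
Phi = np.column_stack([np.ones_like(x), x, x**2])

# Ordinary least squares on the transformed features is still a
# linear model in the parameters w, but non-linear in x.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
print(np.allclose(pred, y))  # True: the quadratic is recovered exactly
```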
4. Expected Loss and the Bias-Variance Decomposition
You should understand the idea of expected loss, and the bias-variance-irreducible error decomposition of expected loss. You should understand the relationship between model complexity and bias/variance.
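For the common case of squared-error loss, with data generated as $y = f(x) + \varepsilon$ where $E[\varepsilon] = 0$ and $\operatorname{Var}(\varepsilon) = \sigma^2$, the decomposition of the expected loss of an estimator $\hat{f}$ takes this standard form:

```latex
E\!\left[\left(y - \hat{f}(x)\right)^2\right]
= \underbrace{\left(E[\hat{f}(x)] - f(x)\right)^2}_{\text{bias}^2}
+ \underbrace{E\!\left[\left(\hat{f}(x) - E[\hat{f}(x)]\right)^2\right]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{irreducible error}}
```

Increasing model complexity typically lowers the bias term while raising the variance term, which is the trade-off referred to above.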
5. Training as Optimization
You should understand what statistical models are, and how training (or fitting) a model's parameters on training data can be understood as the optimization of a loss function.
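This view of training can be sketched in a few lines. In this minimal example (the data, learning rate, and iteration count are illustrative assumptions), fitting a one-parameter model y ≈ w·x means minimizing the mean squared error by gradient descent:

```python
import numpy as np

# Illustrative data generated with true parameter w = 2.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x

w = 0.0    # initial parameter value
lr = 0.1   # learning rate (an assumption for this sketch)
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # d/dw of the mean squared error
    w -= lr * grad                        # gradient descent step

print(round(w, 3))  # converges towards the true value: 2.0
```

Training here is nothing more than iteratively adjusting the parameter in the direction that decreases the loss.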
6. Regularization
You should understand that regularization provides a means of controlling the complexity of a statistical model, and how the paradigmatic L1 and L2 regularization techniques achieve this.
Armed with this knowledge, we are now ready to get into the algorithms and techniques that allow us to actually perform machine learning!
See you next week…
© Dr Michael Ashcroft