
Summary for Week One

So we’ve come to the end of the first week of Advanced Machine Learning! We hope you are enjoying it.

Besides welcoming you to the course, this week has concentrated on the background concepts and theory that we will need as we proceed to look at particular algorithms and methodologies in the remainder of the course. Here is a reminder of what you should have understood from this week's materials:

1. The Data Science Workflow

You should know what the basic data science workflow looks like, and which of its steps are the core responsibilities of a data scientist.

2. Types of Learning

You should know the four major types of learning: supervised, unsupervised, semi-supervised and reinforcement learning. You should know the problems each type of learning seeks to solve and the data it uses to solve them. We will, of course, be looking at techniques from these different types of learning in most of the remainder of the course.
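To make the distinction concrete, here is a minimal sketch of the data each type of learning consumes. The arrays, sizes and split below are illustrative assumptions, not examples from the course materials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Supervised learning: inputs X paired with known targets y.
X = rng.normal(size=(100, 3))    # 100 examples, 3 features
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Unsupervised learning: only X is available; we look for structure
# (clusters, low-dimensional subspaces) without any targets.
X_unlabelled = rng.normal(size=(100, 3))

# Semi-supervised learning: a small labelled set plus a large unlabelled pool.
X_labelled, y_labelled = X[:10], y[:10]
X_pool = X[10:]

# Reinforcement learning differs: instead of a fixed dataset, the learner
# gathers (state, action, reward) interactions from an environment.
```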

3. Linearity, Non-Linearity and Feature Transformations

You should know the difference between linear and non-linear models, and understand the basic recipe of using non-linear feature transformations to turn a linear modeling approach into a non-linear one.
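As an illustrative sketch of that recipe (the data and quadratic target here are assumptions, not the course's examples), the very same linear least-squares routine fits a non-linear function once we transform the features:

```python
import numpy as np

# Hypothetical 1-D data from a quadratic: a model linear in x underfits it.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 50)
y = x**2 + rng.normal(scale=0.1, size=x.size)

# Linear model on the raw feature x: design matrix with columns (1, x).
A_lin = np.column_stack([np.ones_like(x), x])
w_lin, *_ = np.linalg.lstsq(A_lin, y, rcond=None)
err_lin = np.mean((A_lin @ w_lin - y) ** 2)

# The same linear fitting routine on transformed features (1, x, x^2):
# still linear in its parameters, but now non-linear in x.
A_poly = np.column_stack([np.ones_like(x), x, x**2])
w_poly, *_ = np.linalg.lstsq(A_poly, y, rcond=None)
err_poly = np.mean((A_poly @ w_poly - y) ** 2)

# err_poly is far smaller than err_lin: the transformed model captures
# the curvature that no straight line can.
```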

4. Bias-Variance

You should understand the idea of expected loss, and the bias-variance-irreducible error decomposition of expected loss. You should understand the relationship between model complexity and bias/variance.
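A Monte Carlo sketch can make the decomposition tangible. Everything below is an illustrative assumption (squared loss, a sine target, and a deliberately high-bias constant predictor), not the course's own example:

```python
import numpy as np

# Estimate bias^2, variance and irreducible error at one test point x0.
rng = np.random.default_rng(2)
f = np.sin            # true function (illustrative choice)
sigma = 0.3           # irreducible noise standard deviation
x0 = 1.0              # test point

preds = []
for _ in range(2000):                                  # many training sets
    x_tr = rng.uniform(-np.pi, np.pi, size=20)
    y_tr = f(x_tr) + rng.normal(scale=sigma, size=20)
    preds.append(y_tr.mean())                          # constant predictor
preds = np.array(preds)

bias_sq = (preds.mean() - f(x0)) ** 2     # (E[pred] - f(x0))^2
variance = preds.var()                    # Var[pred] across training sets
irreducible = sigma ** 2                  # noise no model can remove

# Direct Monte Carlo estimate of the expected squared loss at x0 ...
y0 = f(x0) + rng.normal(scale=sigma, size=preds.size)
direct_loss = np.mean((preds - y0) ** 2)

# ... which matches bias^2 + variance + irreducible error, up to MC noise.
decomposed = bias_sq + variance + irreducible
```

The constant predictor has high bias (it ignores the input) but low variance, illustrating the complexity trade-off from this week's materials.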

5. Training as Optimization

You should understand what statistical models are, and how training (or fitting) their parameters given training data can be understood as the optimization of a loss function.
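As a minimal sketch of that idea (assuming a one-dimensional linear model and squared loss; the data and learning rate are illustrative), training is nothing more than gradient descent on the loss:

```python
import numpy as np

# Fit y ~ w*x + b by minimising the mean-squared-error loss L(w, b).
rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 0.5 + rng.normal(scale=0.05, size=200)   # true w=2.0, b=0.5

w, b = 0.0, 0.0     # initial parameters
lr = 0.1            # learning rate (step size)
for _ in range(500):
    resid = (w * x + b) - y
    # Gradients of L = mean(resid^2) with respect to w and b.
    w -= lr * 2 * np.mean(resid * x)
    b -= lr * 2 * np.mean(resid)

# After training, (w, b) is close to the generating values (2.0, 0.5):
# "fitting the model" and "minimising the loss" are the same operation.
```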

6. Regularization

You should understand both that regularization provides a means of controlling the complexity of a statistical model, and how the paradigmatic L1 and L2 regularization techniques achieve this.
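A sketch contrasting the two penalties on a linear model may help. The data, the penalty strength and the proximal-gradient (ISTA) solver for L1 are illustrative assumptions, not the course's examples:

```python
import numpy as np

# L2 (ridge) adds lam * ||w||_2^2 to the loss and shrinks all weights;
# L1 (lasso) adds lam * ||w||_1 and drives some weights exactly to zero.
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 5))
true_w = np.array([3.0, 0.0, 0.0, -2.0, 0.0])    # a sparse ground truth
y = X @ true_w + rng.normal(scale=0.1, size=100)

lam = 5.0   # penalty strength (illustrative)

# Ridge has a closed form: w = (X^T X + lam I)^{-1} X^T y.
# Every weight shrinks toward zero, but none becomes exactly zero.
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

# Lasso via proximal gradient descent (ISTA): a gradient step on the
# squared loss, then soft-thresholding, which zeroes out small weights.
w_lasso = np.zeros(5)
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
for _ in range(1000):
    w_tmp = w_lasso - step * X.T @ (X @ w_lasso - y)
    w_lasso = np.sign(w_tmp) * np.maximum(np.abs(w_tmp) - step * lam, 0.0)

# w_lasso recovers the sparsity pattern of true_w; w_ridge merely shrinks.
```

This is the practical difference you should take away: L2 controls complexity by shrinking weights smoothly, while L1 additionally performs feature selection.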

Armed with this knowledge, we are now ready to get into the algorithms and techniques that allow us to actually perform machine learning!

See you next week…


This article is from the free online course:

Advanced Machine Learning

The Open University
