
Main types of regularization

In this article, Dr Hao discusses the major types of regularization that can overcome the problem of overfitting.

Regularized linear regression methods are divided into categories according to the norm of the parameters used in the penalty term.

Recall that to resolve the overfitting issue, we consider the constrained optimization problem \(\min_{\beta} (Y - X\beta)^{T}(Y - X\beta), \text{ s.t. } \|\beta\| \leq t.\)

By the Lagrange multiplier method, this is equivalent to solving the following unconstrained problem with an additional penalty term:

\[\min_{\beta} (Y - X\beta)^{T}(Y - X\beta) + \lambda \|\beta\|.\]
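To make the penalized objective concrete, here is a minimal NumPy sketch that evaluates the residual sum of squares plus a penalty term for a given norm. The toy data, the function name `penalized_rss`, and the choice of lambda are all illustrative assumptions, not part of the article.

```python
import numpy as np

# Hypothetical toy data: 20 samples, 3 features (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
beta_true = np.array([2.0, 0.0, -1.0])
Y = X @ beta_true + 0.1 * rng.normal(size=20)

def penalized_rss(beta, X, Y, lam, norm_fn):
    """Residual sum of squares plus the penalty lam * norm_fn(beta)."""
    resid = Y - X @ beta
    return resid @ resid + lam * norm_fn(beta)

# With lam = 0 the penalty vanishes and the objective is the ordinary
# least-squares residual sum of squares.
beta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
l1_norm = lambda b: np.sum(np.abs(b))
print(penalized_rss(beta_ols, X, Y, lam=1.0, norm_fn=l1_norm))
```

Swapping `norm_fn` between the L1 norm, the squared L2 norm, or a mix of the two yields the three objectives listed below.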

Depending on the norm used in the penalty term, there are three commonly used regularizations to avoid overfitting issues as follows:

  • Lasso Regression:

\[L(\beta \vert X, Y) = (Y - X\beta)^{T}(Y - X\beta) + \lambda \|\beta\|_{1};\]

  • Ridge Regression:

\[L(\beta \vert X, Y) = (Y - X\beta)^{T}(Y - X\beta) + \lambda \|\beta\|_{2}^{2};\]

  • Elastic Net:

\[L(\beta \vert X, Y) = (Y - X\beta)^{T}(Y - X\beta) + \lambda \left( \frac{1-\alpha}{2}\|\beta\|_{2}^{2} + \alpha \|\beta\|_{1}\right).\]
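The three regularizers above can be fit directly with scikit-learn; the sketch below is a rough illustration on synthetic data, not part of the article. Note that scikit-learn's `alpha` parameter plays the role of \(\lambda\) only up to a scaling convention (its Lasso and ElasticNet divide the squared-error term by \(2n\)), and its `l1_ratio` corresponds to \(\alpha\) in the elastic-net formula.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge, ElasticNet

# Synthetic data with a sparse true coefficient vector (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta_true = np.zeros(10)
beta_true[:3] = [3.0, -2.0, 1.5]
Y = X @ beta_true + 0.5 * rng.normal(size=100)

# alpha values here are arbitrary choices for the demonstration.
models = {
    "lasso": Lasso(alpha=0.1),
    "ridge": Ridge(alpha=1.0),
    "enet": ElasticNet(alpha=0.1, l1_ratio=0.5),
}
for name, model in models.items():
    model.fit(X, Y)
    n_zero = int(np.sum(np.abs(model.coef_) < 1e-8))
    print(f"{name}: {n_zero} coefficients exactly zero")
```

A typical run shows the qualitative difference between the penalties: the L1 term in Lasso and Elastic Net can drive some coefficients exactly to zero, whereas the squared L2 penalty in Ridge only shrinks them toward zero.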

This article is from the free online

An Introduction to Machine Learning in Quantitative Finance

Created by
FutureLearn - Learning For Life
