# Main types of regularization

In this article, Dr Hao discusses the major types of regularization that can overcome the problem of overfitting.

Regularized linear regression methods are divided into categories according to the norm of the parameters used in the penalty term.

Recall that to resolve the overfitting issue, we consider the constrained optimization problem \(\min_{\beta} (Y - X\beta)^{T}(Y - X\beta), \text{ s.t. } \|\beta\| \leq t.\)

By the method of Lagrange multipliers, this is equivalent to solving the following unconstrained problem, obtained by adding a penalty term:

\[\min_{\beta} (Y - X\beta)^{T}(Y - X\beta) + \lambda \|\beta\|.\]
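As a concrete illustration of the penalized form, here is a minimal NumPy sketch on synthetic data (the variable names and data are of my choosing). It solves the special case of a squared L2 penalty, for which the penalized problem has a closed-form solution:

```python
import numpy as np

# Synthetic data (illustrative only): Y = X beta_true + noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
Y = X @ beta_true + rng.normal(scale=0.1, size=50)

lam = 1.0  # the penalty weight lambda

# For the squared L2 penalty, setting the gradient of
# (Y - X beta)^T (Y - X beta) + lam * beta^T beta to zero gives
# (X^T X + lam I) beta = X^T Y, so the minimizer is:
beta_pen = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
print(np.round(beta_pen, 3))
```

Note the shrinkage effect: the penalized solution has a smaller norm than the ordinary least-squares solution, which is exactly what the constraint \(\|\beta\| \leq t\) enforces.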

Depending on the norm used in the penalty term, three regularizations are commonly used to avoid overfitting:

• Lasso Regression:

\[L(\beta \mid X, Y) = (Y - X\beta)^{T}(Y - X\beta) + \lambda \|\beta\|_{1};\]

• Ridge Regression:

\[L(\beta \mid X, Y) = (Y - X\beta)^{T}(Y - X\beta) + \lambda \|\beta\|_{2}^{2};\]

• Elastic Net:

\[L(\beta \mid X, Y) = (Y - X\beta)^{T}(Y - X\beta) + \lambda \left( \frac{1-\alpha}{2}\|\beta\|_{2}^{2} + \alpha \|\beta\|_{1} \right).\]
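All three estimators are available in scikit-learn as `Lasso`, `Ridge`, and `ElasticNet`. A minimal sketch on synthetic data (my own example, not from the article) fitting all three; note that scikit-learn uses slightly different scalings of the objective (e.g. `Lasso` minimizes \(\frac{1}{2n}\|Y - X\beta\|_{2}^{2} + \alpha\|\beta\|_{1}\)), so its `alpha` is not identical to the \(\lambda\) above:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge

# Synthetic data with a sparse true coefficient vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta_true = np.array([3.0, 0.0, -2.0, 0.0, 1.0])
Y = X @ beta_true + rng.normal(scale=0.5, size=100)

models = {
    "lasso": Lasso(alpha=0.5),                    # L1 penalty
    "ridge": Ridge(alpha=0.5),                    # squared L2 penalty
    "elastic net": ElasticNet(alpha=0.5, l1_ratio=0.5),  # mix of both
}
for name, model in models.items():
    model.fit(X, Y)
    print(f"{name:12s}", np.round(model.coef_, 2))
```

The qualitative difference shows up in the fitted coefficients: the L1 penalty in the lasso and elastic net tends to set some coefficients exactly to zero (variable selection), while ridge only shrinks them toward zero.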