Ridge regression is a particularly effective way of dealing with multicollinearity in multiple regression. Just as LASSO uses an L1 penalty, Ridge uses an L2 penalty, which penalizes the model based on the square of the magnitude of the coefficients. This incentivizes models with small coefficients, shrinking all of the coefficients toward zero. However, unlike LASSO, ridge regression will not shrink coefficients all the way to 0. The shrinkage introduces some bias, but it also lowers variance, which makes for better predictive accuracy in the presence of multicollinearity.
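For reference, the ridge coefficients are the values that minimize the usual residual sum of squares plus the L2 penalty (with \(\lambda\) being the tuning parameter discussed next):

\[
\hat{\beta}^{\text{ridge}} = \arg\min_{\beta}\ \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^2 + \lambda \sum_{j=1}^{p} \beta_j^2
\]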
A tuning parameter, \(\lambda\), controls the amount of penalty applied to the model. \(\lambda\) can be any non-negative value: 0 applies no penalty (plain least squares), and larger values shrink the coefficients more. One can procedurally optimize \(\lambda\) by fitting models across many \(\lambda\) values and choosing the \(\lambda\) that produces the best model, typically via cross-validation.
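Here's a minimal sketch of that tuning procedure in Python with scikit-learn (which calls the penalty strength alpha rather than \(\lambda\)); the data is synthetic, purely for illustration:

```python
# A minimal sketch of tuning lambda by cross-validation with scikit-learn,
# which calls the penalty strength "alpha".
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV

# Synthetic data, purely for illustration
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

# Fit ridge models over a grid of lambda (alpha) values and keep the best one
alphas = np.logspace(-3, 3, 50)
model = RidgeCV(alphas=alphas, cv=5).fit(X, y)

print("Best lambda (alpha):", model.alpha_)
print("Ridge coefficients:", np.round(model.coef_, 2))
```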
Here's an example of Ridge used in Excel (remember Excel?): https://www.real-statistics.com/multiple-regression/ridge-and-lasso-regression/ridge-regression-example/
If you want to do Ridge (or LASSO) in R or Python, the glmnet package in R and the scikit-learn package in Python have great functions for fitting these models.
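As a small illustration of the Python side (glmnet works similarly in R), here's a sketch on made-up data comparing ordinary least squares with Ridge, so you can see the coefficients shrink toward, but not to, zero; the data and the alpha value are arbitrary:

```python
# A quick sketch comparing OLS and Ridge coefficients with scikit-learn;
# the data and alpha value are arbitrary, chosen only for illustration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge

X, y = make_regression(n_samples=100, n_features=5, noise=5.0, random_state=1)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)  # alpha is scikit-learn's name for lambda

print("OLS coefficients:  ", np.round(ols.coef_, 2))
print("Ridge coefficients:", np.round(ridge.coef_, 2))  # shrunk toward zero, none forced exactly to zero
```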