What is regularization?

16 November 2023

Authors

Jacob Murel Ph.D.

Senior Technical Content Creator

Eda Kavlakoglu

Program Manager

Regularization is a set of methods for reducing overfitting in machine learning models. Typically, regularization trades a marginal decrease in training accuracy for an increase in generalizability.

Regularization encompasses a range of techniques to correct for overfitting in machine learning models. As such, regularization is a method for increasing a model’s generalizability—that is, its ability to produce accurate predictions on new datasets.1 Regularization provides this increased generalizability at the cost of increased training error. In other words, regularization methods typically lead to less accurate predictions on training data but more accurate predictions on test data.

Regularization differs from optimization. Essentially, the former increases model generalizability while the latter increases model training accuracy. Both are important concepts in machine learning and data science.

There are many forms of regularization; a complete guide would require a book-length treatment. Nevertheless, this article provides an overview of the theory necessary to understand regularization’s purpose in machine learning as well as a survey of several popular regularization techniques.

Bias-variance tradeoff

This concession of increased training error for decreased testing error is known as the bias-variance tradeoff, a well-known problem in machine learning. Understanding it requires first defining “bias” and “variance.” To put it briefly:

- Bias measures the average difference between predicted values and true values. As bias increases, a model predicts less accurately on a training dataset. High bias refers to high error in training.

- Variance measures the difference between predictions across various realizations of a given model. As variance increases, a model predicts less accurately on unseen data. High variance refers to high error during testing and validation.

Bias and variance thus inversely represent model accuracy on training and test sets respectively.2 Developers naturally aim to reduce both model bias and variance, but simultaneous reduction in both is not always possible, resulting in the need for regularization. Regularization decreases model variance at the cost of increased bias.
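For readers who want the formal statement, the standard textbook decomposition of expected squared prediction error makes the tradeoff explicit (here \(\sigma^2\) denotes the irreducible noise in the data, and \(\hat{f}\) the fitted model):

\[
\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \mathrm{Bias}\big[\hat{f}(x)\big]^2 + \mathrm{Var}\big[\hat{f}(x)\big] + \sigma^2
\]

Because the noise term is fixed, lowering total error means trading bias against variance.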

Regression model fits

By increasing bias and decreasing variance, regularization resolves model overfitting. Overfitting occurs when error on training data continues to decrease while error on testing data ceases to decrease or begins to increase.3 In other words, overfitting describes models with low bias and high variance. However, if regularization introduces too much bias, then a model will underfit.

Despite its name, underfitting does not denote overfitting’s opposite. Rather, underfitting describes models characterized by high bias and high variance. An underfitted model produces unsatisfactorily erroneous predictions during training and testing. This often results from insufficient training data or parameters.

Regularization, however, can potentially lead to model underfitting as well. If too much bias is introduced through regularization, the reduction in variance no longer compensates for it, and error on both training and test data can increase. Regularization may have this effect particularly on simple models, i.e. models with few parameters. In determining the type and degree of regularization to implement, then, one must consider a model’s complexity, dataset, and so forth.4



Types of regularization with linear models

Linear regression and logistic regression are both predictive models underpinning machine learning. Linear regression (or ordinary least squares) aims to measure and predict the impact of one or more predictors on a given output by finding the best fitting line through provided data points (i.e. training data). Logistic regression aims to determine the probability that an observation belongs to a given class, producing a binary output from a range of predictors. In other words, linear regression makes continuous quantitative predictions while logistic regression produces discrete categorical predictions.5

Of course, as the number of predictors increases in either regression model, the input-output relationship is not always straightforward and requires manipulation of the regression formula. Enter regularization. There are three main forms of regularization for regression models. Note that this list is only a brief survey; the application of these techniques differs only slightly between linear and logistic regression.

- Lasso regression (or L1 regularization) is a regularization technique that penalizes high-value, correlated coefficients. It introduces a regularization term (also called a penalty term) into the model’s sum of squared errors (SSE) loss function. This penalty term is the sum of the absolute values of the coefficients, scaled by the hyperparameter lambda (λ). Because this penalty can shrink select feature weights all the way to zero, lasso regression can remove multicollinear features from the model altogether.

- Ridge regression (or L2 regularization) is a regularization technique that similarly penalizes high-value coefficients by introducing a penalty term into the SSE loss function. It differs from lasso regression in two ways, however. First, the penalty term in ridge regression is the sum of the squared coefficients rather than the sum of their absolute values. Second, ridge regression does not enact feature selection. While lasso regression’s penalty term can remove features from the model by shrinking coefficient values to zero, ridge regression only shrinks feature weights towards zero but never to zero.

- Elastic net regularization essentially combines ridge and lasso regression by inserting both the L1 and L2 penalty terms into the SSE loss function. The L1 term is the sum of the absolute values of the feature weights, and the L2 term is the sum of their squares. By combining the two penalties in one cost function, elastic net addresses multicollinearity while also enabling feature selection.6

In statistics, these methods are also dubbed “coefficient shrinkage,” as they shrink predictor coefficient values in the predictive model. In all three techniques, the strength of the penalty term is controlled by lambda, which is typically selected using cross-validation.
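As a brief, hedged illustration (a sketch rather than a prescribed workflow), the snippet below fits the three penalties in Python with scikit-learn’s cross-validated estimators on a synthetic dataset. The dataset, the alpha grid and the cv=5 setting are illustrative assumptions; note that scikit-learn calls the lambda described above alpha.

```python
# Illustrative sketch: lasso, ridge and elastic net with cross-validated penalty strength.
# The synthetic data, alpha grid and cv=5 are assumptions for demonstration only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, RidgeCV, ElasticNetCV

# Synthetic regression data with more features than are truly informative
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

# L1 penalty: can shrink some coefficients exactly to zero (feature selection)
lasso = LassoCV(cv=5, random_state=0).fit(X, y)

# L2 penalty: shrinks coefficients toward zero but never exactly to zero
ridge = RidgeCV(alphas=np.logspace(-3, 3, 13), cv=5).fit(X, y)

# Elastic net: mixes the L1 and L2 penalties (l1_ratio controls the mix)
enet = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0).fit(X, y)

print("lasso alpha:", lasso.alpha_, "zeroed coefficients:", np.sum(lasso.coef_ == 0))
print("ridge alpha:", ridge.alpha_)
print("elastic net alpha:", enet.alpha_)
```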


Types of regularization in machine learning

Dataset

Data augmentation is a regularization technique that modifies model training data. It expands the size of the training set by creating artificial data samples derived from pre-existing training data. Adding more samples to the training set, particularly of instances rare in real world data, exposes a model to a greater quantity and diversity of data from which it learns. Machine learning research has recently explored data augmentation for classifiers, particularly as a means of resolving imbalanced datasets.7 Data augmentation differs from synthetic data generation, however: the latter involves creating entirely new, artificial data, while the former produces modified duplicates of preexisting data to diversify and enlarge the dataset.
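As a hedged sketch of the idea (the array shapes, the flip and the noise level are illustrative assumptions, not a recommended recipe), the snippet below produces modified duplicates of a toy image batch with NumPy:

```python
# Illustrative sketch of simple image data augmentation with NumPy.
# The toy batch and the augmentation choices (flip, Gaussian noise) are assumptions.
import numpy as np

def augment(images, labels, seed=0):
    """Return the original samples plus horizontally flipped and noisy copies."""
    rng = np.random.default_rng(seed)
    flipped = images[:, :, ::-1]                           # mirror each image left-right
    noisy = images + rng.normal(0.0, 0.05, images.shape)   # add small Gaussian noise
    augmented_images = np.concatenate([images, flipped, noisy], axis=0)
    augmented_labels = np.concatenate([labels, labels, labels], axis=0)
    return augmented_images, augmented_labels

# Toy batch: 8 grayscale "images" of size 28x28 with binary labels
images = np.zeros((8, 28, 28))
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])
aug_images, aug_labels = augment(images, labels)
print(aug_images.shape, aug_labels.shape)  # (24, 28, 28) (24,)
```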

Model training

Early stopping is perhaps the most readily implemented regularization technique. In short, it limits the number of iterations during model training. Here, a model continuously passes through the training data, stopping once validation accuracy ceases to improve (and perhaps even begins to deteriorate). The goal is to train a model until it has reached the lowest possible training error preceding a plateau or increase in validation error.8

Many machine learning Python packages provide training options for early stopping. In fact, in some, early stopping is a default training setting.
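As one hedged example (not the only way to do this), scikit-learn’s SGDClassifier exposes an early_stopping flag that holds out a validation fraction and halts training when the validation score stops improving; the synthetic dataset and the specific parameter values below are illustrative assumptions.

```python
# Illustrative sketch: early stopping with scikit-learn's SGDClassifier.
# The synthetic dataset and parameter values are assumptions for demonstration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

clf = SGDClassifier(
    loss="log_loss",           # logistic-regression-style loss
    early_stopping=True,       # hold out part of the training data for validation
    validation_fraction=0.1,   # 10% of the training data monitors progress
    n_iter_no_change=5,        # stop after 5 epochs without validation improvement
    max_iter=1000,
    random_state=0,
)
clf.fit(X, y)
print("epochs actually run:", clf.n_iter_)
```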

Neural networks

Neural networks are complex machine learning models that drive many artificial intelligence applications and services. Neural networks are composed of an input layer, one or more hidden layers, and an output layer, with each layer in turn made up of several nodes.

Dropout regularizes neural networks by randomly dropping out nodes, along with their input and output connections, from the network during training. Dropout trains several variations of a fixed-size architecture, with each variation having different randomized nodes left out of the architecture. A single neural net without dropout is used for testing, employing an approximate averaging method derived from the randomly modified training architectures. In this way, dropout approximates training a large quantity of neural networks with a multitude of diversified architectures.9
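As a hedged sketch of how this looks in code (the layer sizes and the dropout probability p=0.5 are illustrative assumptions), PyTorch’s nn.Dropout zeroes activations at random during training and is disabled automatically in evaluation mode:

```python
# Illustrative sketch: dropout in a small PyTorch network.
# Layer sizes and the dropout probability p=0.5 are assumptions for demonstration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zero 50% of activations during training
    nn.Linear(64, 2),
)

x = torch.randn(8, 20)

model.train()            # dropout active: random nodes are dropped each forward pass
train_out = model(x)

model.eval()             # dropout disabled: the full network is used at test time
with torch.no_grad():
    test_out = model(x)
```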

Weight decay is another form of regularization used for deep neural networks. It penalizes the sum of squared network weights by way of a regularization parameter, much like L2 regularization in linear models.10 But when employed in neural networks, this reduction has an effect similar to L1 regularization: select neuron weights decrease to zero.11 This effectively removes nodes from the network, reducing network complexity through sparsity.12
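In practice, weight decay is often applied through the optimizer. The following hedged sketch shows PyTorch’s weight_decay argument on SGD; the tiny model, learning rate and decay coefficient are assumptions for demonstration only.

```python
# Illustrative sketch: weight decay applied through the optimizer in PyTorch.
# The model, learning rate and weight_decay value are assumptions for demonstration.
import torch
import torch.nn as nn

model = nn.Linear(20, 2)

# weight_decay adds a penalty on the squared magnitude of the weights at each update,
# analogous to the L2 penalty described for linear models above
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

x = torch.randn(8, 20)
y = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(x), y)

optimizer.zero_grad()
loss.backward()
optimizer.step()          # weights are updated and decayed here
```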

Weight decay may appear superficially similar to dropout in deep neural networks, but the two techniques differ. One primary difference is that dropout’s penalty value can grow exponentially with the network’s depth, whereas weight decay’s penalty value grows linearly. Some believe this allows dropout to more meaningfully penalize network complexity than weight decay.13

Many online articles and tutorials incorrectly conflate L2 regularization and weight decay. In fact, scholarship is inconsistent—some distinguish between L2 and weight decay,14 some equate them,15 while others are inconsistent in describing the relationship between them.16 Resolving such inconsistencies in terminology is a needed yet overlooked area for future scholarship.

Footnotes

1 Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016, https://www.deeplearningbook.org/

2 Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani, and Jonathan Taylor, An Introduction to Statistical Learning with Applications in Python, Springer, 2023, https://link.springer.com/book/10.1007/978-3-031-38747-0

3 Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016, https://www.deeplearningbook.org/

4 Vincent Vandenbussche, The Regularization Cookbook, Packt Publishing, 2023.

5 Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani, and Jonathan Taylor, An Introduction to Statistical Learning with Applications in Python, Springer, 2023, https://link.springer.com/book/10.1007/978-3-031-38747-0

6 Max Kuhn and Kjell Johnson, Applied Predictive Modeling, Springer, 2016. Ludwig Fahrmeir, Thomas Kneib, Stefan Lang, and Brian D. Marx, Regression: Models, Methods and Applications, 2nd edition, Springer, 2021.

7 Trong-Hieu Nguyen-Mau, Tuan-Luc Huynh, Thanh-Danh Le, Hai-Dang Nguyen, and Minh-Triet Tran, "Advanced Augmentation and Ensemble Approaches for Classifying Long-Tailed Multi-Label Chest X-Rays," Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 2729-2738, https://openaccess.thecvf.com/content/ICCV2023W/CVAMD/html/Nguyen-Mau_Advanced_Augmentation_and_Ensemble_Approaches_for_Classifying_Long-Tailed_Multi-Label_Chest_ICCVW_2023_paper.html. Changhyun Kim, Giyeol Kim, Sooyoung Yang, Hyunsu Kim, Sangyool Lee, and Hansu Cho, "Chest X-Ray Feature Pyramid Sum Model with Diseased Area Data Augmentation Method," Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 2757-2766, https://openaccess.thecvf.com/content/ICCV2023W/CVAMD/html/Kim_Chest_X-Ray_Feature_Pyramid_Sum_Model_with_Diseased_Area_Data_ICCVW_2023_paper.html

8 Grégoire Montavon, Geneviève B. Orr, and Klaus-Robert Müller, Neural Networks: Tricks of the Trade, 2nd edition, Springer, 2012.

9 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, "Dropout: A Simple Way to Prevent Neural Networks from Overfitting," Journal of Machine Learning Research, Vol. 15, No. 56, 2014, pp. 1929−1958, https://jmlr.org/papers/v15/srivastava14a.html

10 Max Kuhn and Kjell Johnson, Applied Predictive Modeling, Springer, 2016.

11 Rahul Parhi and Robert D. Nowak, "Deep Learning Meets Sparse Regularization: A Signal Processing Perspective," IEEE Signal Processing Magazine, Vol. 40, No. 6, 2023, pp. 63-74, https://arxiv.org/abs/2301.09554

12 Stephen Hanson and Lorien Pratt, "Comparing Biases for Minimal Network Construction with Back-Propagation," Advances in Neural Information Processing Systems 1, 1988, pp. 177-185, https://proceedings.neurips.cc/paper/1988/file/1c9ac0159c94d8d0cbedc973445af2da-Paper.pdf

13 David P. Helmbold, Philip M. Long, "Surprising properties of dropout in deep networks," Journal of Machine Learning Research, Vol. 18, No. 200, 2018, pp. 1−28, https://jmlr.org/papers/v18/16-549.html

14 Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse, "Three Mechanisms of Weight Decay Regularization," International Conference on Learning Representations (ICLR) 2019, https://arxiv.org/abs/1810.12281

15 David P. Helmbold and Philip M. Long, "Fundamental Differences between Dropout and Weight Decay in Deep Networks," 2017, https://arxiv.org/abs/1602.04484v3

16 Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016, https://www.deeplearningbook.org/
