By increasing bias and decreasing variance, regularization mitigates model overfitting. Overfitting occurs when error on training data continues to decrease while error on testing data stops decreasing or begins to increase.3 In other words, overfitting describes models with low bias and high variance. If regularization introduces too much bias, however, a model will underfit.
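The pattern described above can be made concrete with a minimal sketch, assuming scikit-learn is available and using an arbitrary synthetic dataset: as model flexibility grows, training error keeps falling while testing error stops improving. The polynomial degrees and noise level here are illustrative choices, not values from the text.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Small noisy training set and a larger held-out test set (synthetic).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X).ravel() + rng.normal(scale=0.2, size=30)
X_test = rng.uniform(-1, 1, size=(200, 1))
y_test = np.sin(3 * X_test).ravel() + rng.normal(scale=0.2, size=200)

# Increasing polynomial degree increases flexibility: training error
# keeps shrinking, but test error eventually plateaus or rises.
for degree in (1, 3, 9, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    train_err = mean_squared_error(y, model.predict(X))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```

At the highest degrees, the widening gap between training and testing error is the overfitting signature the text describes.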
Despite its name, underfitting does not denote overfitting’s opposite. Rather, underfitting describes models characterized by high bias and high variance. An underfitted model produces unsatisfactorily erroneous predictions during both training and testing, often as a result of insufficient training data or too few parameters.
Regularization itself, however, can also lead to model underfitting. If too much bias is introduced through regularization, model variance can stop decreasing and may even increase. Regularization is especially prone to this effect on simple models, that is, models with few parameters. In choosing the type and degree of regularization to implement, then, one must weigh a model’s complexity, the nature of its dataset, and so forth.4
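One way to see this tradeoff is a sketch that sweeps regularization strength, assuming scikit-learn's Ridge (an L2 penalty) on the same kind of synthetic data as above; the alpha values are arbitrary. Too little regularization leaves the flexible model free to overfit, while too much biases it into underfitting, with high error on both splits.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# Synthetic training and test data for illustration.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X).ravel() + rng.normal(scale=0.2, size=40)
X_test = rng.uniform(-1, 1, size=(200, 1))
y_test = np.sin(3 * X_test).ravel() + rng.normal(scale=0.2, size=200)

# A deliberately flexible model (degree-12 polynomial) whose L2 penalty
# strength (alpha) controls the bias introduced by regularization.
for alpha in (1e-6, 1e-2, 1.0, 100.0):
    model = make_pipeline(PolynomialFeatures(degree=12), Ridge(alpha=alpha))
    model.fit(X, y)
    train_err = mean_squared_error(y, model.predict(X))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"alpha={alpha:>8}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```

At the smallest alpha, training error is low while test error is high (overfitting); at the largest, both errors are high (underfitting). Intermediate values illustrate why the degree of regularization must be tuned to the model and dataset at hand.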