Regularization

Techniques that prevent overfitting by adding constraints or penalties during model training. Common methods include L1/L2 weight penalties (which discourage large weights), dropout (randomly disabling neurons during training), and early stopping (halting training when validation performance stops improving).
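As a minimal sketch of the weight-penalty idea, the example below adds an L2 term to a mean-squared-error loss. The data, weights, and the `lam` coefficient are illustrative, not from any particular library.

```python
import numpy as np

def l2_regularized_loss(w, X, y, lam):
    """MSE loss plus an L2 penalty: lam * sum of squared weights."""
    preds = X @ w
    mse = np.mean((preds - y) ** 2)
    penalty = lam * np.sum(w ** 2)  # grows with weight magnitude
    return mse + penalty

# Tiny illustrative dataset and weight vector.
X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([1.0, 2.0])
w = np.array([0.5, 0.5])

# With lam = 0 the penalty vanishes; a larger lam pushes the
# optimizer toward smaller weights, trading fit for simplicity.
print(l2_regularized_loss(w, X, y, 0.0))  # → 1.25
print(l2_regularized_loss(w, X, y, 0.1))  # → 1.3
```

During gradient-based training, the penalty's gradient `2 * lam * w` shrinks each weight toward zero at every step, which is why L2 regularization is also called weight decay.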

Related terms

Overfitting · Neural Network