Revision as of 05:58, 10 June 2025
Regularization
Regularization is a technique in machine learning used to prevent overfitting by adding extra constraints or penalties to a model during training.
Why Regularization is Important
Overfitting happens when a model learns noise and incidental details from the training data, harming its ability to generalize to new data. Regularization discourages overly complex models by penalizing large or unnecessary model parameters.
Common Types of Regularization
1. L1 Regularization (Lasso)
- Adds the sum of the absolute values of coefficients as a penalty to the loss function.
- Encourages sparsity, meaning it can reduce some feature weights to zero, effectively performing feature selection.
- Loss function example: Loss = MSE + λ Σ |w_i|
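A minimal numeric sketch of the L1 penalty (the weight vector, data-fit loss, and λ below are made-up illustrative values, not from any real model):

```python
import numpy as np

# Illustrative values only: a hypothetical weight vector, data loss, and lambda.
w = np.array([0.5, -1.2, 0.0, 2.0])
mse = 0.8    # assume this is the unregularized data-fit term
lam = 0.1    # regularization strength (lambda)

l1_penalty = lam * np.sum(np.abs(w))   # lambda * sum(|w_i|)
loss = mse + l1_penalty
print(round(loss, 2))   # 0.8 + 0.1 * 3.7 = 1.17
```

Note that the penalty grows linearly with each weight's magnitude, which is what makes driving a weight exactly to zero worthwhile for the optimizer.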
2. L2 Regularization (Ridge)
- Adds the sum of squared coefficients as a penalty.
- Penalizes large weights but does not force them to zero.
- Encourages smaller, more evenly distributed weights.
- Loss function example: Loss = MSE + λ Σ w_i^2
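A small numeric sketch of why squaring favors evenly distributed weights (the two vectors are chosen purely for illustration): both have the same sum of absolute values, but the concentrated one pays a much larger L2 penalty.

```python
import numpy as np

# Illustrative weight vectors with equal L1 norm (sum of absolute values = 3).
concentrated = np.array([3.0, 0.0, 0.0])
spread_out = np.array([1.0, 1.0, 1.0])

def l2_penalty(w):
    return np.sum(w ** 2)   # sum of squared coefficients

print(l2_penalty(concentrated))  # 9.0
print(l2_penalty(spread_out))    # 3.0
```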
3. Elastic Net
- Combines L1 and L2 penalties to balance sparsity and weight shrinkage.
- Useful when many correlated features exist.
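One common way to parameterize the combination is with a mixing weight between the two penalties; the `alpha` parameter, the helper name, and the numeric values below are assumptions for illustration only:

```python
import numpy as np

def elastic_net_penalty(w, lam=0.1, alpha=0.5):
    """Hypothetical helper: alpha mixes the L1 and L2 terms, lam scales the total."""
    l1 = np.sum(np.abs(w))
    l2 = np.sum(w ** 2)
    return lam * (alpha * l1 + (1.0 - alpha) * l2)

w = np.array([0.5, -1.2, 0.0, 2.0])
print(round(elastic_net_penalty(w), 4))
```

With `alpha = 1` this reduces to pure L1, and with `alpha = 0` to pure L2, so the mix can be tuned to the data.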
How Regularization Works
By adding a penalty term, the model's objective function becomes:
Loss = Original Loss + λ × Penalty
where λ (lambda) controls the strength of regularization; higher values increase the penalty.
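To see λ's effect concretely, here is a minimal sketch on synthetic data (the dataset and "true" coefficients are invented for illustration) using the closed-form L2-regularized least-squares solution; the coefficient norm shrinks as λ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])          # made-up ground-truth weights
y = X @ true_w + 0.1 * rng.normal(size=50)   # targets with a little noise

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in (0.0, 1.0, 100.0):
    w = ridge_fit(X, y, lam)
    print(f"lambda={lam}: ||w|| = {np.linalg.norm(w):.3f}")
```

At λ = 0 the fit is ordinary least squares; as λ increases, the penalty dominates and pulls every coefficient toward zero.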
Benefits of Regularization
- Reduces overfitting.
- Improves model generalization on unseen data.
- Can perform feature selection (especially L1).
- Helps in models with many features or noisy data.
Example
In linear regression without regularization, the model might fit the training data perfectly but fail on test data (overfitting). Adding L2 regularization shrinks coefficients, leading to a simpler model that generalizes better.
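A runnable sketch of this scenario, assuming synthetic sine data and a deliberately over-flexible degree-9 polynomial model (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 12)
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=x.size)

X = np.vander(x, 10)   # degree-9 polynomial features: very flexible for 12 points

def fit(X, y, lam):
    # Least squares with an optional L2 penalty (lam = 0 gives plain regression).
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_plain = fit(X, y, 0.0)   # unregularized: coefficients grow large chasing noise
w_ridge = fit(X, y, 0.1)   # L2-regularized: coefficients stay small
print(f"plain ||w|| = {np.linalg.norm(w_plain):.1f}")
print(f"ridge ||w|| = {np.linalg.norm(w_ridge):.1f}")
```

The shrunken coefficients trace a smoother curve through the points, which is what tends to transfer to unseen data.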