<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://qbase.texpertssolutions.com/index.php?action=history&amp;feed=atom&amp;title=Regularization</id>
	<title>Regularization - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://qbase.texpertssolutions.com/index.php?action=history&amp;feed=atom&amp;title=Regularization"/>
	<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Regularization&amp;action=history"/>
	<updated>2026-05-15T11:19:14Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://qbase.texpertssolutions.com/index.php?title=Regularization&amp;diff=224&amp;oldid=prev</id>
		<title>Thakshashila: /* SEO Keywords */</title>
		<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Regularization&amp;diff=224&amp;oldid=prev"/>
		<updated>2025-06-10T06:24:48Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;SEO Keywords&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 06:24, 10 June 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l59&quot;&gt;Line 59:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 59:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;regularization in machine learning, l1 regularization, l2 regularization, preventing overfitting, ridge regression, lasso regression, elastic net, model generalization, penalty methods in ML&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;regularization in machine learning, l1 regularization, l2 regularization, preventing overfitting, ridge regression, lasso regression, elastic net, model generalization, penalty methods in ML&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Artificial Intelligence]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Thakshashila</name></author>
	</entry>
	<entry>
		<id>https://qbase.texpertssolutions.com/index.php?title=Regularization&amp;diff=189&amp;oldid=prev</id>
		<title>Thakshashila: Created page with &quot;= Regularization =  &#039;&#039;&#039;Regularization&#039;&#039;&#039; is a technique in machine learning used to prevent &#039;&#039;&#039;overfitting&#039;&#039;&#039; by adding extra constraints or penalties to a model during training.  == Why Regularization is Important ==  Overfitting happens when a model learns noise and details from the training data, harming its ability to generalize on new data. Regularization discourages overly complex models by penalizing large or unnecessary model parameters.  == Common Types of Regul...&quot;</title>
		<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Regularization&amp;diff=189&amp;oldid=prev"/>
		<updated>2025-06-10T05:58:51Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;= Regularization =  &amp;#039;&amp;#039;&amp;#039;Regularization&amp;#039;&amp;#039;&amp;#039; is a technique in machine learning used to prevent &amp;#039;&amp;#039;&amp;#039;overfitting&amp;#039;&amp;#039;&amp;#039; by adding extra constraints or penalties to a model during training.  == Why Regularization is Important ==  Overfitting happens when a model learns noise and details from the training data, harming its ability to generalize on new data. Regularization discourages overly complex models by penalizing large or unnecessary model parameters.  == Common Types of Regul...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;= Regularization =&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Regularization&amp;#039;&amp;#039;&amp;#039; is a technique in machine learning used to prevent &amp;#039;&amp;#039;&amp;#039;overfitting&amp;#039;&amp;#039;&amp;#039; by adding extra constraints or penalties to a model during training.&lt;br /&gt;
&lt;br /&gt;
== Why Regularization is Important ==&lt;br /&gt;
&lt;br /&gt;
Overfitting happens when a model learns noise and incidental detail from the training data, harming its ability to generalize to new data. Regularization discourages overly complex models by penalizing large or unnecessary model parameters.&lt;br /&gt;
&lt;br /&gt;
== Common Types of Regularization ==&lt;br /&gt;
&lt;br /&gt;
=== 1. L1 Regularization (Lasso) ===&lt;br /&gt;
&lt;br /&gt;
* Adds the sum of the absolute values of coefficients as a penalty to the loss function.  &lt;br /&gt;
* Encourages sparsity: it can drive some feature weights exactly to zero, effectively performing feature selection.  &lt;br /&gt;
* Loss function example:  &lt;br /&gt;
:&amp;lt;math&amp;gt; L = Loss_{original} + \lambda \sum_{i} |w_i| &amp;lt;/math&amp;gt;&lt;br /&gt;
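A minimal sketch of this L1 penalty in Python with NumPy (the weights, baseline loss, and lambda below are made-up illustration values, not from this page):&lt;br /&gt;

```python
import numpy as np

# Hypothetical model weights and a placeholder baseline loss value.
w = np.array([0.0, 2.5, -0.3, 0.0, 1.1])
original_loss = 4.2   # stands in for Loss_original
lam = 0.1             # regularization strength, lambda

# L1 (lasso) penalty: lambda times the sum of absolute weight values.
l1_penalty = lam * np.sum(np.abs(w))
total_loss = original_loss + l1_penalty
```

The two zero weights contribute nothing to the penalty, which is why L1 makes sparse solutions cheap.&lt;br /&gt;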
&lt;br /&gt;
=== 2. L2 Regularization (Ridge) ===&lt;br /&gt;
&lt;br /&gt;
* Adds the sum of squared coefficients as a penalty.  &lt;br /&gt;
* Penalizes large weights but does not force them to zero.  &lt;br /&gt;
* Encourages smaller, more evenly distributed weights.  &lt;br /&gt;
* Loss function example:  &lt;br /&gt;
:&amp;lt;math&amp;gt; L = Loss_{original} + \lambda \sum_{i} w_i^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
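A matching sketch of the L2 penalty (again with made-up weights, baseline loss, and lambda purely for illustration):&lt;br /&gt;

```python
import numpy as np

# Hypothetical model weights and a placeholder baseline loss value.
w = np.array([0.0, 2.5, -0.3, 0.0, 1.1])
original_loss = 4.2   # stands in for Loss_original
lam = 0.1             # regularization strength, lambda

# L2 (ridge) penalty: lambda times the sum of squared weight values.
l2_penalty = lam * np.sum(np.square(w))
total_loss = original_loss + l2_penalty
```

Squaring means large weights dominate the penalty, so L2 shrinks them hardest while leaving small weights nearly untouched.&lt;br /&gt;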
&lt;br /&gt;
=== 3. Elastic Net ===&lt;br /&gt;
&lt;br /&gt;
* Combines L1 and L2 penalties to balance sparsity and weight shrinkage.  &lt;br /&gt;
* Useful when many correlated features exist.&lt;br /&gt;
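One way to sketch the combined penalty in Python (the names alpha and l1_ratio mirror scikit-learn's ElasticNet convention; the example weights are invented):&lt;br /&gt;

```python
import numpy as np

# Hypothetical weights; alpha is the overall strength, l1_ratio the L1/L2 mix.
w = np.array([0.0, 2.5, -0.3, 0.0, 1.1])
alpha = 0.1
l1_ratio = 0.5        # 1.0 gives pure L1, 0.0 gives pure L2

# Elastic net penalty: a weighted combination of the L1 and L2 terms
# (the 0.5 factor on the L2 part follows scikit-learn's convention).
penalty = alpha * (l1_ratio * np.sum(np.abs(w))
                   + 0.5 * (1.0 - l1_ratio) * np.sum(np.square(w)))
```

Sliding l1_ratio between 0 and 1 trades weight shrinkage against sparsity without changing the overall penalty strength.&lt;br /&gt;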
&lt;br /&gt;
== How Regularization Works ==&lt;br /&gt;
&lt;br /&gt;
By adding penalty terms, the model’s objective function becomes:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; \text{Objective} = \text{Original Loss} + \lambda \times \text{Penalty} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;math&amp;gt; \lambda &amp;lt;/math&amp;gt; (lambda) controls the strength of regularization — higher values increase the penalty.&lt;br /&gt;
&lt;br /&gt;
== Benefits of Regularization ==&lt;br /&gt;
&lt;br /&gt;
* Reduces overfitting.  &lt;br /&gt;
* Improves model generalization on unseen data.  &lt;br /&gt;
* Can perform feature selection (especially L1).  &lt;br /&gt;
* Helps in models with many features or noisy data.&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
In linear regression without regularization, the model might fit the training data perfectly but fail on test data (overfitting). Adding L2 regularization shrinks coefficients, leading to a simpler model that generalizes better.&lt;br /&gt;
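This shrinkage effect can be sketched with the closed-form ridge solution on synthetic data (the data, seed, and lambda values here are invented for illustration):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends mainly on the first of three features.
X = rng.normal(size=(20, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=20)

def ridge_fit(X, y, lam):
    # Closed-form ridge estimate: solve (X'X + lam*I) w = X'y.
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

w_ols = ridge_fit(X, y, lam=0.0)     # ordinary least squares fit
w_ridge = ridge_fit(X, y, lam=10.0)  # coefficients shrunk toward zero
```

Increasing lam always reduces the L2 norm of the fitted weight vector, which is exactly the shrinkage described above.&lt;br /&gt;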
&lt;br /&gt;
== Related Pages ==&lt;br /&gt;
&lt;br /&gt;
* [[Overfitting]]  &lt;br /&gt;
* [[Underfitting]]  &lt;br /&gt;
* [[Bias-Variance Tradeoff]]  &lt;br /&gt;
* [[Hyperparameter Tuning]]  &lt;br /&gt;
* [[Model Evaluation Metrics]]&lt;br /&gt;
&lt;br /&gt;
== SEO Keywords ==&lt;br /&gt;
&lt;br /&gt;
regularization in machine learning, l1 regularization, l2 regularization, preventing overfitting, ridge regression, lasso regression, elastic net, model generalization, penalty methods in ML&lt;/div&gt;</summary>
		<author><name>Thakshashila</name></author>
	</entry>
</feed>