<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://qbase.texpertssolutions.com/index.php?action=history&amp;feed=atom&amp;title=Normalization_%28Machine_Learning%29</id>
	<title>Normalization (Machine Learning) - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://qbase.texpertssolutions.com/index.php?action=history&amp;feed=atom&amp;title=Normalization_%28Machine_Learning%29"/>
	<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Normalization_(Machine_Learning)&amp;action=history"/>
	<updated>2026-05-15T12:03:02Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://qbase.texpertssolutions.com/index.php?title=Normalization_(Machine_Learning)&amp;diff=233&amp;oldid=prev</id>
		<title>Thakshashila: Created page with &quot;= Normalization (Machine Learning) =  &#039;&#039;&#039;Normalization&#039;&#039;&#039; in machine learning is a data preprocessing technique used to scale input features so they fall within a similar range, typically between 0 and 1. This helps improve model performance, especially for algorithms sensitive to the scale of data.  == Why Normalize Data? ==  Some machine learning algorithms (e.g., K-Nearest Neighbors, Gradient Descent-based models, Neural Networks) perform better when input features ar...&quot;</title>
		<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Normalization_(Machine_Learning)&amp;diff=233&amp;oldid=prev"/>
		<updated>2025-06-10T06:34:09Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;= Normalization (Machine Learning) =  &amp;#039;&amp;#039;&amp;#039;Normalization&amp;#039;&amp;#039;&amp;#039; in machine learning is a data preprocessing technique used to scale input features so they fall within a similar range, typically between 0 and 1. This helps improve model performance, especially for algorithms sensitive to the scale of data.  == Why Normalize Data? ==  Some machine learning algorithms (e.g., K-Nearest Neighbors, Gradient Descent-based models, Neural Networks) perform better when input features ar...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;= Normalization (Machine Learning) =&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Normalization&amp;#039;&amp;#039;&amp;#039; in machine learning is a data preprocessing technique used to scale input features so they fall within a similar range, typically between 0 and 1. This helps improve model performance, especially for algorithms sensitive to the scale of data.&lt;br /&gt;
&lt;br /&gt;
== Why Normalize Data? ==&lt;br /&gt;
&lt;br /&gt;
Some machine learning algorithms (e.g., K-Nearest Neighbors, Gradient Descent-based models, Neural Networks) perform better when input features are on a similar scale. Without normalization, features with larger numeric ranges may dominate others, leading to biased results.&lt;br /&gt;
&lt;br /&gt;
== Common Normalization Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== 1. Min-Max Normalization ===&lt;br /&gt;
&lt;br /&gt;
Scales features to a fixed range, usually [0, 1].&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
x&amp;#039; = \frac{x - x_{\text{min}}}{x_{\text{max}} - x_{\text{min}}}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Works best when the data has known, fixed bounds.&lt;br /&gt;
* Sensitive to outliers.&lt;br /&gt;
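&lt;br /&gt;
A minimal NumPy sketch of min-max scaling (the feature values below are made up for illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Illustrative feature column.&lt;br /&gt;
x = np.array([2.0, 5.0, 10.0, 4.0])&lt;br /&gt;
&lt;br /&gt;
# Map the smallest value to 0 and the largest to 1.&lt;br /&gt;
x_scaled = (x - x.min()) / (x.max() - x.min())&lt;br /&gt;
&lt;br /&gt;
print(x_scaled)  # 0.0, 0.375, 1.0, 0.25&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;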
&lt;br /&gt;
=== 2. Z-score Normalization (Standardization) ===&lt;br /&gt;
&lt;br /&gt;
Rescales the data to have zero mean and unit variance.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
x&amp;#039; = \frac{x - \mu}{\sigma}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where:  &lt;br /&gt;
* &amp;lt;math&amp;gt;\mu&amp;lt;/math&amp;gt; = mean of the feature  &lt;br /&gt;
* &amp;lt;math&amp;gt;\sigma&amp;lt;/math&amp;gt; = standard deviation&lt;br /&gt;
&lt;br /&gt;
* Useful for algorithms that assume a Gaussian distribution.&lt;br /&gt;
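&lt;br /&gt;
A minimal NumPy sketch of standardization (illustrative values; note that np.std computes the population standard deviation by default):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Illustrative feature column.&lt;br /&gt;
x = np.array([2.0, 4.0, 6.0, 8.0])&lt;br /&gt;
&lt;br /&gt;
# Subtract the mean, divide by the standard deviation.&lt;br /&gt;
x_standardized = (x - x.mean()) / x.std()&lt;br /&gt;
&lt;br /&gt;
print(x_standardized.mean(), x_standardized.std())  # ~0.0, ~1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;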
&lt;br /&gt;
=== 3. Max Abs Scaling ===&lt;br /&gt;
&lt;br /&gt;
Scales data by dividing by the maximum absolute value:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
x&amp;#039; = \frac{x}{\max(|x|)}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Preserves zero entries in sparse data.&lt;br /&gt;
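&lt;br /&gt;
A minimal NumPy sketch of max-abs scaling (illustrative values, including a negative entry and a zero that stays exactly zero):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Illustrative column with positive, negative, and zero entries.&lt;br /&gt;
x = np.array([0.0, -3.0, 1.5, 6.0])&lt;br /&gt;
&lt;br /&gt;
# Divide by the maximum absolute value; zeros are preserved.&lt;br /&gt;
x_scaled = x / np.abs(x).max()&lt;br /&gt;
&lt;br /&gt;
print(x_scaled)  # 0.0, -0.5, 0.25, 1.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;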
&lt;br /&gt;
=== 4. Robust Scaling ===&lt;br /&gt;
&lt;br /&gt;
Uses median and interquartile range (IQR) to scale:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
x&amp;#039; = \frac{x - \text{median}}{\text{IQR}}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Less sensitive to outliers.&lt;br /&gt;
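&lt;br /&gt;
A minimal NumPy sketch of robust scaling (illustrative values with one deliberate outlier):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Illustrative column containing one large outlier.&lt;br /&gt;
x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])&lt;br /&gt;
&lt;br /&gt;
# Center on the median and divide by the interquartile range.&lt;br /&gt;
median = np.median(x)&lt;br /&gt;
q1, q3 = np.percentile(x, [25, 75])&lt;br /&gt;
x_scaled = (x - median) / (q3 - q1)&lt;br /&gt;
&lt;br /&gt;
print(x_scaled)  # -1.0, -0.5, 0.0, 0.5, 48.5 (the bulk is unaffected by the outlier)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;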
&lt;br /&gt;
== When to Use Normalization ==&lt;br /&gt;
&lt;br /&gt;
Use normalization when:&lt;br /&gt;
* Input features are measured in different units or ranges.&lt;br /&gt;
* You use distance-based algorithms (e.g., k-NN, SVM).&lt;br /&gt;
* You&amp;#039;re training neural networks using gradient descent.&lt;br /&gt;
&lt;br /&gt;
== When Not to Normalize ==&lt;br /&gt;
&lt;br /&gt;
* When using tree-based algorithms like Decision Trees or Random Forests (these are insensitive to feature scale).&lt;br /&gt;
* When your features are already on the same scale or naturally bounded.&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
If feature A ranges from 1 to 1000 and feature B from 0 to 1:&lt;br /&gt;
* Normalizing both features ensures they contribute on a comparable scale during training (see the sketch below).&lt;br /&gt;
* Without normalization, feature A may dominate due to its larger range.&lt;br /&gt;
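&lt;br /&gt;
A short sketch of this scenario, assuming scikit-learn is available (the sample values are invented for illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from sklearn.preprocessing import MinMaxScaler&lt;br /&gt;
&lt;br /&gt;
# Columns: feature A (range 1 to 1000) and feature B (range 0 to 1).&lt;br /&gt;
X = np.array([[1.0, 0.2],&lt;br /&gt;
              [500.0, 0.5],&lt;br /&gt;
              [1000.0, 0.9]])&lt;br /&gt;
&lt;br /&gt;
# Each column is scaled to [0, 1] independently.&lt;br /&gt;
X_scaled = MinMaxScaler().fit_transform(X)&lt;br /&gt;
print(X_scaled)  # both columns now span the same [0, 1] range&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;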
&lt;br /&gt;
== Related Pages ==&lt;br /&gt;
&lt;br /&gt;
* [[Feature Scaling]]  &lt;br /&gt;
* [[Standardization]]  &lt;br /&gt;
* [[Preprocessing (Machine Learning)]]  &lt;br /&gt;
* [[K-Nearest Neighbors]]  &lt;br /&gt;
* [[Gradient Descent]]&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Thakshashila</name></author>
	</entry>
</feed>