<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://qbase.texpertssolutions.com/index.php?action=history&amp;feed=atom&amp;title=Evaluation_Metrics</id>
	<title>Evaluation Metrics - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://qbase.texpertssolutions.com/index.php?action=history&amp;feed=atom&amp;title=Evaluation_Metrics"/>
	<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Evaluation_Metrics&amp;action=history"/>
	<updated>2026-05-15T09:17:45Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://qbase.texpertssolutions.com/index.php?title=Evaluation_Metrics&amp;diff=204&amp;oldid=prev</id>
		<title>Thakshashila: /* SEO Keywords */</title>
		<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Evaluation_Metrics&amp;diff=204&amp;oldid=prev"/>
		<updated>2025-06-10T06:20:27Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;SEO Keywords&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 06:20, 10 June 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l71&quot;&gt;Line 71:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 71:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;evaluation metrics in machine learning, classification metrics, regression evaluation metrics, precision recall f1 score, roc auc explained, mean squared error, choosing evaluation metrics, model performance measures&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;evaluation metrics in machine learning, classification metrics, regression evaluation metrics, precision recall f1 score, roc auc explained, mean squared error, choosing evaluation metrics, model performance measures&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Artificial Intelligence]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Thakshashila</name></author>
	</entry>
	<entry>
		<id>https://qbase.texpertssolutions.com/index.php?title=Evaluation_Metrics&amp;diff=179&amp;oldid=prev</id>
		<title>Thakshashila: Created page with &quot;= Evaluation Metrics =  &#039;&#039;&#039;Evaluation Metrics&#039;&#039;&#039; are quantitative measures used to assess the performance of machine learning models. Choosing the right metric is essential for understanding how well a model performs, especially in classification and regression problems.  == Why Are Evaluation Metrics Important? ==  * Provide objective criteria to compare different models. * Help detect issues like overfitting or underfitting. * Guide model improvement and selection. * R...&quot;</title>
		<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Evaluation_Metrics&amp;diff=179&amp;oldid=prev"/>
		<updated>2025-06-10T05:46:52Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;= Evaluation Metrics =  &amp;#039;&amp;#039;&amp;#039;Evaluation Metrics&amp;#039;&amp;#039;&amp;#039; are quantitative measures used to assess the performance of machine learning models. Choosing the right metric is essential for understanding how well a model performs, especially in classification and regression problems.  == Why Are Evaluation Metrics Important? ==  * Provide objective criteria to compare different models. * Help detect issues like overfitting or underfitting. * Guide model improvement and selection. * R...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;= Evaluation Metrics =&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Evaluation Metrics&amp;#039;&amp;#039;&amp;#039; are quantitative measures used to assess the performance of machine learning models. Choosing the right metric is essential for understanding how well a model performs, especially in classification and regression problems.&lt;br /&gt;
&lt;br /&gt;
== Why Are Evaluation Metrics Important? ==&lt;br /&gt;
&lt;br /&gt;
* Provide objective criteria to compare different models.&lt;br /&gt;
* Help detect issues like overfitting or underfitting.&lt;br /&gt;
* Guide model improvement and selection.&lt;br /&gt;
* Reflect the business or real-world importance of model predictions.&lt;br /&gt;
&lt;br /&gt;
== Types of Evaluation Metrics ==&lt;br /&gt;
&lt;br /&gt;
=== 1. Classification Metrics ===&lt;br /&gt;
&lt;br /&gt;
These metrics evaluate models that predict discrete categories (classes).&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Accuracy:&amp;#039;&amp;#039;&amp;#039; Proportion of correct predictions over total predictions.  &lt;br /&gt;
  :&amp;lt;math&amp;gt; \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Precision:&amp;#039;&amp;#039;&amp;#039; Proportion of true positives among all predicted positives.  &lt;br /&gt;
  :&amp;lt;math&amp;gt; \text{Precision} = \frac{TP}{TP + FP} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Recall (Sensitivity):&amp;#039;&amp;#039;&amp;#039; Proportion of true positives among all actual positives.  &lt;br /&gt;
  :&amp;lt;math&amp;gt; \text{Recall} = \frac{TP}{TP + FN} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;F1 Score:&amp;#039;&amp;#039;&amp;#039; Harmonic mean of precision and recall, balancing both.  &lt;br /&gt;
  :&amp;lt;math&amp;gt; F1 = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Specificity:&amp;#039;&amp;#039;&amp;#039; Proportion of true negatives among all actual negatives.  &lt;br /&gt;
  :&amp;lt;math&amp;gt; \text{Specificity} = \frac{TN}{TN + FP} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;ROC AUC:&amp;#039;&amp;#039;&amp;#039; Area under the Receiver Operating Characteristic curve, which plots the true positive rate against the false positive rate across classification thresholds.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Precision-Recall AUC (AUPRC):&amp;#039;&amp;#039;&amp;#039; Area under the Precision-Recall curve, especially useful for imbalanced data.&lt;br /&gt;
&lt;br /&gt;
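The classification formulas above can be sketched in plain Python. The labels below are made-up illustrative values, not data from this page:&lt;br /&gt;

```python
# Illustrative sketch: classification metrics computed from raw labels.
# y_true and y_pred are invented example values.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Confusion-matrix counts.
tp = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 1)
tn = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 0)
fp = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 1)
fn = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 0)

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
specificity = tn / (tn + fp)
```

In practice a library such as scikit-learn provides these metrics directly; the hand-rolled version above only mirrors the formulas given in this section.&lt;br /&gt;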
=== 2. Regression Metrics ===&lt;br /&gt;
&lt;br /&gt;
Used for models predicting continuous values.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Mean Absolute Error (MAE):&amp;#039;&amp;#039;&amp;#039; Average absolute difference between predicted and actual values.  &lt;br /&gt;
  :&amp;lt;math&amp;gt; MAE = \frac{1}{n} \sum_{i=1}^n | y_i - \hat{y}_i | &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Mean Squared Error (MSE):&amp;#039;&amp;#039;&amp;#039; Average squared difference, penalizes larger errors more.  &lt;br /&gt;
  :&amp;lt;math&amp;gt; MSE = \frac{1}{n} \sum_{i=1}^n ( y_i - \hat{y}_i )^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Root Mean Squared Error (RMSE):&amp;#039;&amp;#039;&amp;#039; Square root of MSE, expressed in the same units as the target variable.  &lt;br /&gt;
  :&amp;lt;math&amp;gt; RMSE = \sqrt{MSE} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;R-squared (R²):&amp;#039;&amp;#039;&amp;#039; Proportion of variance explained by the model.  &lt;br /&gt;
  :&amp;lt;math&amp;gt; R^2 = 1 - \frac{\sum (y_i - \hat{y}_i)^2}{\sum (y_i - \bar{y})^2} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
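The regression formulas above can likewise be sketched in plain Python; the values below are made-up for illustration:&lt;br /&gt;

```python
import math

# Illustrative sketch: regression metrics on invented example values.
y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 3.0, 8.0]
n = len(y_true)

# MAE: average absolute error.
mae = sum(abs(y - p) for y, p in zip(y_true, y_pred)) / n

# MSE and RMSE: squared errors penalize large deviations more.
mse = sum((y - p) ** 2 for y, p in zip(y_true, y_pred)) / n
rmse = math.sqrt(mse)

# R-squared: 1 minus residual sum of squares over total sum of squares.
mean_y = sum(y_true) / n
ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
ss_tot = sum((y - mean_y) ** 2 for y in y_true)
r2 = 1 - ss_res / ss_tot
```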
== Choosing the Right Metric ==&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Imbalanced Classification:&amp;#039;&amp;#039;&amp;#039; Use Precision, Recall, F1 Score, or AUPRC instead of accuracy.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Cost-Sensitive Tasks:&amp;#039;&amp;#039;&amp;#039; Choose metrics that weight false positives and false negatives according to their real-world costs.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Regression:&amp;#039;&amp;#039;&amp;#039; Use MAE when all errors matter equally; prefer RMSE or MSE when large errors should be penalized more heavily.&lt;br /&gt;
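The pitfall behind the imbalanced-classification advice can be shown with a small made-up sketch: a trivial model that always predicts the majority class earns a high accuracy score, while recall exposes the failure:&lt;br /&gt;

```python
# Illustrative sketch: 5 positives out of 100 samples, and a
# degenerate model that predicts negative for everything.
y_true = [1] * 5 + [0] * 95
y_pred = [0] * 100

correct = sum(1 for y, p in zip(y_true, y_pred) if y == p)
accuracy = correct / len(y_true)   # high, despite a useless model

tp = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 1)
fn = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 0)
recall = tp / (tp + fn)            # zero: no positives are found
```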
&lt;br /&gt;
== Related Pages ==&lt;br /&gt;
&lt;br /&gt;
* [[Confusion Matrix]]&lt;br /&gt;
* [[Precision]]&lt;br /&gt;
* [[Recall]]&lt;br /&gt;
* [[F1 Score]]&lt;br /&gt;
* [[ROC Curve]]&lt;br /&gt;
* [[AUC Score]]&lt;br /&gt;
* [[Imbalanced Data]]&lt;br /&gt;
* [[Cost-Sensitive Learning]]&lt;br /&gt;
&lt;br /&gt;
== SEO Keywords ==&lt;br /&gt;
&lt;br /&gt;
evaluation metrics in machine learning, classification metrics, regression evaluation metrics, precision recall f1 score, roc auc explained, mean squared error, choosing evaluation metrics, model performance measures&lt;/div&gt;</summary>
		<author><name>Thakshashila</name></author>
	</entry>
</feed>