<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://qbase.texpertssolutions.com/index.php?action=history&amp;feed=atom&amp;title=Precision-Recall_Curve</id>
	<title>Precision-Recall Curve - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://qbase.texpertssolutions.com/index.php?action=history&amp;feed=atom&amp;title=Precision-Recall_Curve"/>
	<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Precision-Recall_Curve&amp;action=history"/>
	<updated>2026-05-14T15:36:35Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://qbase.texpertssolutions.com/index.php?title=Precision-Recall_Curve&amp;diff=220&amp;oldid=prev</id>
		<title>Thakshashila: /* SEO Keywords */</title>
		<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Precision-Recall_Curve&amp;diff=220&amp;oldid=prev"/>
		<updated>2025-06-10T06:24:07Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;SEO Keywords&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 06:24, 10 June 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l80&quot;&gt;Line 80:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 80:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;precision recall curve, pr curve machine learning, how to read precision recall curve, precision recall vs roc curve, imbalanced classification metrics, auc pr curve, precision recall tradeoff&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;precision recall curve, pr curve machine learning, how to read precision recall curve, precision recall vs roc curve, imbalanced classification metrics, auc pr curve, precision recall tradeoff&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Artificial Intelligence]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Thakshashila</name></author>
	</entry>
	<entry>
		<id>https://qbase.texpertssolutions.com/index.php?title=Precision-Recall_Curve&amp;diff=171&amp;oldid=prev</id>
		<title>Thakshashila: Created page with &quot;= Precision-Recall Curve =  The &#039;&#039;&#039;Precision-Recall Curve&#039;&#039;&#039; (PR Curve) is a graphical representation used to evaluate the performance of binary classification models, especially on &#039;&#039;&#039;imbalanced datasets&#039;&#039;&#039; where the positive class is rare.  It plots &#039;&#039;&#039;Precision&#039;&#039;&#039; (y-axis) against &#039;&#039;&#039;Recall&#039;&#039;&#039; (x-axis) for different classification thresholds.  == Why Use Precision-Recall Curve? ==  In many real-world problems like fraud detection, disease diagnosis, or spam filtering,...&quot;</title>
		<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Precision-Recall_Curve&amp;diff=171&amp;oldid=prev"/>
		<updated>2025-06-10T05:32:05Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;= Precision-Recall Curve =  The &amp;#039;&amp;#039;&amp;#039;Precision-Recall Curve&amp;#039;&amp;#039;&amp;#039; (PR Curve) is a graphical representation used to evaluate the performance of binary classification models, especially on &amp;#039;&amp;#039;&amp;#039;imbalanced datasets&amp;#039;&amp;#039;&amp;#039; where the positive class is rare.  It plots &amp;#039;&amp;#039;&amp;#039;Precision&amp;#039;&amp;#039;&amp;#039; (y-axis) against &amp;#039;&amp;#039;&amp;#039;Recall&amp;#039;&amp;#039;&amp;#039; (x-axis) for different classification thresholds.  == Why Use Precision-Recall Curve? ==  In many real-world problems like fraud detection, disease diagnosis, or spam filtering,...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;= Precision-Recall Curve =&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;Precision-Recall Curve&amp;#039;&amp;#039;&amp;#039; (PR Curve) is a graphical representation used to evaluate the performance of binary classification models, especially on &amp;#039;&amp;#039;&amp;#039;imbalanced datasets&amp;#039;&amp;#039;&amp;#039; where the positive class is rare.&lt;br /&gt;
&lt;br /&gt;
It plots &amp;#039;&amp;#039;&amp;#039;Precision&amp;#039;&amp;#039;&amp;#039; (y-axis) against &amp;#039;&amp;#039;&amp;#039;Recall&amp;#039;&amp;#039;&amp;#039; (x-axis) for different classification thresholds.&lt;br /&gt;
&lt;br /&gt;
== Why Use Precision-Recall Curve? ==&lt;br /&gt;
&lt;br /&gt;
In many real-world problems like fraud detection, disease diagnosis, or spam filtering, the positive class is much less frequent than the negative class. Metrics such as [[Accuracy]], and plots such as the [[ROC Curve]], can be misleading in such cases.&lt;br /&gt;
&lt;br /&gt;
The Precision-Recall curve focuses on the performance of the positive class, showing how precision and recall change as the classification threshold varies.&lt;br /&gt;
&lt;br /&gt;
== Definitions ==&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Precision&amp;#039;&amp;#039;&amp;#039; measures the proportion of correctly predicted positive observations to all predicted positives:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; \text{Precision} = \frac{TP}{TP + FP} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Recall&amp;#039;&amp;#039;&amp;#039; (Sensitivity) measures the proportion of correctly predicted positive observations to all actual positives:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; \text{Recall} = \frac{TP}{TP + FN} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where:&lt;br /&gt;
&lt;br /&gt;
* TP = True Positives  &lt;br /&gt;
* FP = False Positives  &lt;br /&gt;
* FN = False Negatives&lt;br /&gt;
&lt;br /&gt;
== How to Interpret the Curve ==&lt;br /&gt;
&lt;br /&gt;
* The top-right corner (Precision=1, Recall=1) represents perfect classification.&lt;br /&gt;
* A high area under the PR curve indicates both high precision and high recall.&lt;br /&gt;
* The curve helps select the best threshold by balancing false positives against false negatives.&lt;br /&gt;
&lt;br /&gt;
== Area Under the Precision-Recall Curve (AUPRC) ==&lt;br /&gt;
&lt;br /&gt;
Similar to ROC AUC, the &amp;#039;&amp;#039;&amp;#039;Area Under the Precision-Recall Curve&amp;#039;&amp;#039;&amp;#039; (AUPRC) summarizes the model’s ability to balance precision and recall.&lt;br /&gt;
&lt;br /&gt;
* Higher AUPRC means better model performance on the positive class.&lt;br /&gt;
* Unlike ROC AUC, AUPRC is more informative with highly skewed data.&lt;br /&gt;
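One common way to compute AUPRC is the step-wise "average precision" sum, AP = sum over ranked predictions of (precision at rank k) times (change in recall at rank k); this is a sketch under that definition, with invented scores and labels:

```python
# Hedged sketch: AUPRC via the average-precision sum.
# Scores and labels below are invented illustration data.
def average_precision(scores, labels):
    pairs = sorted(zip(scores, labels), reverse=True)  # rank by score, best first
    pos = sum(labels)                                  # total actual positives
    tp = 0
    ap = 0.0
    for rank, (score, y) in enumerate(pairs, start=1):
        if y == 1:
            tp += 1
            # precision at this rank times the recall step (1 / pos)
            ap += (tp / rank) * (1 / pos)
    return ap

print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1]))  # 29/36, about 0.806
```
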
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
Imagine a spam detection system:&lt;br /&gt;
&lt;br /&gt;
* At a low threshold, many emails are classified as spam (high recall), but many legitimate emails are incorrectly flagged (low precision).&lt;br /&gt;
* At a high threshold, only the most confident spam predictions are flagged (high precision), but some spam emails go undetected (low recall).&lt;br /&gt;
* The PR curve shows how precision and recall trade off as the threshold changes.&lt;br /&gt;
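This trade-off can be traced numerically by sweeping the threshold over a small set of scored examples; the scores and labels below are invented for illustration:

```python
# Hedged sketch: one point on the PR curve for a given decision threshold.
def pr_point(scores, labels, threshold):
    # Predicted positive = score at or above the threshold
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    pos = sum(labels)  # all actual positives
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / pos if pos else 0.0
    return precision, recall

scores = [0.95, 0.9, 0.8, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0, 1, 0]
for t in (0.25, 0.5, 0.75):
    p, r = pr_point(scores, labels, t)
    print(t, round(p, 3), round(r, 3))
# 0.25 0.571 1.0   -- low threshold: high recall, lower precision
# 0.5 0.6 0.75
# 0.75 0.667 0.5   -- high threshold: higher precision, lower recall
```
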
&lt;br /&gt;
== Precision-Recall Curve vs ROC Curve ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Aspect&lt;br /&gt;
! Precision-Recall Curve&lt;br /&gt;
! ROC Curve&lt;br /&gt;
|-&lt;br /&gt;
| Best used when&lt;br /&gt;
| Positive class is rare / imbalanced&lt;br /&gt;
| Classes are balanced or error costs are similar&lt;br /&gt;
|-&lt;br /&gt;
| Focus&lt;br /&gt;
| Performance on positive class&lt;br /&gt;
| Trade-off between TPR and FPR (sensitivity and specificity)&lt;br /&gt;
|-&lt;br /&gt;
| Interpretation&lt;br /&gt;
| Emphasizes the impact of false positives on precision&lt;br /&gt;
| Emphasizes the false positive rate relative to the number of negatives&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Related Pages ==&lt;br /&gt;
&lt;br /&gt;
* [[Precision]]&lt;br /&gt;
* [[Recall]]&lt;br /&gt;
* [[F1 Score]]&lt;br /&gt;
* [[ROC Curve]]&lt;br /&gt;
* [[Confusion Matrix]]&lt;br /&gt;
* [[Imbalanced Data]]&lt;br /&gt;
&lt;br /&gt;
== SEO Keywords ==&lt;br /&gt;
&lt;br /&gt;
precision recall curve, pr curve machine learning, how to read precision recall curve, precision recall vs roc curve, imbalanced classification metrics, auc pr curve, precision recall tradeoff&lt;/div&gt;</summary>
		<author><name>Thakshashila</name></author>
	</entry>
</feed>