= ROC Curve =

The '''ROC Curve''' ('''Receiver Operating Characteristic Curve''') is a graphical tool used to evaluate the performance of binary classification models. It plots the '''True Positive Rate (TPR)''' against the '''False Positive Rate (FPR)''' at various threshold settings.

== Purpose ==

The ROC Curve shows the trade-off between sensitivity (recall) and specificity. It helps assess how well a classifier can distinguish between two classes.

== Definitions ==

* '''True Positive Rate (TPR)''' (sensitivity, recall): <math>TPR = \frac{TP}{TP + FN}</math>
* '''False Positive Rate (FPR)''': <math>FPR = \frac{FP}{FP + TN}</math>
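An ROC curve is traced by sweeping a decision threshold over the classifier's scores and computing (FPR, TPR) at each threshold. A minimal self-contained sketch (the toy labels and scores below are illustrative assumptions, not data from this article):

```python
def roc_points(labels, scores):
    """Return (FPR, TPR) pairs, one per candidate threshold."""
    # Each unique score acts as a threshold; sweep from high to low
    # so the curve moves from (0, 0) toward (1, 1).
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)             # total actual positives (TP + FN)
    neg = len(labels) - pos       # total actual negatives (FP + TN)
    points = []
    for t in thresholds:
        # Predict positive whenever the score reaches the threshold.
        tp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 1)
        fp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))  # (FPR, TPR)
    return points

# Hypothetical labels and classifier scores for illustration:
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_points(labels, scores))
# → [(0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (1.0, 1.0)]
```

Plotting these points with FPR on the x-axis and TPR on the y-axis yields the ROC curve; a perfect classifier passes through (0, 1).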
== Limitations ==

* Can be '''overly optimistic''' on highly imbalanced data.
* In such cases, use the [[Precision-Recall Curve]].
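The optimism on imbalanced data comes from the FPR denominator: when true negatives dominate, even many false alarms barely move the FPR, while precision collapses. A small arithmetic sketch with hypothetical counts (assumed for illustration):

```python
# Hypothetical confusion-matrix counts for a rare-positive problem:
tp = 90          # positives correctly flagged
fn = 10          # positives missed
fp = 1_000       # false alarms
tn = 999_000     # negatives correctly rejected

fpr = fp / (fp + tn)        # x-axis of the ROC curve
tpr = tp / (tp + fn)        # y-axis of the ROC curve
precision = tp / (tp + fp)  # what the Precision-Recall curve plots

print(f"FPR = {fpr:.4f}")              # 0.0010: the ROC point looks excellent
print(f"TPR = {tpr:.2f}")              # 0.90
print(f"Precision = {precision:.3f}")  # ~0.083: most alarms are false
```

Here the ROC point (FPR 0.001, TPR 0.90) looks near-perfect, yet fewer than one in ten flagged items is actually positive, which the Precision-Recall curve exposes directly.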