Specificity

Specificity, also known as the True Negative Rate (TNR), is a performance metric in binary classification tasks. It measures the proportion of actual negative instances that are correctly identified by the model.

Definition

Specificity = TN / (TN + FP)

Where:

  • TN = True Negatives – actual negatives correctly predicted
  • FP = False Positives – actual negatives incorrectly predicted as positives

Specificity answers the question: "Out of all real negative cases, how many did the model correctly classify as negative?"
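The formula translates directly into code. A minimal sketch in Python (the function name `specificity` and the sample counts are illustrative):

```python
def specificity(tn: int, fp: int) -> float:
    """True Negative Rate: fraction of actual negatives predicted as negative."""
    return tn / (tn + fp)

# Illustrative counts: 90 actual negatives predicted negative, 10 predicted positive
print(specificity(90, 10))  # 0.9
```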

Alternate Names

  • True Negative Rate (TNR)
  • Selectivity

Simple Example

Suppose a test is used to detect a rare disease. Out of 1,000 healthy people:

  • 950 are correctly identified as healthy → TN = 950
  • 50 are incorrectly diagnosed with the disease → FP = 50

Specificity = 950 / (950 + 50) = 950 / 1000 = 0.95 = 95%

This means the test correctly identifies 95% of healthy people.
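In practice, TN and FP are usually read off a confusion matrix rather than counted by hand. A sketch assuming scikit-learn is available (the label arrays below are made up for illustration):

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 0, 1, 1, 1]  # 0 = healthy (negative), 1 = diseased
y_pred = [0, 0, 1, 0, 0, 1, 0, 1]

# For binary labels {0, 1}, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)
print(specificity)  # 4 / (4 + 1) = 0.8
```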

Importance of Specificity

Specificity is vital when false positives can cause unnecessary stress, cost, or risk.

Real-World Scenarios

  • Medical Testing: Avoiding false diagnoses of a disease (e.g., not telling a healthy person they are sick).
  • Spam Filters: Ensuring genuine emails are not classified as spam.
  • Fraud Detection: Not labeling legitimate transactions as fraudulent.

Specificity vs Sensitivity

These are complementary metrics:

  • Sensitivity = ability to correctly detect positives (True Positive Rate)
  • Specificity = ability to correctly identify negatives (True Negative Rate)

Together, they form a balanced evaluation of a model, especially in medical or safety-critical applications.
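Both metrics come from the same four confusion counts, so they are naturally computed side by side. A pure-Python sketch with illustrative counts:

```python
# Illustrative confusion counts for a binary classifier
tp, fn = 80, 20   # actual positives: detected vs. missed
tn, fp = 90, 10   # actual negatives: correctly identified vs. false alarms

sensitivity = tp / (tp + fn)  # True Positive Rate
specificity = tn / (tn + fp)  # True Negative Rate

print(sensitivity, specificity)  # 0.8 0.9
```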

Combined Evaluation: ROC Curve

Receiver Operating Characteristic (ROC) curves plot:

  • Sensitivity (True Positive Rate) vs.
  • 1 − Specificity (False Positive Rate)

This helps visualize the trade-off between catching positives and avoiding false alarms.
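As a sketch (assuming scikit-learn; the score values are made up), `roc_curve` reports exactly these two axes, with the false positive rate equal to 1 − specificity at each classification threshold:

```python
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 0, 1, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]  # model's positive-class scores

# fpr[i] = 1 - specificity at thresholds[i]; tpr[i] = sensitivity
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(roc_auc_score(y_true, y_score))  # 1.0 here: the scores separate the classes perfectly
```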
