Fairness Metrics in Model Evaluation Quiz

Test your understanding of fairness metrics in machine learning model evaluation with these beginner-friendly questions. Review key concepts such as disparate impact, demographic parity, equal opportunity, and more to assess whether a model's behavior is ethical and unbiased.

  1. Demographic Parity Understanding

    Which fairness metric requires that the proportion of positive predictions be equal across different demographic groups, regardless of actual outcomes?

    1. Equal Accuracy
    2. Predictive Value Parity
    3. Specificity
    4. Demographic Parity

    Explanation: Demographic Parity demands that a model produce positive outcomes at the same rate for all demographic groups, promoting equal treatment. Equal Accuracy is incorrect, as it refers to equal performance rather than equal positive rates. Predictive Value Parity focuses on the probability that a positive prediction is correct, not on outcome rates. Specificity measures the true negative rate and carries no notion of group fairness.
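
    A minimal sketch of the demographic parity check, using made-up predictions and group labels rather than output from a real model:

    ```python
    # Hypothetical predictions (1 = positive) and group memberships.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    def positive_rate(preds, groups, g):
        """Share of positive predictions within group g."""
        member_preds = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(member_preds) / len(member_preds)

    rate_a = positive_rate(preds, groups, "A")  # 0.75
    rate_b = positive_rate(preds, groups, "B")  # 0.25
    # Demographic parity holds only when these rates (approximately) match.
    print(rate_a, rate_b)
    ```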

  2. Disparate Impact Scenario

    If a hiring model selects 50% of applicants from group A and 25% from group B, which fairness concept is potentially being violated?

    1. Disparate Impact
    2. Recall Parity
    3. Calibration
    4. False Omission Rate

    Explanation: Disparate Impact refers to a substantially different selection rate between groups, as in this scenario: group B's 25% rate is only half of group A's 50%, well below the commonly used four-fifths (80%) threshold. Calibration involves aligning predicted and actual risk but doesn't focus on selection rates. Recall Parity relates to true positive rates across groups and isn't directly about selection. False Omission Rate is about incorrect negative predictions, not overall selection disparity.
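
    A small sketch of this check, hard-coding the selection rates from the scenario; the 0.8 threshold follows the widely cited four-fifths guideline:

    ```python
    rate_a = 0.50  # group A selection rate, from the scenario
    rate_b = 0.25  # group B selection rate, from the scenario

    # Disparate impact ratio: the lower selection rate over the higher one.
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    print(ratio)        # 0.5
    print(ratio < 0.8)  # True -> the four-fifths rule flags this model
    ```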

  3. Equal Opportunity Defined

    What does the fairness metric 'Equal Opportunity' require for a model used to predict loan approvals?

    1. Balanced overall accuracy
    2. Equal true positive rates across groups
    3. Equal false positive rates across groups
    4. Equal number of applicants in each group

    Explanation: Equal Opportunity requires equal true positive rates across groups, so that qualified applicants, those who should receive a positive prediction, have an equal chance of approval regardless of group. It says nothing about group sizes, so equal numbers of applicants is incorrect. Equal false positive rates belong to a different metric (part of Equalized Odds), and balanced overall accuracy is about performance, not opportunity.
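
    A minimal sketch of the Equal Opportunity check with hypothetical labels and predictions (1 = qualified / approved):

    ```python
    def tpr(y_true, y_pred):
        """True positive rate: TP / (TP + FN), over actual positives only."""
        positives = [p for t, p in zip(y_true, y_pred) if t == 1]
        return sum(positives) / len(positives)

    # Group A: 3 qualified applicants, 2 approved -> TPR ~ 0.67
    tpr_a = tpr([1, 1, 1, 0], [1, 1, 0, 0])
    # Group B: 2 qualified applicants, 1 approved -> TPR = 0.50
    tpr_b = tpr([1, 1, 0, 0], [1, 0, 0, 1])
    # Equal opportunity holds only when these TPRs (approximately) match.
    print(tpr_a, tpr_b)
    ```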

  4. Statistical Parity Focus

    Which of the following best describes statistical parity in model evaluation?

    1. An equal proportion of positive outcomes among protected groups
    2. Identical ROC curves across groups
    3. A symmetric confusion matrix
    4. Equal likelihood of negative outcomes for all applicants

    Explanation: Statistical parity, synonymous with demographic parity, focuses on ensuring equal rates of positive outcomes across protected groups. Equal likelihood of negative outcomes for every applicant is an individual-level condition, not a group-level parity criterion. A symmetric confusion matrix is unrelated to fairness between groups, while identical ROC curves concern overall model performance, not parity of outcomes.
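
    Because statistical parity compares group-level positive rates, it is often summarized as a single difference. A tiny sketch with hypothetical rates:

    ```python
    rate_a = 0.75  # hypothetical positive-prediction rate for group A
    rate_b = 0.25  # hypothetical positive-prediction rate for group B

    # Statistical parity difference: 0 means perfect parity.
    spd = rate_a - rate_b
    print(spd)  # 0.5, far from the 0 that statistical parity requires
    ```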

  5. Predictive Parity Choice

    When a model’s positive predictions are equally accurate for all groups, which fairness metric does this most closely describe?

    1. Equalized Odds
    2. Balanced Error Rate
    3. Predictive Parity
    4. Base Rate Parity

    Explanation: Predictive Parity ensures that the probability of a positive prediction being correct (the positive predictive value) is the same across groups. Base Rate Parity concerns whether the positive outcome is equally prevalent in each group, not prediction accuracy. Equalized Odds requires both true and false positive rates to be equal, while Balanced Error Rate averages error rates rather than measuring the correctness of positive predictions.
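
    A short sketch of the predictive parity check: precision (positive predictive value) is computed separately per group on hypothetical data:

    ```python
    def ppv(y_true, y_pred):
        """Positive predictive value: TP / (TP + FP), over positive predictions."""
        outcomes = [t for t, p in zip(y_true, y_pred) if p == 1]
        return sum(outcomes) / len(outcomes)

    ppv_a = ppv([1, 0, 1, 1], [1, 1, 1, 0])  # group A: 2 of 3 positives correct
    ppv_b = ppv([1, 1, 0, 0], [1, 0, 1, 0])  # group B: 1 of 2 positives correct
    # Predictive parity holds only when these values (approximately) match.
    print(ppv_a, ppv_b)
    ```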

  6. Equalized Odds Explanation

    What does the fairness metric 'Equalized Odds' require from a predictive model in healthcare?

    1. Identical threshold values for each group
    2. Equal true and false positive rates across groups
    3. Same disease prevalence for all patients
    4. Maximal specificity for every group

    Explanation: Equalized Odds requires both true positive and false positive rates to be equal across groups, so the model's successes and its errors are distributed fairly. Disease prevalence is a property of the patient population, not a model metric. Maximizing specificity says nothing about fairness between groups. Identical threshold values may not guarantee fairness when the groups' score distributions differ.
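
    A minimal sketch of the equalized odds check, computing both rates per group from hypothetical patient data:

    ```python
    def tpr_fpr(y_true, y_pred):
        """Return (TPR, FPR) for one group."""
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
        tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
        return tp / (tp + fn), fp / (fp + tn)

    group_a = tpr_fpr([1, 1, 0, 0], [1, 1, 1, 0])  # (1.0, 0.5)
    group_b = tpr_fpr([1, 1, 0, 0], [1, 0, 0, 0])  # (0.5, 0.0)
    # Equalized odds requires BOTH the TPRs and the FPRs to match.
    print(group_a, group_b)
    ```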

  7. Calibration in Context

    In the context of fairness, what does model calibration refer to?

    1. Assigning identical predictions to all individuals
    2. Maximizing true negatives regardless of group
    3. Matching predicted probabilities to observed outcome rates within each group
    4. Ensuring equal number of false positives per group

    Explanation: Calibration in fairness means that, within each group, predicted probabilities align with the actually observed outcome rates: among people assigned a score of 0.8, about 80% should experience the outcome. Assigning identical predictions to everyone would discard individual information and render the model useless. Maximizing true negatives and balancing false positives are different concerns, not calibration.
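
    A rough sketch of a per-group calibration check: predictions are bucketed by predicted probability, and the average prediction in each bucket is compared with the observed positive rate (all data hypothetical):

    ```python
    def calibration_gap(probs, y_true):
        """Mean |average predicted probability - observed rate| across buckets."""
        buckets = {}
        for p, t in zip(probs, y_true):
            buckets.setdefault(round(p, 1), []).append((p, t))
        gaps = []
        for items in buckets.values():
            mean_p = sum(p for p, _ in items) / len(items)
            observed = sum(t for _, t in items) / len(items)
            gaps.append(abs(mean_p - observed))
        return sum(gaps) / len(gaps)

    # A well-calibrated model has a small gap within EVERY group.
    print(calibration_gap([0.8, 0.8, 0.2, 0.2], [1, 1, 0, 0]))  # group A: 0.2
    print(calibration_gap([0.8, 0.8, 0.2, 0.2], [1, 0, 1, 0]))  # group B: 0.3
    ```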

  8. False Positive Rate Parity

    A school admission classifier wrongly predicts more students from one group as qualified than another; which metric has likely failed?

    1. True Negative Rate
    2. Overall Precision
    3. False Positive Rate Parity
    4. Log Loss

    Explanation: False Positive Rate Parity seeks to ensure groups are equally likely to be incorrectly assigned positive outcomes. Overall precision measures the accuracy of positive predictions, not group fairness. The true negative rate is the complement of the false positive rate, but it is a per-group performance measure, not a parity criterion between groups. Log loss evaluates probabilistic errors, not parity between groups.
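
    A minimal sketch of the false positive rate parity check on hypothetical admissions data (1 = predicted qualified):

    ```python
    def fpr(y_true, y_pred):
        """False positive rate: FP / (FP + TN), over actual negatives only."""
        flagged = [p for t, p in zip(y_true, y_pred) if t == 0]
        return sum(flagged) / len(flagged)

    fpr_a = fpr([0, 0, 0, 1], [1, 1, 0, 1])  # group A: 2 of 3 negatives flagged
    fpr_b = fpr([0, 0, 0, 1], [0, 1, 0, 1])  # group B: 1 of 3 negatives flagged
    # FPR parity fails when these rates differ markedly.
    print(fpr_a, fpr_b)
    ```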

  9. Importance of Fairness Metrics

    Why are fairness metrics important in evaluating machine learning models for social applications?

    1. They help detect and reduce bias that might harm certain groups
    2. They guarantee perfect fairness in every outcome
    3. They replace the need for accuracy and precision
    4. They only improve the speed of model training

    Explanation: Fairness metrics help identify and address potential biases affecting outcomes experienced by specific groups. They have nothing to do with computation time, so the training-speed option is wrong. They supplement, not replace, accuracy and precision. And achieving perfect fairness in every outcome is generally unrealistic, so no metric can guarantee it.

  10. Choosing a Fairness Metric

    When should you choose Equal Opportunity over Demographic Parity for evaluating a model?

    1. When demographic groups have equal base rates
    2. To guarantee identical overall outcomes for all individuals
    3. When you want to ensure qualified individuals from all groups have an equal chance at a positive outcome
    4. When your model has no protected groups

    Explanation: Equal Opportunity is appropriate when fairness for those truly eligible is the priority, since it requires equal true positive rates across groups. If there are no protected groups, group-based fairness metrics are unnecessary. When base rates are equal, Demographic Parity and Equal Opportunity largely coincide, so equal base rates alone is not a reason to prefer one over the other. Guaranteeing identical overall outcomes for every individual is neither practical nor the goal of Equal Opportunity.
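
    A worked illustration of why the choice matters, using hypothetical numbers: when base rates differ, a model can satisfy Equal Opportunity while failing Demographic Parity:

    ```python
    # Suppose 60% of group A and 20% of group B are truly qualified, and
    # the model approves exactly the qualified applicants in each group
    # (TPR = 1.0 and FPR = 0.0 for both, so Equal Opportunity holds).
    rate_a = 0.60  # positive-prediction rate for group A
    rate_b = 0.20  # positive-prediction rate for group B

    print(rate_a == rate_b)  # False: Demographic Parity fails anyway
    ```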