Fairness in AI: Disparate Impact vs Equal Opportunity Quiz

Explore fundamental concepts of fairness in artificial intelligence, focusing on the differences and applications of disparate impact and equal opportunity. This quiz helps users understand key fairness metrics, real-world examples, and common misconceptions related to AI fairness and bias.

  1. Understanding Disparate Impact

    Which statement best describes disparate impact in the context of AI decision-making?

    1. A policy that affects all groups equally, regardless of outcome differences.
    2. A neutral policy that unintentionally results in adverse outcomes for a protected group.
    3. A model with perfect prediction accuracy.
    4. A system that requires equal treatment for every individual.

    Explanation: Disparate impact occurs when a neutral policy leads to a disproportionate negative effect on members of a protected group, even without intentional discrimination. The first option describes the absence of disparate impact rather than the concept itself. The third option concerns prediction accuracy, not fairness. The fourth option refers to equal treatment, which is not by itself sufficient for fairness.
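
    As a rough illustration of how this is quantified in practice, the sketch below compares selection rates between two groups; the group names and counts are purely hypothetical.

      # Minimal sketch: quantify disparate impact as the ratio of a
      # protected group's selection rate to the reference group's rate.
      # All counts below are invented for illustration.
      selected = {"group_a": 50, "group_b": 20}    # positive decisions
      totals   = {"group_a": 100, "group_b": 100}  # applicants per group

      rate_a = selected["group_a"] / totals["group_a"]  # 0.50
      rate_b = selected["group_b"] / totals["group_b"]  # 0.20

      # A ratio well below 1.0 signals a disproportionate adverse effect
      # on group_b even if the policy never mentions group membership.
      print(f"impact ratio: {rate_b / rate_a:.2f}")  # 0.40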

  2. Defining Equal Opportunity

    In AI systems, what does equal opportunity generally require when making decisions for different groups?

    1. That the true positive rate is the same for each group.
    2. That error rates are minimized overall.
    3. That every individual receives the same outcome.
    4. That input data is processed identically for all.

    Explanation: Equal opportunity in AI typically means that qualified individuals from all groups have an equal chance of being positively selected, represented by equal true positive rates. The third option confuses equal outcomes for everyone with equal opportunity, which are not the same. Minimizing overall error rates, as in the second option, could still leave one group treated unfairly. The fourth option considers only the process and ignores outcome fairness.
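
    A minimal way to check this criterion, assuming toy labels and predictions invented for this example, is to compute the true positive rate separately for each group:

      # Toy data: 1 = qualified (y_true) / positively predicted (y_pred).
      y_true = [1, 1, 0, 1, 1, 0, 1, 0]
      y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
      groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

      def tpr(truths, preds):
          # True positive rate: share of actual positives predicted positive.
          positives = [p for t, p in zip(truths, preds) if t == 1]
          return sum(positives) / len(positives)

      for g in ("a", "b"):
          t = [y_true[i] for i, x in enumerate(groups) if x == g]
          p = [y_pred[i] for i, x in enumerate(groups) if x == g]
          print(f"TPR for group {g}: {tpr(t, p):.2f}")
      # Equal opportunity holds when these per-group TPRs are (nearly) equal.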

  3. Real-World Scenario: Loan Approvals

    If an AI model approves home loans at a lower rate for women despite similar qualifications, this situation illustrates:

    1. Data Overfitting
    2. Cross-Validation
    3. Incremental Learning
    4. Disparate Impact

    Explanation: A lower approval rate for women shows a disparate impact, where an apparently neutral process causes unintentional disadvantage for a protected group. Incremental learning is a machine learning technique unrelated to fairness. Data overfitting involves fitting noise, not bias. Cross-validation is a testing method, not a fairness concept.

  4. Focus of Equal Opportunity

    Which fairness metric is most directly concerned with ensuring equal chances for the qualified across groups in AI predictions?

    1. Disparate Treatment
    2. Equal Opportunity
    3. Accuracy Parity
    4. Feature Scaling

    Explanation: Equal opportunity directly emphasizes providing qualified individuals across groups with equal chances of correct positive outcomes. Accuracy parity refers to equal accuracy across groups but does not guarantee fairness for qualified individuals. Disparate treatment refers to intentional discrimination, not outcome parity. Feature scaling deals with numerical data preparation, not fairness.
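
    To see why accuracy parity is not enough, consider the fabricated numbers below: both groups score the same accuracy, yet the qualified members of group B are never selected.

      # Fabricated toy example: accuracy parity holds, equal opportunity fails.
      # Group A: three qualified people (all approved) and one unqualified
      # person (wrongly approved). Group B: one qualified person (wrongly
      # rejected) and three unqualified people (correctly rejected).
      a_true, a_pred = [1, 1, 1, 0], [1, 1, 1, 1]
      b_true, b_pred = [1, 0, 0, 0], [0, 0, 0, 0]

      def accuracy(t, p):
          return sum(ti == pi for ti, pi in zip(t, p)) / len(t)

      def tpr(t, p):
          pos = [pi for ti, pi in zip(t, p) if ti == 1]
          return sum(pos) / len(pos)

      print(accuracy(a_true, a_pred), accuracy(b_true, b_pred))  # 0.75 0.75
      print(tpr(a_true, a_pred), tpr(b_true, b_pred))            # 1.0 0.0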

  5. Identifying Disparate Impact in Practice

    What is a practical way to detect disparate impact in an employment selection AI tool?

    1. Reduce the tool’s overall computational complexity.
    2. Only check if the tool uses sensitive attributes in its input.
    3. Calculate whether selection rates for protected groups differ substantially.
    4. Ensure the tool always predicts a negative outcome.

    Explanation: Calculating outcome or selection rates by group helps identify if a seemingly fair process produces disparate impact. Checking if sensitive attributes are used misses the fact that proxies can cause bias as well. A tool always predicting negative outcomes is not a valid fairness check. Reducing complexity has no relation to fairness measurement.
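
    One common heuristic for "differ substantially" is the four-fifths (80%) rule used in US employment-selection guidance; the sketch below applies it to invented selection rates.

      # Flag groups whose selection rate falls below 80% of the highest
      # group's rate (the "four-fifths" heuristic). Rates are invented.
      selection_rates = {"group_a": 0.45, "group_b": 0.30, "group_c": 0.42}

      best = max(selection_rates.values())
      for grp, rate in selection_rates.items():
          ratio = rate / best
          status = "possible disparate impact" if ratio < 0.8 else "ok"
          print(f"{grp}: rate={rate:.2f} ratio={ratio:.2f} -> {status}")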

  6. Misconception About Equal Opportunity

    Which of the following is NOT a requirement of equal opportunity in AI?

    1. Ensuring qualified individuals are treated equally by the model.
    2. Guaranteeing equal outcomes for every individual.
    3. Measuring positive outcomes by group membership.
    4. Having the same true positive rate across all groups.

    Explanation: Equal opportunity focuses on equal chances for similarly qualified individuals, not on forcing equal outcomes for all, making the second choice the one that is NOT a requirement. Options one and four are central to equal opportunity. Measuring positive outcomes by group membership, the third option, is part of verifying the criterion even though it does not define it.

  7. Bias Sources in Disparate Impact

    Which source of bias can contribute to disparate impact in AI systems?

    1. Implementation of cross-validation procedures.
    2. Tuning hyperparameters for accuracy.
    3. Uneven historical data reflecting past discrimination.
    4. Accidental data duplication.

    Explanation: Disparate impact often arises when AI models are trained on biased historical data, reproducing inequities. Cross-validation helps assess model reliability, not fairness. Hyperparameter tuning tries to optimize accuracy, potentially ignoring fairness. Accidental data duplication generally leads to data quality problems, not specifically to disparate impact.
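
    The toy records below, entirely fabricated, show how past discrimination can be baked into training labels: if historical hiring decisions serve as the "ground truth", the disparity becomes the very pattern a model is rewarded for learning.

      # Fabricated historical records where group b was hired far less often.
      history = (
          [{"group": "a", "hired": 1}] * 40 + [{"group": "a", "hired": 0}] * 60 +
          [{"group": "b", "hired": 1}] * 10 + [{"group": "b", "hired": 0}] * 90
      )

      for g in ("a", "b"):
          rows = [r for r in history if r["group"] == g]
          rate = sum(r["hired"] for r in rows) / len(rows)
          print(f"historical hire rate for group {g}: {rate:.2f}")
      # A model trained on these labels is rewarded for reproducing the
      # 0.40 vs 0.10 gap, which then surfaces as disparate impact.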

  8. Handling Disparate Impact

    What is one common strategy to reduce disparate impact in AI outcomes?

    1. Increasing the learning rate during model training.
    2. Ignoring all demographic attributes in the model.
    3. Rebalancing the training data to include more underrepresented groups.
    4. Minimizing the use of validation datasets.

    Explanation: Rebalancing data can help reduce the replication of historical biases and disparities in AI predictions. Ignoring demographic attributes is not enough, as bias can persist through indirect correlations. Adjusting the learning rate affects model convergence, not bias mitigation. Avoiding validation data does not address fairness at all.
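
    One simple form of rebalancing, sketched below with placeholder records, is to oversample the underrepresented group so both groups carry equal weight during training; reweighting examples is a common alternative.

      import random

      random.seed(0)
      # Placeholder training records with an imbalanced 80/20 split by group.
      majority = [{"group": "a", "label": i % 2} for i in range(80)]
      minority = [{"group": "b", "label": i % 2} for i in range(20)]

      # Oversample the minority group with replacement to match the majority.
      extra = random.choices(minority, k=len(majority) - len(minority))
      balanced = majority + minority + extra
      print(len(majority), len(minority) + len(extra))  # 80 80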

  9. Evaluating Equal Opportunity with Example

    If an AI model's true positive rates for diagnosing a disease are equal among all racial groups, which fairness criterion does this model satisfy?

    1. Statistical Independence
    2. Random Forest
    3. Demographic Parity
    4. Equal Opportunity

    Explanation: Equal opportunity is satisfied when true positive rates—the chance of a qualified case receiving a positive outcome—are the same across groups. Demographic parity would focus on overall positive rates, not just for qualified individuals. Statistical independence is a broader statistical concept. Random forest is a machine learning algorithm, not a fairness metric.
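
    The contrast with demographic parity can be made concrete with invented data: below, both groups have a perfect true positive rate, so equal opportunity holds, yet overall positive rates differ because the groups' base rates differ, so demographic parity fails.

      # Invented data: equal opportunity satisfied, demographic parity not.
      y_true = [1, 1, 0, 0, 1, 0, 0, 0]
      y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
      groups = ["a"] * 4 + ["b"] * 4

      for g in ("a", "b"):
          rows = [(t, p) for t, p, x in zip(y_true, y_pred, groups) if x == g]
          pos_rate = sum(p for _, p in rows) / len(rows)  # demographic parity view
          tpr = sum(p for t, p in rows if t == 1) / sum(t for t, _ in rows)
          print(f"group {g}: positive rate={pos_rate:.2f}, TPR={tpr:.2f}")
      # group a: positive rate 0.50, TPR 1.00
      # group b: positive rate 0.25, TPR 1.00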

  10. Disparate Impact vs Disparate Treatment

    How does disparate impact differ from disparate treatment in the context of AI fairness?

    1. Disparate treatment only applies to manual processes, not AI.
    2. Both terms have identical meanings in AI ethics.
    3. Disparate impact requires use of explicit demographic data.
    4. Disparate impact relates to unintentional effects, while disparate treatment involves intentional discrimination.

    Explanation: Disparate impact refers to unintentional harms affecting protected groups, while disparate treatment is about direct, intentional discrimination. The second option is incorrect because the concepts are distinct. The first choice is wrong because disparate treatment can apply to AI systems as well as manual processes. The third is incorrect because explicit demographic data is not required; indirect bias through proxy variables can still produce disparate impact.