Explore fundamental concepts of fairness in artificial intelligence, focusing on the differences between, and applications of, disparate impact and equal opportunity. This quiz helps users understand key fairness metrics, real-world examples, and common misconceptions related to AI fairness and bias.
This quiz contains 10 questions. Below is a complete reference of all questions, answer choices, and correct answers. You can use this section to review after taking the interactive quiz above.
Which statement best describes disparate impact in the context of AI decision-making?
Correct answer: A neutral policy that unintentionally results in adverse outcomes for a protected group.
Explanation: Disparate impact occurs when a neutral policy leads to a disproportionate negative effect on members of a protected group, even without intentional discrimination. The first option incorrectly suggests that disparate impact means no differences in outcomes. The third option refers to equal treatment, which is not always sufficient for fairness. The fourth option concerns prediction accuracy, not fairness.
In AI systems, what does equal opportunity generally require when making decisions for different groups?
Correct answer: That the true positive rate is the same for each group.
Explanation: Equal opportunity in AI typically means that qualified individuals from all groups have an equal chance of being positively selected, represented by equal true positive rates. The first option confuses equal opportunity with equal outcomes for all, which are not the same thing. Minimizing overall error rates could still leave some groups treated unfairly, making the third option incorrect. The fourth option ignores outcome fairness and considers only process.
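The check described above can be sketched in a few lines: compute the true positive rate separately for each group and compare. This is a minimal illustration with made-up toy data, not a production fairness audit; the group labels and arrays are hypothetical.

```python
# Sketch: checking equal opportunity by comparing true positive rates
# (TPR) per group. All data below is hypothetical toy data.

def true_positive_rate(y_true, y_pred):
    """Share of truly positive cases that received a positive prediction."""
    preds_for_positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    if not preds_for_positives:
        return 0.0
    return sum(preds_for_positives) / len(preds_for_positives)

def tpr_by_group(groups, y_true, y_pred):
    """Return {group: TPR} so rates can be compared across groups."""
    rates = {}
    for g in set(groups):
        yt = [t for gr, t in zip(groups, y_true) if gr == g]
        yp = [p for gr, p in zip(groups, y_pred) if gr == g]
        rates[g] = true_positive_rate(yt, yp)
    return rates

# Toy data: two groups with identical qualifications (y_true),
# but the model selects group B's qualified members less often.
groups = ["A", "A", "A", "B", "B", "B"]
y_true = [1, 1, 0, 1, 1, 0]   # 1 = actually qualified
y_pred = [1, 1, 0, 1, 0, 0]   # 1 = positively selected by the model

print(tpr_by_group(groups, y_true, y_pred))  # {'A': 1.0, 'B': 0.5}
```

Equal opportunity would require the two rates to match; the gap here (1.0 vs. 0.5) signals that qualified members of group B are selected at half the rate of group A.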
If an AI model approves home loans at a lower rate for women despite similar qualifications, this situation illustrates:
Correct answer: Disparate Impact
Explanation: A lower approval rate for women shows a disparate impact, where an apparently neutral process causes unintentional disadvantage for a protected group. Incremental learning is a machine learning technique unrelated to fairness. Data overfitting involves fitting noise, not bias. Cross-validation is a testing method, not a fairness concept.
Which fairness metric is most directly concerned with ensuring equal chances for the qualified across groups in AI predictions?
Correct answer: Equal Opportunity
Explanation: Equal opportunity directly emphasizes providing qualified individuals across groups with equal chances of correct positive outcomes. Accuracy parity refers to equal accuracy across groups but does not guarantee fairness for qualified individuals. Disparate treatment refers to intentional discrimination, not outcome parity. Feature scaling deals with numerical data preparation, not fairness.
What is a practical way to detect disparate impact in an employment selection AI tool?
Correct answer: Calculate whether selection rates for protected groups differ substantially.
Explanation: Calculating outcome or selection rates by group helps identify if a seemingly fair process produces disparate impact. Checking if sensitive attributes are used misses the fact that proxies can cause bias as well. A tool always predicting negative outcomes is not a valid fairness check. Reducing complexity has no relation to fairness measurement.
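The selection-rate comparison above is often operationalized as a ratio: in US employment practice, a protected group's selection rate below four-fifths (0.8) of the most-favored group's rate is commonly treated as evidence of disparate impact. A minimal sketch of that check, using hypothetical hiring decisions:

```python
# Sketch: detecting disparate impact via selection rates.
# A ratio below 0.8 (the "four-fifths rule") is a common red flag.
# The decision lists below are hypothetical.

def selection_rate(decisions):
    """Fraction of applicants who received a positive decision."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical hiring outcomes (1 = selected, 0 = rejected)
women = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% selected
men   = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]   # 50% selected

ratio = disparate_impact_ratio(women, men)
print(f"ratio = {ratio:.2f}")   # ratio = 0.40
print("possible disparate impact" if ratio < 0.8 else "within threshold")
```

Note that this check looks only at outcomes, which is exactly why it catches bias introduced through proxy variables even when no sensitive attribute appears in the model's inputs.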
Which of the following is NOT a requirement of equal opportunity in AI?
Correct answer: Guaranteeing equal outcomes for every individual.
Explanation: Equal opportunity focuses on equal chances for similarly qualified individuals; it does not require forcing equal outcomes for every individual, which is why that choice is the one that is NOT a requirement. The other requirements listed are central to equal opportunity. Measuring positive outcomes is relevant but does not by itself define equal opportunity.
Which source of bias can contribute to disparate impact in AI systems?
Correct answer: Uneven historical data reflecting past discrimination.
Explanation: Disparate impact often arises when AI models are trained on biased historical data, reproducing inequities. Cross-validation helps assess model reliability, not fairness. Hyperparameter tuning tries to optimize accuracy, potentially ignoring fairness. Accidental data duplication generally leads to data quality problems, not specifically to disparate impact.
What is one common strategy to reduce disparate impact in AI outcomes?
Correct answer: Rebalancing the training data to include more underrepresented groups.
Explanation: Rebalancing data can help reduce the replication of historical biases and disparities in AI predictions. Ignoring demographic attributes is not enough, as bias can persist through indirect correlations. Adjusting the learning rate affects model convergence, not bias mitigation. Avoiding validation data does not address fairness at all.
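One simple form of the rebalancing described above is oversampling: duplicating examples from underrepresented groups until group sizes match. This is a sketch under assumptions (the `group` key and the row counts are hypothetical), and in practice more sophisticated techniques such as reweighting or synthetic sampling are often preferred.

```python
# Sketch: rebalancing training data by oversampling smaller groups.
# Group labels and counts below are hypothetical.

import random

def oversample_to_balance(rows, group_key, seed=0):
    """Duplicate rows from smaller groups until every group
    matches the size of the largest group."""
    random.seed(seed)
    by_group = {}
    for row in rows:
        by_group.setdefault(row[group_key], []).append(row)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Sample with replacement to make up the shortfall (k=0 for the largest group)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Hypothetical training set: group B is badly underrepresented (8 vs. 2)
data = [{"group": "A", "x": i} for i in range(8)] + \
       [{"group": "B", "x": i} for i in range(2)]

balanced = oversample_to_balance(data, "group")
counts = {g: sum(r["group"] == g for r in balanced) for g in ("A", "B")}
print(counts)  # {'A': 8, 'B': 8}
```

Rebalancing addresses representation, but as the explanation notes, it does not remove bias carried by proxy features; it is one mitigation among several, not a complete fix.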
If an AI model's true positive rates for diagnosing a disease are equal among all racial groups, which fairness criterion does this model satisfy?
Correct answer: Equal Opportunity
Explanation: Equal opportunity is satisfied when true positive rates—the chance of a qualified case receiving a positive outcome—are the same across groups. Demographic parity would focus on overall positive rates, not just for qualified individuals. Statistical independence is a broader statistical concept. Random forest is a machine learning algorithm, not a fairness metric.
How does disparate impact differ from disparate treatment in the context of AI fairness?
Correct answer: Disparate impact relates to unintentional effects, while disparate treatment involves intentional discrimination.
Explanation: Disparate impact refers to unintentional harms affecting protected groups, while disparate treatment is about direct, intentional discrimination. Option two is incorrect because the concepts are distinct. The third choice is wrong as disparate treatment can apply to AI. The fourth is incorrect because explicit demographic data is not required; indirect bias can exist.