AI Ethics Basics: Fairness, Bias, and Transparency Quiz

Test your understanding of AI ethics, focusing on fairness, bias, and transparency in artificial intelligence. This quiz helps you identify key concepts, challenges, and best practices for creating ethical and responsible AI systems.

  1. AI Bias Recognition

    Which of the following best describes bias in an AI system used for hiring candidates?

    1. The system always selects the most qualified person for every job.
    2. The system eliminates human error completely.
    3. The system favors certain groups due to patterns in its training data.
    4. The system does not require any data for making decisions.

    Explanation: Bias in AI often occurs when the training data contains patterns that favor certain groups over others, leading to unfair results. The idea that the system always selects the most qualified person ignores potential bias issues. AI does not eliminate human error entirely and can sometimes amplify biases. Lastly, AI systems require data to operate and cannot function without it.
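The kind of group-level favoritism described above can be made concrete by comparing selection rates across groups, a common first-pass fairness check (sometimes called a demographic-parity check). The group labels and outcomes below are purely illustrative:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the fraction of candidates selected per group.

    `decisions` is a list of (group, selected) pairs, where
    `selected` is True if the candidate was advanced.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical screening outcomes: group B is selected far less often,
# a warning sign that the model may have learned biased patterns.
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", False), ("B", True), ("B", False), ("B", False)]
print(selection_rates(outcomes))  # {'A': 0.75, 'B': 0.25}
```

A large gap between groups does not prove unfairness by itself, but it flags the system for closer review.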

  2. Understanding Fairness

    In the context of AI ethics, what does fairness primarily refer to when screening loan applications?

    1. Processing all applications more quickly.
    2. Prioritizing applicants from one region over another.
    3. Ensuring all applicants are treated equally regardless of race or gender.
    4. Allowing only past customers to apply for loans.

    Explanation: Fairness in AI typically means treating individuals equally and not letting characteristics like race or gender affect outcomes. Processing speed is not related to fairness. Limiting applicants to past customers excludes new applicants unfairly. Prioritizing by region can introduce regional bias rather than ensuring fairness.

  3. Transparency in AI Decisions

    Why is transparency important in AI decision-making, such as in medical diagnosis tools?

    1. It hides the decision process from users.
    2. It ensures the AI is always accurate.
    3. It allows humans to understand how and why decisions are made.
    4. It makes the software faster.

    Explanation: Transparency helps people understand the reasoning behind AI decisions, which is essential in sensitive fields like healthcare. Transparency does not guarantee accuracy or speed. Hiding the decision process is the opposite of transparency. Thus, understanding decisions is the main reason transparency is important.

  4. Example of Algorithmic Discrimination

    Which is an example of algorithmic discrimination in AI?

    1. A facial recognition system struggles more with recognizing people of certain ethnic backgrounds.
    2. A sorting algorithm organizes data alphabetically.
    3. An AI chatbot responds to every user in under one second.
    4. A weather prediction model forecasts temperature for a city.

    Explanation: Algorithmic discrimination occurs when an AI system works less well for certain groups, such as some ethnicities. The chatbot's speed is unrelated to discrimination. Weather forecasting and alphabetical sorting do not involve disparate impact on different groups. Therefore, facial recognition bias is the clearest example.

  5. Data Collection and Bias

    What is a common source of unintentional bias in AI systems?

    1. Training data that does not represent the diversity of real-world users.
    2. Developers intentionally program unfair rules.
    3. All users always follow the instructions exactly.
    4. AI systems require no information about people.

    Explanation: AI systems can become biased if their training data is not diverse, even when there is no intention to discriminate. Most bias is not deliberately introduced by developers. Assuming that all users follow instructions exactly is unrealistic and does not describe a source of bias. AI systems almost always require some data about people, making the last option incorrect.

  6. Addressing AI Bias

    Which of the following is an effective first step to address bias in an AI system?

    1. Ignore all user feedback.
    2. Carefully examine and diversify the system's training data.
    3. Remove transparency features from the algorithm.
    4. Use the same dataset for every problem.

    Explanation: Reviewing and diversifying data helps prevent bias by making the system more representative of all users. Ignoring feedback prevents improvement and learning. Using the same dataset does not address unique needs of different applications. Removing transparency makes identifying and solving bias issues harder.
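Examining training data, as the correct answer suggests, often starts with a simple audit of how each group is represented. A minimal sketch, using invented field names and a made-up dataset:

```python
from collections import Counter

def group_shares(records, key):
    """Return each group's share of the dataset for a given field."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def underrepresented(shares, threshold=0.2):
    """List groups whose share falls below a chosen threshold."""
    return [g for g, s in shares.items() if s < threshold]

# Hypothetical training set: 80% of records come from one region,
# suggesting the data may need rebalancing before training.
data = [{"region": "north"}] * 8 + [{"region": "south"}] * 2
shares = group_shares(data, "region")
print(shares)                      # {'north': 0.8, 'south': 0.2}
print(underrepresented(shares, 0.3))  # ['south']
```

The threshold here is arbitrary; what counts as adequate representation depends on the application and its users.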

  7. AI Ethics and Accountability

    Who should be responsible for ensuring ethical AI use in a school grading system powered by AI?

    1. No one is responsible.
    2. Both developers and those who implement the system.
    3. Only the students using the system.
    4. Only the AI system itself.

    Explanation: Both creators and implementers play important roles in keeping AI ethical and fair. Students usually do not control the system's development or management. AI systems cannot be responsible for their own ethics. Saying no one is responsible ignores the importance of accountability in ethical AI.

  8. Transparency Example

    Which scenario illustrates transparency in an AI-powered loan approval process?

    1. Applicants never learn how the decision was made.
    2. The approval process relies completely on chance.
    3. Applicants are given clear reasons why their loan was accepted or denied.
    4. The system provides no documentation or records.

    Explanation: Transparency means sharing understandable reasons for decisions, such as why a loan was granted or denied. Explaining nothing or relying on chance does not provide transparency. Not providing documentation keeps the process hidden, making it less transparent.
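Giving applicants clear reasons, as in the correct scenario, can be sketched as a decision function that returns reason codes alongside the outcome. The thresholds and field names below are invented for illustration; a real lender's criteria would differ:

```python
def explain_decision(application, min_income=30_000, max_dti=0.4):
    """Return an approval decision plus human-readable reasons."""
    reasons = []
    if application["income"] < min_income:
        reasons.append(f"income below {min_income}")
    if application["debt"] / application["income"] > max_dti:
        reasons.append(f"debt-to-income ratio above {max_dti}")
    approved = not reasons
    if approved:
        reasons.append("all criteria met")
    return approved, reasons

# A denied applicant sees the specific criteria that were not met,
# rather than an unexplained rejection.
approved, reasons = explain_decision({"income": 20_000, "debt": 10_000})
print(approved, reasons)
```

Returning reasons in plain language, not just a score, is what makes the process transparent to the applicant.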

  9. Impact of Bias on Society

    Why can biased AI systems negatively affect society?

    1. They remove the need for human oversight.
    2. They guarantee that everyone will benefit equally from AI.
    3. They may make unfair decisions that disadvantage certain groups.
    4. They always create equal opportunities for everyone.

    Explanation: Biased AI can harm society by leading to unfair treatment of some groups, affecting access to jobs, loans, or services. Equal opportunities are not always guaranteed if bias is present. Human oversight is still needed to detect bias and errors. Not everyone benefits equally from AI without efforts to address potential bias.

  10. Understanding Bias Mitigation

    What is bias mitigation in AI, especially in automated resume screening?

    1. Allowing the algorithm to select candidates at random.
    2. Hiding all input data from human reviewers.
    3. Ignoring reports of bias from users.
    4. Taking steps to reduce unfair treatment of certain applicants.

    Explanation: Bias mitigation means actively reducing or eliminating sources of unfairness in AI, such as improving training data or adjusting algorithms. Random selection does not address bias issues. Hiding data and ignoring feedback prevent bias from being identified or corrected. Therefore, preventing unfair treatment is the key goal of bias mitigation.
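One concrete mitigation technique mentioned above, improving how training data is used, is reweighting: giving examples from underrepresented groups more weight so each group contributes equally during training. A minimal sketch, assuming group labels are available for each training example:

```python
from collections import Counter

def balancing_weights(groups):
    """Weight each example inversely to its group's frequency so
    every group contributes equally in aggregate."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    return [total / (n_groups * counts[g]) for g in groups]

# Hypothetical resume pool: group A has three times as many examples,
# so each group-B example receives a proportionally larger weight.
weights = balancing_weights(["A", "A", "A", "B"])
print(weights)  # [0.666..., 0.666..., 0.666..., 2.0]
```

Reweighting is only one option; other approaches include collecting more representative data or adjusting the model's decision thresholds after training.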