AI and Human Rights: Ethical Dilemmas Quiz

Explore essential questions about artificial intelligence and human rights, focusing on ethical dilemmas like bias, privacy, accountability, and fairness. This quiz is designed to help users understand key challenges and responsibilities involved in using AI while safeguarding fundamental human rights.

  1. AI Bias and Fairness

    Which of the following best describes a human rights concern related to AI bias in hiring tools?

    1. Bias only affects entertainment applications, not recruitment.
    2. AI algorithms cannot make mistakes if programmed correctly.
    3. AI always ensures equal opportunity without review.
    4. AI systems may favor certain groups unfairly due to biased training data.

    Explanation: AI bias in hiring is a serious human rights issue because biased training data can cause systems to unfairly favor or disfavor specific groups. Assuming AI can never make mistakes is incorrect, as errors can still occur from flawed data or design. Claiming AI always ensures equal opportunity without review overlooks the need for ongoing evaluation. Thinking bias is limited to entertainment applications is false since recruitment and other fields are also affected.
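The disparity described above can be surfaced with a simple audit. The following is a minimal sketch in Python of a demographic-parity check, comparing selection rates across applicant groups; the group names and outcomes are hypothetical and for illustration only, not data from any real hiring system:

```python
# Minimal sketch: auditing a hiring tool's outcomes for group bias
# via a demographic-parity (selection-rate) comparison.
# All group labels and outcomes below are hypothetical.

def selection_rates(decisions):
    """Return the fraction of applicants selected, per group.

    decisions: iterable of (group, selected) pairs, where selected
    is True if the applicant passed the automated screen.
    """
    totals, chosen = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        chosen[group] = chosen.get(group, 0) + int(selected)
    return {g: chosen[g] / totals[g] for g in totals}

# Hypothetical screening outcomes from an AI hiring tool:
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(outcomes)
print(rates)  # group_a is selected at 0.75, group_b at only 0.25

# A large gap between groups' selection rates signals that the system
# may be unfairly favoring one group and warrants human review.
gap = max(rates.values()) - min(rates.values())
print(f"selection-rate gap: {gap:.2f}")
```

A check like this does not prove discrimination on its own, but a persistent gap is exactly the kind of evidence that triggers the human review the explanation calls for.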

  2. Right to Privacy

    Why is collecting personal data for AI applications a potential human rights dilemma?

    1. AI only uses anonymous data, so privacy is guaranteed.
    2. Data collection is never an issue if people have nothing to hide.
    3. Personal data is less sensitive than public data used by AI.
    4. Unauthorized data collection can infringe on an individual's right to privacy.

    Explanation: Requesting or acquiring personal data without permission can violate the fundamental human right to privacy. The belief that privacy concerns disappear if someone has 'nothing to hide' misunderstands the importance of consent. Not all AI uses only anonymous data, and sometimes identifying details are included. Personal data is typically more sensitive and requires greater protection than public information.

  3. Transparency in AI Decisions

    A person is denied a loan by an AI system with no explanation. Which human rights-related principle is most relevant here?

    1. The right to faster service regardless of transparency.
    2. Automated decisions are always fair and don't require review.
    3. The right to explanation and transparency in automated decision making.
    4. The right to cheap loans for everyone using AI.

    Explanation: When AI makes significant decisions, such as denying a loan, people have the right to understand how and why the decision was made. There's no universal right to cheap loans, so that option is incorrect. Automated decisions are not inherently fair or beyond review. Speed of service does not override the need for transparency in how decisions are reached.

  4. AI Surveillance

    What is a primary human rights concern with the use of AI in mass public surveillance?

    1. AI surveillance is always harmless and never misused.
    2. It can threaten individuals' freedoms by enabling constant monitoring.
    3. Surveillance with AI guarantees better weather predictions.
    4. Mass surveillance only affects criminals and is not a concern for others.

    Explanation: AI-powered mass surveillance can erode privacy and restrict free expression, posing risks to individual freedoms. Because such systems are open to misuse and overreach, they may target law-abiding people, not just criminals. The claim that AI surveillance is always harmless ignores possible abuse. Weather prediction is unrelated to surveillance, making that option irrelevant.

  5. Accountability in AI Systems

    If AI causes harm due to incorrect decisions, which ethical dilemma best describes this situation?

    1. Assuming all AI errors are the users' fault.
    2. Allowing AI to operate without any human oversight.
    3. Assuming AI can fix its own mistakes automatically.
    4. Finding who is accountable for the AI’s actions and decisions.

    Explanation: Determining responsibility for an AI's actions is essential, as harm caused by AI should not go unaddressed. Expecting AI to always fix its own errors or blaming only users ignores the systemic issues and need for oversight. Allowing AI to operate with no human supervision increases risks rather than solving the ethical dilemma.

  6. AI and Discrimination

    Which example best illustrates discrimination by AI violating human rights?

    1. AI is unable to process images, so discrimination cannot occur.
    2. A facial recognition tool misidentifies people from certain ethnic backgrounds more often.
    3. AI never makes distinctions between individuals.
    4. Any misidentification is simply a technical glitch with no human rights impact.

    Explanation: An AI system that shows uneven accuracy between groups can reinforce or create discrimination, threatening the principle of equality. The idea that AI never differentiates between people is incorrect, especially in recognition tasks. Claiming AI cannot process images is factually wrong. Dismissing misidentification as a mere technical glitch fails to recognize its potential impact on human rights.

  7. Freedom of Expression and AI Moderation

    How could AI-based content moderation systems create an ethical issue regarding freedom of expression?

    1. AI always understands the context behind every message.
    2. AI might mistakenly censor lawful speech, limiting free expression.
    3. Only human moderators can violate freedom of speech.
    4. AI moderation only affects spam content, not real opinions.

    Explanation: AI moderation may lack the ability to fully understand context, sometimes removing valid opinions and restricting freedom of speech. Humans are not the only ones who can limit expression; AI systems contribute as well. The scope of AI moderation goes beyond just spam. AI does not perfectly interpret context or intentions in all cases.

  8. Consent in AI Data Collection

    Why is informed consent important when collecting data for AI training?

    1. Consent is not needed if the data is collected by a machine.
    2. People should know and agree to how their data is used for fairness and autonomy.
    3. Informed consent applies only to medical research, not AI.
    4. As long as AI is involved, transparency is not required.

    Explanation: Informed consent helps protect individual rights by ensuring people understand and agree to the use of their information. Machines collecting data do not eliminate the need for consent. Transparency is still crucial, even when AI is involved. Informed consent is relevant in many fields, not just medical research.

  9. AI in Legal Decisions

    If courts use AI tools to help with sentencing, what is a key human rights risk?

    1. AI may unintentionally reinforce biases, affecting fair trials.
    2. Legal decisions made by AI are always more accurate than judges.
    3. AI tools are used only for minor court tasks.
    4. AI guarantees fairness by removing all human input.

    Explanation: AI algorithms trained on past judicial data can repeat or intensify historic biases, threatening the right to a fair trial. Relying solely on AI for fairness overlooks the importance of human judgment. There is no guarantee that AI is always more accurate than judges. AI is increasingly used in core legal decisions, not just minor tasks.

  10. AI and Accessibility

    What human rights benefit can accessible AI technologies provide for people with disabilities?

    1. They can expand opportunities by removing barriers to participation.
    2. AI reduces rights for people with disabilities by making things harder.
    3. Accessible AI is usually less accurate for everyone.
    4. AI cannot be used to help with accessibility needs.

    Explanation: Accessible AI helps people with disabilities enjoy equal access to services, information, and opportunities, supporting their autonomy and inclusion. The claim that AI reduces rights is incorrect, as accessibility aims to improve things. Suggesting accessible AI is inherently less accurate is a misconception. It's also false that AI cannot support accessibility; in fact, it can make a significant positive difference.