Explore essential questions about artificial intelligence and human rights, focusing on ethical dilemmas like bias, privacy, accountability, and fairness. This quiz is designed to help users understand key challenges and responsibilities involved in using AI while safeguarding fundamental human rights.
Which of the following best describes a human rights concern related to AI bias in hiring tools?
Explanation: AI bias in hiring is a serious human rights issue because biased training data can cause systems to unfairly favor or disfavor specific groups. Assuming AI can never make mistakes is incorrect, as errors can still arise from flawed data or design. Claiming AI always ensures equal opportunity without review overlooks the need for ongoing evaluation. Thinking that bias is limited to entertainment applications is also false, since recruitment and other high-stakes fields are affected as well.
Why is collecting personal data for AI applications a potential human rights dilemma?
Explanation: Requesting or acquiring personal data without permission can violate the fundamental human right to privacy. The belief that privacy concerns disappear if someone has 'nothing to hide' misunderstands the importance of consent. Not all AI uses only anonymous data, and sometimes identifying details are included. Personal data is typically more sensitive and requires greater protection than public information.
A person is denied a loan by an AI system with no explanation. Which human rights-related principle is most relevant here?
Explanation: When AI makes significant decisions, such as denying a loan, people have the right to understand how and why the decision was made. There's no universal right to cheap loans, so that option is incorrect. Automated decisions are not inherently fair or beyond review. Speed of service does not override the need for transparency in how decisions are reached.
What is a primary human rights concern with the use of AI in mass public surveillance?
Explanation: AI-powered mass surveillance can erode privacy and restrict free expression, posing risks to individual freedoms. Through misuse and overreach, surveillance may target law-abiding people, not just criminals. The claim that AI surveillance is always harmless ignores the potential for abuse. Weather prediction is unrelated to surveillance, making that option irrelevant.
If AI causes harm due to incorrect decisions, which ethical dilemma best describes this situation?
Explanation: Determining responsibility for an AI's actions is essential, as harm caused by AI should not go unaddressed. Expecting AI to always fix its own errors, or blaming only users, ignores systemic issues and the need for oversight. Allowing AI to operate with no human supervision increases risks rather than resolving the ethical dilemma.
Which example best illustrates discrimination by AI violating human rights?
Explanation: An AI system that shows uneven accuracy between groups can reinforce or create discrimination, threatening the principle of equality. The idea that AI never differentiates between people is incorrect, especially in recognition tasks. Claiming AI cannot process images is factually wrong. Dismissing misidentification as a mere technical glitch fails to recognize its potential impact on human rights.
How could AI-based content moderation systems create an ethical issue regarding freedom of expression?
Explanation: AI moderation may lack the ability to fully understand context, sometimes removing valid opinions and restricting freedom of speech. Humans are not the only ones who can limit expression; AI systems contribute as well. The scope of AI moderation goes beyond just spam. AI does not perfectly interpret context or intentions in all cases.
Why is informed consent important when collecting data for AI training?
Explanation: Informed consent helps protect individual rights by ensuring people understand and agree to the use of their information. Machines collecting data do not eliminate the need for consent. Transparency is still crucial, even when AI is involved. Informed consent is relevant in many fields, not just medical research.
If courts use AI tools to help with sentencing, what is a key human rights risk?
Explanation: AI algorithms trained on past judicial data can repeat or intensify historic biases, threatening the right to a fair trial. Relying solely on AI for fairness overlooks the importance of human judgment. There is no guarantee that AI is always more accurate than judges. AI is increasingly used in core legal decisions, not just minor tasks.
What human rights benefit can accessible AI technologies provide for people with disabilities?
Explanation: Accessible AI helps people with disabilities enjoy equal access to services, information, and opportunities, supporting their autonomy and inclusion. The claim that AI reduces rights is incorrect, as accessible design aims to expand access rather than restrict it. Suggesting accessible AI is inherently less accurate is a misconception. It is also false that AI cannot support accessibility; in fact, it can make a significant positive difference.