Ethical Challenges of Generative AI: Awareness Quiz

Explore essential ethical considerations surrounding generative AI, including bias, misinformation, privacy, and societal impacts. This quiz helps learners identify the key challenges and responsibilities involved in deploying AI-generated content responsibly.

  1. Bias in AI Outputs

    A generative AI model produces job application summaries that consistently favor candidates of one gender over equally qualified candidates of another. What is the main ethical concern in this scenario?

    1. Bias and unfair discrimination
    2. Data compression errors
    3. Network latency
    4. Technical malfunction

    Explanation: Bias and unfair discrimination occur when AI systems unfairly favor or disadvantage certain groups, such as genders in this case. Technical malfunction is not the root cause here, as the model works as designed but is biased due to training data or design choices. Data compression errors refer to data storage issues, which do not impact decision fairness. Network latency is a connectivity issue and isn't related to the fairness of AI outputs.
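
    To make the idea of biased outputs concrete, here is a minimal sketch, assuming a small hypothetical audit log, of how a reviewer might compare how often the model produces a favorable summary for equally qualified applicants of each gender. The data, field names, and the 0.8 threshold are illustrative assumptions, not part of the quiz scenario.

    ```python
    from collections import defaultdict

    # Hypothetical audit records: (applicant gender, whether the AI summary was favorable).
    audit_log = [
        ("female", False), ("female", True), ("female", False), ("female", False),
        ("male", True), ("male", True), ("male", False), ("male", True),
    ]

    counts = defaultdict(lambda: [0, 0])  # group -> [favorable count, total count]
    for group, favorable in audit_log:
        counts[group][0] += int(favorable)
        counts[group][1] += 1

    rates = {group: fav / total for group, (fav, total) in counts.items()}
    print(rates)  # e.g. {'female': 0.25, 'male': 0.75}

    # A large gap in favorable rates for equally qualified groups is a red flag for bias;
    # 0.8 is an illustrative threshold, not a legal or scientific standard.
    if min(rates.values()) / max(rates.values()) < 0.8:
        print("Warning: favorable-summary rates differ substantially between groups.")
    ```

    A check like this is only a starting point; auditing a real system would involve larger samples, statistical testing, and review of the training data itself.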

  2. Misinformation Spread

    If a generative AI is used to create realistic news articles that contain made-up information, what ethical risk does this present?

    1. Unauthorized patent sharing
    2. Slower processing speeds
    3. Spreading misinformation
    4. Enhanced data accuracy

    Explanation: Spreading misinformation is a key ethical risk because AI-generated content can be used to produce convincing but false narratives. Slower processing speeds are a technical issue, not an ethical one. Unauthorized patent sharing would involve intellectual property rather than truthfulness in content. Enhanced data accuracy is desirable but contradicts the scenario of generating false information.

  3. Plagiarism Risk

    When a generative AI creates content that closely mimics published works without acknowledgment, which ethical concern is most relevant?

    1. Financial auditing errors
    2. Plagiarism and copyright infringement
    3. Spam filtering
    4. Hardware overheating

    Explanation: Plagiarism and copyright infringement are critical if AI generates content that copies or mimics existing works without giving credit. Hardware overheating concerns physical servers, not content generation. Financial auditing errors pertain to accounting, not AI-generated materials. Spam filtering relates to blocking unwanted emails, not misappropriation of intellectual property.

  4. Privacy Implications

    A chatbot powered by generative AI asks users for their personal address and stores this data. Which ethical issue is raised here?

    1. Battery consumption
    2. Increased algorithm speed
    3. Less colorful output
    4. Violation of user privacy

    Explanation: Violation of user privacy occurs when AI collects or stores personal data without adequate safeguards. Increased algorithm speed is unrelated to the collection of sensitive information. Less colorful output describes a visual property, not privacy. Battery consumption is a device issue, not an ethical concern about data handling.
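
    As a hedged illustration of what a basic safeguard could look like, the sketch below redacts anything resembling a street address before a message is stored. The regular expression, field names, and example message are assumptions for demonstration only, not a complete privacy solution.

    ```python
    import re

    # Very rough, illustrative pattern for street addresses; real systems would use
    # vetted PII-detection tooling and, above all, avoid collecting data they do not need.
    ADDRESS_PATTERN = re.compile(
        r"\d{1,5}\s+\w+(?:\s\w+)*\s+(?:Street|St|Avenue|Ave|Road|Rd)\b",
        re.IGNORECASE,
    )

    def redact_address(message: str) -> str:
        """Replace anything that looks like a street address with a placeholder."""
        return ADDRESS_PATTERN.sub("[REDACTED ADDRESS]", message)

    # Hypothetical chat record: the raw address never reaches storage.
    stored_record = {"user_id": "u123", "text": redact_address("Sure, I live at 42 Elm Street.")}
    print(stored_record["text"])  # Sure, I live at [REDACTED ADDRESS].
    ```

    Redaction alone does not settle the ethical question; informed consent and data minimization still apply.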

  5. Responsibility for AI Output

    Who holds responsibility if a generative AI unintentionally produces harmful or offensive content?

    1. Internet providers
    2. Content viewers
    3. Global weather agencies
    4. Developers and users

    Explanation: Developers and users are responsible for monitoring and managing AI output to prevent harm, as they design, deploy, and interact with the system. Internet providers do not control AI content creation directly. Content viewers are recipients, not originators. Global weather agencies are unrelated to AI-generated content.

  6. Deepfake Challenges

    An AI generates videos of real people saying things they never said, making it hard to distinguish what is real. What ethical challenge is this?

    1. Deepfake deception
    2. Rapid file transfer
    3. Low image resolution
    4. Program installation failure

    Explanation: Deepfake deception refers to AI-generated media that can convincingly portray real people in fabricated situations, presenting ethical risks like fraud or reputational harm. Low image resolution is a quality issue, not an ethical one. Program installation failure pertains to software deployment, not media authenticity. Rapid file transfer is unrelated to content truthfulness.

  7. Data Source Transparency

    Why is transparency about the data used to train generative AI important from an ethical perspective?

    1. It boosts advertising revenue
    2. It guarantees legal immunity
    3. It helps users assess reliability and bias
    4. It increases overall system speed

    Explanation: Transparency allows users and stakeholders to evaluate the fairness, reliability, and potential biases in AI models. Increased system speed is unrelated to data transparency. Boosting advertising revenue is a commercial motive, not an ethical rationale. Transparency does not guarantee legal immunity, which depends on laws and regulations.

  8. AI Use in Education

    If a generative AI writes an entire school essay for a student, what ethical concern arises in this context?

    1. Academic dishonesty
    2. Nutritional imbalance
    3. Battery optimization
    4. Weather forecasting

    Explanation: Academic dishonesty occurs when students submit work generated by AI as their own, undermining fair evaluation and learning. Battery optimization deals with energy use in devices. Weather forecasting is unrelated to student assignments. Nutritional imbalance concerns health, not educational practices.

  9. Unintended Consequences

    What is a key ethical challenge when deploying generative AI in social media without clear safeguards?

    1. Mobile ringtone selection
    2. Spread of harmful content
    3. Instant messenger syncing
    4. Software version upgrades

    Explanation: Without safeguards, generative AI on social media can easily create and amplify harmful content, such as hate speech or fake news. Instant messenger syncing, software upgrades, and ringtone selection are standard user or device features unrelated to ethical concerns about content spread.

  10. Consent to Data Use

    Which ethical challenge is present if a generative AI is trained on people's private messages or images without their knowledge?

    1. Increased download speed
    2. Image compression failure
    3. Lack of informed consent
    4. Lighting adjustment

    Explanation: Using personal data without permission for AI training violates the principle of informed consent, which is key for respecting privacy and autonomy. Image compression failure is a technical error not tied to user agreements. Download speed and lighting adjustment are unrelated to data rights or consent.