Generative AI Fundamentals Quiz — Questions & Answers

Test your understanding of generative artificial intelligence principles with these introductory questions. This quiz covers the basics of neural networks, generative models, activation functions, AI learning methods, and foundational concepts in generative AI.

This quiz contains 16 questions. Below is a complete reference of all questions, answer choices, and correct answers, which you can use to review after taking the quiz.

  1. Question 1: Artificial Neural Network Basics

    Which part of an artificial neural network is responsible for receiving the raw input data?

    • Hidden Layer
    • Activation Function
    • Input Layer
    • Output Layer

    Correct answer: Input Layer

    Explanation: The input layer of a neural network receives the raw data features for further processing. The output layer provides the final predictions, while hidden layers transform and extract features. An activation function adds non-linearity but is not a physical 'layer' itself.

  2. Question 2: Neural Network Layers

    In a standard feedforward neural network, what is the main role of hidden layers?

    • Store dataset labels
    • Extract features and perform computations
    • Display final predictions
    • Present raw data to the network

    Correct answer: Extract features and perform computations

    Explanation: Hidden layers in neural networks extract relevant features from the data and perform intermediate computations. The input layer presents raw data, not features. The output layer displays predictions, but does not compute intermediate patterns. Labels are part of the dataset, not the network architecture.
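    The flow described above (input layer receives raw features, hidden layers transform them, the output layer emits the prediction) can be sketched in a few lines of plain Python. All weights and inputs below are made-up numbers chosen for illustration only.

```python
def relu(x):
    # ReLU activation: passes positives through, zeroes out negatives
    return max(0.0, x)

def dense(inputs, weights, biases, activation):
    # One fully connected layer: weighted sum of inputs plus bias,
    # then the activation function, for each neuron in the layer
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Input layer: two raw features enter the network unchanged
features = [0.5, -1.2]

# Hidden layer (2 neurons): extracts intermediate features
hidden = dense(features,
               weights=[[0.8, -0.5], [0.3, 0.9]],
               biases=[0.1, -0.2],
               activation=relu)

# Output layer (1 neuron): produces the final prediction
output = dense(hidden,
               weights=[[1.0, -0.7]],
               biases=[0.05],
               activation=lambda x: x)  # identity, as for a regression output

print(output)
```

    Note how each layer plays exactly the role the quiz describes: the raw `features` list is the input layer, `hidden` holds the intermediate computed features, and `output` is the final prediction.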

  3. Question 3: Function of Activation Functions

    Why are activation functions, such as ReLU or Sigmoid, used in neural networks?

    • To introduce non-linearity and enable learning complex patterns
    • To decrease model accuracy
    • To store training results
    • To sort the input data

    Correct answer: To introduce non-linearity and enable learning complex patterns

    Explanation: Activation functions add non-linearity so networks can learn complex, non-linear relationships. They do not store results, decrease accuracy, or sort data. Without non-linearity, networks behave like simple linear models and cannot solve complex tasks.
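    The claim that a network without non-linearity behaves like a simple linear model can be verified directly: two stacked linear layers algebraically collapse into a single linear map, while inserting a ReLU between them breaks the collapse. The coefficients below are arbitrary illustration values.

```python
def linear(x, w, b):
    # A single linear layer on a scalar input
    return w * x + b

def relu(x):
    return max(0.0, x)

# Two stacked linear layers collapse into one linear map:
# w2*(w1*x + b1) + b2 == (w2*w1)*x + (w2*b1 + b2) for every x
x = 3.0
stacked = linear(linear(x, 2.0, 1.0), 4.0, -1.0)   # 4*(2x + 1) - 1 = 8x + 3
single = linear(x, 8.0, 3.0)
assert stacked == single  # no extra expressive power without non-linearity

# Inserting ReLU between the layers breaks the collapse, letting the
# network represent functions a single linear layer cannot
bent = linear(relu(linear(-1.0, 2.0, 1.0)), 4.0, -1.0)
print(stacked, bent)
```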

  4. Question 4: Generative vs Discriminative

    Which key difference separates generative models from discriminative models?

    • Generative models generate new data similar to the training set, while discriminative models focus on classifying inputs
    • Discriminative models generate images, generative models do not
    • Generative models are not trained with data
    • Discriminative models cannot be used for classification

    Correct answer: Generative models generate new data similar to the training set, while discriminative models focus on classifying inputs

    Explanation: Generative models learn to produce data resembling the training distribution, while discriminative models focus on classifying or labeling inputs. Discriminative models do not generate new data. Both types of models are trained with data, and discriminative models are used precisely for classification.

  5. Question 5: Example of a Generative Model

    Which of the following is commonly used as a generative model in AI?

    • k-Means Clustering
    • Generative Adversarial Network (GAN)
    • Decision Tree
    • Support Vector Machine

    Correct answer: Generative Adversarial Network (GAN)

    Explanation: A Generative Adversarial Network (GAN) is a popular generative model for producing images, text, or other data. Decision Trees and Support Vector Machines are discriminative models. k-Means is a clustering algorithm, not a generative model.

  6. Question 6: Neural Network Architecture

    In a neural network, what do we call the parameters that are adjusted during training to minimize errors?

    • Layers
    • Neurons
    • Activations
    • Weights

    Correct answer: Weights

    Explanation: Weights are the parameters updated during training in a neural network to reduce the error between predicted and true outputs. Neurons and layers describe the structural components of the network. Activations are the outputs of neurons after applying activation functions.

  7. Question 7: Backpropagation Method

    Which training algorithm is widely used in neural networks to update the weights based on output error?

    • Clustering
    • Backpropagation
    • Data Augmentation
    • Principal Component Analysis

    Correct answer: Backpropagation

    Explanation: Backpropagation calculates gradients of errors and updates the weights to reduce future errors. Data augmentation changes training inputs for robustness, principal component analysis is used for dimensionality reduction, and clustering groups similar data but does not update weights for prediction.
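    Backpropagation is easiest to see on a single sigmoid neuron: the chain rule carries the error gradient from the loss back to the weight and bias, which are then nudged against it. This is a minimal sketch with made-up numbers, not a full multi-layer implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Single sigmoid neuron trained on one example (x, target) with
# squared error loss; lr is the learning rate. All values are illustrative.
x, target, w, b, lr = 1.5, 1.0, 0.2, 0.0, 0.5

for _ in range(100):
    # Forward pass
    z = w * x + b
    y = sigmoid(z)
    # Backward pass: chain rule from the loss back to the parameters
    dL_dy = 2 * (y - target)   # derivative of (y - target)^2 w.r.t. y
    dy_dz = y * (1 - y)        # sigmoid derivative
    dL_dw = dL_dy * dy_dz * x  # gradient w.r.t. the weight
    dL_db = dL_dy * dy_dz      # gradient w.r.t. the bias
    # Update step: move parameters against the gradient
    w -= lr * dL_dw
    b -= lr * dL_db

print(sigmoid(w * x + b))  # prediction moves toward the target of 1.0
```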

  8. Question 8: Purpose of the Output Layer

    What is the main function of the output layer in a neural network?

    • Introducing randomness
    • Receiving raw inputs
    • Producing the final prediction or classification result
    • Performing feature extraction

    Correct answer: Producing the final prediction or classification result

    Explanation: The output layer generates the network's final prediction or classification based on preceding computations. Feature extraction occurs mainly in hidden layers. The output layer does not introduce randomness or receive input data directly.

  9. Question 9: Gradient Descent Goal

    What is the main goal of using gradient descent during neural network training?

    • To increase the number of data samples
    • To minimize the error between predicted and actual outputs
    • To randomly shuffle labels
    • To change the activation function type

    Correct answer: To minimize the error between predicted and actual outputs

    Explanation: Gradient descent updates model weights to reduce prediction errors over the training data. It does not increase dataset size, modify activation functions, or shuffle labels. Its focus is efficient optimization of network parameters.
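    The optimization idea can be shown on a toy loss with a known minimum: repeatedly stepping against the gradient drives the parameter to the value that minimizes the error. The loss function and step size here are chosen purely for illustration.

```python
# Gradient descent on a toy loss L(w) = (w - 3)^2, whose minimum is at w = 3.
# The gradient is dL/dw = 2*(w - 3); each step moves w against it.
w = 0.0    # initial guess
lr = 0.1   # learning rate (step size)

for step in range(50):
    grad = 2 * (w - 3)  # gradient of the loss at the current w
    w -= lr * grad      # descend: subtract lr * gradient

print(round(w, 4))  # w converges toward 3, where the loss is minimal
```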

  10. Question 10: Supervised vs Unsupervised Learning

    In supervised learning, what kind of data does a generative model receive during training?

    • Input data only with no labels
    • Input-output pairs with correct labels
    • Output data only
    • Unlabeled and unstructured data

    Correct answer: Input-output pairs with correct labels

    Explanation: Supervised learning involves training with both input data and the correct output label for each example. Unsupervised learning, in contrast, uses only input data with no labels. Training on outputs alone, or on unlabeled, unstructured data, does not constitute a supervised setting.

  11. Question 11: Activation Function Example

    Which one of the following functions is a common activation function in neural networks?

    • Sigmoid
    • Linear Regression
    • Mean Squared Error
    • Gradient Boosting

    Correct answer: Sigmoid

    Explanation: Sigmoid is a widely used activation function that transforms input values into a range between 0 and 1. Linear regression and gradient boosting are types of models, not activation functions. Mean squared error is a loss function, not an activation function.
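    The squashing behavior described above is easy to check: sigmoid maps any real number into the open interval (0, 1), with 0 mapping to the midpoint 0.5.

```python
import math

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))   # 0.5, the midpoint
print(sigmoid(5))   # close to 1
print(sigmoid(-5))  # close to 0
```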

  12. Question 12: Generative Model Application

    What is a typical application of a generative model?

    • Creating new images that look similar to a training set
    • Encoding data for compression
    • Calculating average test scores
    • Sorting numerical values in a dataset

    Correct answer: Creating new images that look similar to a training set

    Explanation: Generative models are often used to create new data samples, such as images, that resemble the training data. Sorting or calculating averages are general data processing tasks, while encoding is more related to compression, not generation.

  13. Question 13: Neural Network Learning

    What happens in a neural network when it 'learns' from data?

    • New training data is created by the network itself
    • Its weights are updated to reduce prediction errors over time
    • Previous outputs are repeated as new predictions
    • Its structure changes by removing nodes automatically

    Correct answer: Its weights are updated to reduce prediction errors over time

    Explanation: Learning in neural networks means adapting internal weights based on errors to improve predictions. Structure changing or data creation are not core learning steps. Repeating previous outputs does not help a network learn from new data.

  14. Question 14: Input Data Processing

    Why must raw input data often be scaled or normalized before training a neural network?

    • To increase the network's memory usage
    • To add biases to outputs
    • To hide input features
    • To ensure all input features contribute equally to learning

    Correct answer: To ensure all input features contribute equally to learning

    Explanation: Scaling or normalizing input makes sure that features with larger values do not dominate the learning process. Increasing memory, adding bias, or hiding features are not goals of data normalization before training.
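    One common way to equalize feature contributions is min-max scaling, which maps each feature to the [0, 1] range. The feature values below (ages in years, incomes in dollars) are made-up examples of features on very different scales.

```python
def min_max_scale(values):
    # Rescales a feature to the [0, 1] range so that features with
    # large raw magnitudes do not dominate the learning process
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical features on very different raw scales
ages = [18, 35, 70]                   # years
incomes = [20_000, 55_000, 120_000]   # dollars

# After scaling, both features occupy the same [0, 1] range
print(min_max_scale(ages))
print(min_max_scale(incomes))
```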

  15. Question 15: Loss Function in Neural Networks

    What does a loss function measure in a neural network?

    • Weight initialization technique
    • Input data size
    • The number of layers in the network
    • The difference between the network's predictions and the correct outputs

    Correct answer: The difference between the network's predictions and the correct outputs

    Explanation: A loss function quantifies how well the predictions match the correct outputs, guiding training. Number of layers, data size, and initialization methods are not measured by the loss function.
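    Mean squared error, one standard loss function, makes the idea concrete: it is zero when predictions exactly match the correct outputs and grows as they diverge. The prediction and target values below are illustrative.

```python
def mse(predictions, targets):
    # Mean squared error: the average squared gap between
    # the network's predictions and the correct outputs
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

perfect = mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # 0.0: predictions match exactly
off = mse([1.5, 2.0, 2.0], [1.0, 2.0, 3.0])      # larger gaps, larger loss
print(perfect, off)
```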

  16. Question 16: Purpose of Training Data

    Why is a diverse training dataset important when building generative AI models?

    • It helps the model generalize to new, unseen samples
    • It adds repetitive examples
    • It causes errors to increase
    • It makes the model slower

    Correct answer: It helps the model generalize to new, unseen samples

    Explanation: A diverse dataset helps the model learn broader patterns and generalize to data it has not seen. Repetitive examples do not improve performance, diversity does not make the model slower, and errors usually decrease rather than increase with diverse, meaningful training data.