Generative AI Fundamentals Quiz

Test your understanding of generative artificial intelligence principles with these beginner-level questions. This quiz covers the basics of neural networks, generative models, activation functions, AI learning methods, and foundational concepts in generative AI.

  1. Artificial Neural Network Basics

    Which part of an artificial neural network is responsible for receiving the raw input data?

    1. Hidden Layer
    2. Activation Function
    3. Input Layer
    4. Output Layer

    Explanation: The input layer of a neural network receives the raw data features for further processing. The output layer provides the final predictions, while hidden layers transform and extract features. An activation function adds non-linearity but is not a physical 'layer' itself.
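    To make the layer roles concrete, here is a minimal sketch of a forward pass in plain Python (the feature and weight values are made up for illustration): the input layer is just the raw feature values, the hidden layer computes weighted sums followed by a ReLU activation, and the output layer combines the hidden activations into a single prediction.

```python
def forward(features, w_hidden, w_out):
    # Input layer: the raw feature values enter the network unchanged
    # Hidden layer: weighted sum of inputs, then ReLU activation
    hidden = [max(0.0, sum(w * f for w, f in zip(row, features)))
              for row in w_hidden]
    # Output layer: weighted sum of hidden activations -> final prediction
    return sum(w * h for w, h in zip(w_out, hidden))

features = [0.5, -1.2]                  # hypothetical raw inputs
w_hidden = [[0.4, 0.1], [-0.3, 0.8]]    # hypothetical hidden-layer weights
w_out = [0.7, 0.5]                      # hypothetical output-layer weights
print(forward(features, w_hidden, w_out))
```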

  2. Neural Network Layers

    In a standard feedforward neural network, what is the main role of hidden layers?

    1. Store dataset labels
    2. Extract features and perform computations
    3. Display final predictions
    4. Present raw data to the network

    Explanation: Hidden layers in neural networks extract relevant features from the data and perform intermediate computations. The input layer presents raw data, not features. The output layer displays predictions, but does not compute intermediate patterns. Labels are part of the dataset, not the network architecture.

  3. Function of Activation Functions

    Why are activation functions used in neural networks, such as ReLU or Sigmoid?

    1. To introduce non-linearity and enable learning complex patterns
    2. To decrease model accuracy
    3. To store training results
    4. To sort the input data

    Explanation: Activation functions add non-linearity so networks can learn complex, non-linear relationships. They do not store results, decrease accuracy, or sort data. Without non-linearity, networks behave like simple linear models and cannot solve complex tasks.
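    As a rough illustration (the input values are arbitrary), the two activation functions named in the question can be written in a few lines. Note that ReLU bends at zero rather than being a straight line, which is exactly the non-linearity the explanation refers to.

```python
import math

def relu(x):
    # ReLU: passes positive values through, zeroes out negatives
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes any real input into the (0, 1) range
    return 1.0 / (1.0 + math.exp(-x))

print([relu(v) for v in (-2.0, 0.0, 3.0)])  # [0.0, 0.0, 3.0]
print(sigmoid(0.0))                          # 0.5
```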

  4. Generative vs Discriminative

    Which key difference separates generative models from discriminative models?

    1. Generative models generate new data similar to the training set, while discriminative models focus on classifying inputs
    2. Discriminative models generate images, generative models do not
    3. Generative models are not trained with data
    4. Discriminative models cannot be used for classification

    Explanation: Generative models learn to produce data resembling the training distribution, while discriminative models focus on classifying or labeling inputs. Discriminative models do not generate new data, so the second option is incorrect. Both types of models are trained with data, and discriminative models are used especially for classification.

  5. Example of a Generative Model

    Which of the following is commonly used as a generative model in AI?

    1. k-Means Clustering
    2. Generative Adversarial Network (GAN)
    3. Decision Tree
    4. Support Vector Machine

    Explanation: A Generative Adversarial Network (GAN) is a popular generative model for producing images, text, or other data. Decision Trees and Support Vector Machines are discriminative models. k-Means is a clustering algorithm, not a generative model.

  6. Neural Network Architecture

    In a neural network, what do we call the parameters that are adjusted during training to minimize errors?

    1. Layers
    2. Neurons
    3. Activations
    4. Weights

    Explanation: Weights are the parameters updated during training in a neural network to reduce the error between predicted and true outputs. Neurons and layers describe the structural components of the network. Activations are the outputs of neurons after applying activation functions.

  7. Backpropagation Method

    Which training algorithm is widely used in neural networks to update the weights based on output error?

    1. Clustering
    2. Backpropagation
    3. Data Augmentation
    4. Principal Component Analysis

    Explanation: Backpropagation calculates gradients of errors and updates the weights to reduce future errors. Data augmentation changes training inputs for robustness, principal component analysis is used for dimensionality reduction, and clustering groups similar data but does not update weights for prediction.
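    A minimal sketch of backpropagation for a single-neuron, single-weight network (the input, target, weight, and learning rate are made-up values): the gradient of the loss with respect to the weight is assembled by the chain rule, and one gradient step reduces the error.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.5, 1.0   # hypothetical input and correct output
w = 0.2                # initial weight

pred = sigmoid(w * x)            # forward pass
loss = (pred - target) ** 2      # squared error

# Backpropagation: chain rule from the loss back to the weight
dloss_dpred = 2 * (pred - target)
dpred_dz = pred * (1 - pred)     # derivative of sigmoid
dz_dw = x
grad_w = dloss_dpred * dpred_dz * dz_dw

w -= 0.5 * grad_w                # gradient step with learning rate 0.5
new_loss = (sigmoid(w * x) - target) ** 2
print(loss, new_loss)            # the error shrinks after the update
```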

  8. Purpose of the Output Layer

    What is the main function of the output layer in a neural network?

    1. Introducing randomness
    2. Receiving raw inputs
    3. Producing the final prediction or classification result
    4. Performing feature extraction

    Explanation: The output layer generates the network's final prediction or classification based on preceding computations. Feature extraction occurs mainly in hidden layers. The output layer does not introduce randomness or receive input data directly.

  9. Gradient Descent Goal

    What is the main goal of using gradient descent during neural network training?

    1. To increase the number of data samples
    2. To minimize the error between predicted and actual outputs
    3. To randomly shuffle labels
    4. To change the activation function type

    Explanation: Gradient descent updates model weights to reduce prediction errors over the training data. It does not increase dataset size, modify activation functions, or shuffle labels. Its focus is efficient optimization of network parameters.
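    Gradient descent can be sketched on a toy loss whose minimum is known in advance (here (w - 3)^2, an arbitrary example): repeatedly stepping against the gradient drives the parameter toward the error-minimizing value.

```python
def loss(w):
    # Toy quadratic "error" with its minimum at w = 3
    return (w - 3.0) ** 2

def grad(w):
    # Analytic gradient of the loss above
    return 2.0 * (w - 3.0)

w = 0.0       # starting guess
lr = 0.1      # learning rate
for _ in range(100):
    w -= lr * grad(w)   # step against the gradient

print(w)  # converges close to 3.0
```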

  10. Supervised vs Unsupervised Learning

    In supervised learning, what kind of data does a generative model receive during training?

    1. Input data only with no labels
    2. Input-output pairs with correct labels
    3. Output data only
    4. Unlabeled and unstructured data

    Explanation: Supervised learning involves training with both input data and the correct output labels for each example. Unsupervised learning, in contrast, uses only input data with no labels. Providing just outputs or unlabeled, unstructured data does not define supervised settings.

  11. Activation Function Example

    Which one of the following functions is a common activation function in neural networks?

    1. Sigmoid
    2. Linear Regression
    3. Mean Squared Error
    4. Gradient Boosting

    Explanation: Sigmoid is a widely used activation function that transforms input values into a range between 0 and 1. Linear regression and gradient boosting are types of models, not activation functions. Mean squared error is a loss function, not an activation function.

  12. Generative Model Application

    What is a typical application of a generative model?

    1. Creating new images that look similar to a training set
    2. Encoding data for compression
    3. Calculating average test scores
    4. Sorting numerical values in a dataset

    Explanation: Generative models are often used to create new data samples, such as images, that resemble the training data. Sorting and calculating averages are general data-processing tasks, while encoding relates to compression rather than generation.

  13. Neural Network Learning

    What happens in a neural network when it 'learns' from data?

    1. New training data is created by the network itself
    2. Its weights are updated to reduce prediction errors over time
    3. Previous outputs are repeated as new predictions
    4. Its structure changes by removing nodes automatically

    Explanation: Learning in neural networks means adapting internal weights based on errors to improve predictions. Automatically changing the structure or creating new training data are not core learning steps. Repeating previous outputs does not help a network learn from new data.

  14. Input Data Processing

    Why must raw input data often be scaled or normalized before training a neural network?

    1. To increase the network's memory usage
    2. To add biases to outputs
    3. To hide input features
    4. To ensure all input features contribute equally to learning

    Explanation: Scaling or normalizing input makes sure that features with larger values do not dominate the learning process. Increasing memory, adding bias, or hiding features are not goals of data normalization before training.
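    A simple sketch of min-max normalization (the feature values are invented): after scaling, features measured on very different ranges, such as heights and incomes, each span [0, 1] and so contribute comparably during learning.

```python
def min_max_scale(values):
    # Rescale a feature to the [0, 1] range so no feature dominates
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

heights_cm = [150.0, 160.0, 170.0, 180.0]       # hypothetical feature
incomes = [20000.0, 40000.0, 60000.0, 80000.0]  # hypothetical feature

# After scaling, both features span the same [0, 1] range
print(min_max_scale(heights_cm))
print(min_max_scale(incomes))
```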

  15. Loss Function in Neural Networks

    What does a loss function measure in a neural network?

    1. Weight initialization technique
    2. Input data size
    3. The number of layers in the network
    4. The difference between the network's predictions and the correct outputs

    Explanation: A loss function quantifies how well the predictions match the correct outputs, guiding training. Number of layers, data size, and initialization methods are not measured by the loss function.
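    As a small illustration, mean squared error, a common loss function, can be computed by hand (the predictions and targets are made-up numbers):

```python
def mse(predictions, targets):
    # Mean squared error: average squared gap between predictions and truth
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

preds = [2.5, 0.0, 2.0]
truth = [3.0, -0.5, 2.0]
print(mse(preds, truth))  # (0.25 + 0.25 + 0.0) / 3, roughly 0.167
```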

  16. Purpose of Training Data

    Why is a diverse training dataset important when building generative AI models?

    1. It helps the model generalize to new, unseen samples
    2. It adds repetitive examples
    3. It causes errors to increase
    4. It makes the model slower

    Explanation: A diverse dataset helps the model learn broader patterns and generalize to data it has not seen. Increased speed and repetitive examples do not improve performance, and errors usually decrease with diverse, meaningful data rather than increase.