Deep Learning Interview Essentials Quiz

  1. Understanding Deep Learning

    Which statement best describes deep learning and its difference from traditional machine learning?

    1. In deep learning, features must be fully defined before training the model.
    2. Deep learning is a method only for tabular datasets and requires manual feature design.
    3. Traditional machine learning and deep learning both use only single-layer neural networks.
    4. Deep learning uses neural networks with multiple layers to automatically extract features from data.
    5. Deep learning cannot process unstructured data, such as images or audio.

  2. CNN Architecture Basics

    What are the main building blocks of a convolutional neural network (CNN)?

    1. Only fully connected layers and dropout
    2. Recursion layers and matrix pooling
    3. Convolutional layers, pooling layers, fully connected layers, and activation functions
    4. Decision trees and random forests
    5. Activation-only layers and bias-only layers
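The building blocks in the correct option can be sketched in a few lines of pure Python for a 1-D signal. The kernel, weights, and input values below are illustrative, not trained parameters:

```python
def conv1d(signal, kernel):
    # Convolutional layer: slide the kernel over the signal
    # (valid convolution, stride 1).
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    # Activation function: zero out negative values.
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    # Pooling layer: downsample by taking the max of each window.
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

def fully_connected(xs, weights, bias):
    # Fully connected layer: weighted sum of all inputs plus a bias.
    return sum(x * w for x, w in zip(xs, weights)) + bias

signal = [1.0, -2.0, 3.0, 0.5, -1.0, 2.0]
features = max_pool(relu(conv1d(signal, [1.0, -1.0])))
score = fully_connected(features, [0.5, 0.4], bias=0.1)
print(features, score)
```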

  3. Backpropagation Usage

    How is the backpropagation algorithm used when training neural networks?

    1. By summing network layers forward
    2. By updating weights using the gradient of the loss function and propagating error backward
    3. By only considering the output layer for weight updates
    4. By ignoring gradients in hidden layers
    5. By randomly reinitializing weights at each step
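A minimal sketch of the correct option on a two-layer linear network with one unit per layer: the error at the output is propagated backward through the hidden unit via the chain rule, and both weights are updated with the gradient of the loss. The example, weights, and learning rate are illustrative:

```python
x, target = 1.0, 2.0       # one training example
w1, w2 = 0.5, 0.5          # weights of the two layers
lr = 0.1                   # learning rate

for _ in range(50):
    # Forward pass.
    h = w1 * x                  # hidden activation
    y = w2 * h                  # network output
    # Backward pass: chain rule from the squared loss (y - target)^2.
    dy = 2.0 * (y - target)     # gradient at the output
    dw2 = dy * h                # gradient for the output weight
    dh = dy * w2                # error propagated back to the hidden unit
    dw1 = dh * x                # gradient for the first-layer weight
    # Gradient descent update.
    w1 -= lr * dw1
    w2 -= lr * dw2

print(w2 * w1 * x)  # close to the target after training
```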

  4. Activation Functions Overview

    Which activation function is commonly used in the hidden layers of deep neural networks due to its ability to mitigate the vanishing gradient problem?

    1. LeNet
    2. Linear
    3. Hardmax
    4. ReLU (Rectified Linear Unit)
    5. Softsign
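A small sketch of why ReLU is the usual choice here: its derivative is exactly 1 for positive inputs, while the sigmoid's derivative peaks at 0.25, so stacked sigmoids shrink gradients layer by layer:

```python
import math

def relu(x):
    return max(0.0, x)

def relu_grad(x):
    # Slope is 1 for positive inputs, 0 otherwise.
    return 1.0 if x > 0 else 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

print(relu(3.0), relu(-1.0))   # 3.0 0.0
print(relu_grad(3.0))          # 1.0
print(sigmoid_grad(0.0))       # 0.25, the sigmoid's maximum slope
```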

  5. Vanishing Gradient Solutions

    What is one way to address the vanishing gradient problem in deep learning models?

    1. Increase the size of the test dataset only
    2. Always decrease learning rate to zero
    3. Avoid using activation functions
    4. Use ReLU activation functions in hidden layers
    5. Train on random noisy data only

  6. Overfitting vs. Underfitting

    If a neural network performs very well on training data but poorly on new, unseen data, what is this an example of?

    1. Underfitting
    2. Regularization
    3. Preprocessing
    4. Gradient overflow
    5. Overfitting

  7. Regularization Techniques

    Which of the following is a common regularization technique used in neural networks to prevent overfitting?

    1. Dropout
    2. Overlapping
    3. Forecasting
    4. Gradient extension
    5. Underweighting

  8. RNN vs. Feedforward Networks

    What enables recurrent neural networks (RNNs) to handle sequential data, unlike feedforward neural networks?

    1. RNNs have connections that form cycles and maintain internal state.
    2. RNNs only use fixed-length input for all tasks.
    3. RNNs remove all hidden layers to process faster.
    4. Feedforward networks train on larger datasets only.
    5. Feedforward networks use memory units for sequence prediction.
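A minimal sketch of the recurrent idea in the correct option: the same cell is applied at every time step, and a hidden state carried across steps gives the network memory of earlier inputs. The weights here are illustrative values, not trained ones:

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.8, b=0.0):
    # The new hidden state depends on the current input AND the old state.
    return math.tanh(w_x * x + w_h * h + b)

sequence = [1.0, 0.0, 0.0, 0.0]
h = 0.0
for x in sequence:
    h = rnn_step(x, h)
    print(h)

# The first input still influences the state several steps later,
# which a feedforward network with no internal state cannot do.
```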

  9. Purpose of Dropout

    Why is dropout applied in training neural networks?

    1. To force all weights to zero after each epoch
    2. To randomly deactivate a fraction of neurons and encourage robust learning
    3. To enhance model accuracy by increasing neuron count per layer
    4. To remove irrelevant input features automatically
    5. To permanently remove layers from the network structure
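A sketch of one common variant, inverted dropout: during training each activation is kept with probability `keep_prob` and scaled by `1/keep_prob` so the layer's expected output is unchanged, and at test time no units are dropped. The activation values and keep probability are illustrative:

```python
import random

def dropout(activations, keep_prob=0.8, training=True, rng=random):
    if not training:
        # At test time the full layer is used unchanged.
        return list(activations)
    # Randomly zero each unit; scale survivors to preserve the mean.
    return [a / keep_prob if rng.random() < keep_prob else 0.0
            for a in activations]

random.seed(0)
layer = [0.5, 1.2, -0.3, 2.0, 0.7]
print(dropout(layer))                  # some units zeroed, the rest scaled up
print(dropout(layer, training=False))  # unchanged at test time
```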

  10. Transfer Learning Application

    How does transfer learning benefit deep learning models?

    1. By increasing the number of output neurons for each prediction
    2. By duplicating entire datasets for each new task
    3. By leveraging knowledge from one task to improve performance on a related but different task
    4. By discarding pre-trained models after initial training
    5. By always starting training from random weights only
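A hedged sketch of the idea in the correct option: weights learned on a source task initialize the feature-extraction layers for a related target task, and only a fresh task-specific output layer starts from random values. The layer names and weight values below are purely illustrative:

```python
import random

pretrained = {
    "conv1": [0.2, -0.1, 0.4],   # feature-extraction layers learned
    "conv2": [0.3, 0.1, -0.2],   # on the source task
    "head":  [0.9, -0.5],        # source-task output layer
}

def transfer(pretrained_weights, new_head_size, rng=random):
    # Reuse every layer except the task-specific head, which is
    # re-initialized randomly for the new task.
    model = {name: list(w) for name, w in pretrained_weights.items()
             if name != "head"}
    model["head"] = [rng.uniform(-0.1, 0.1) for _ in range(new_head_size)]
    return model

random.seed(0)
target_model = transfer(pretrained, new_head_size=3)
print(target_model["conv1"])      # reused from the source task
print(len(target_model["head"]))  # a fresh head sized for the new task
```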