Understanding Deep Learning
Which statement best describes deep learning and how it differs from traditional machine learning?
- In deep learning, features must be fully defined before training the model.
- Deep learning works only on tabular datasets and requires manual feature design.
- Traditional machine learning and deep learning both use only single-layer neural networks.
- Deep learning uses neural networks with multiple layers to automatically extract features from data.
- Deep learning cannot process unstructured data, such as images or audio.
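
To make the correct idea concrete, here is a minimal sketch (assuming PyTorch is installed) of a multi-layer network: raw inputs go in, and each hidden layer learns its own feature representation, with no hand-designed features.

```python
import torch
import torch.nn as nn

# A small multi-layer ("deep") network: each hidden layer learns
# progressively more abstract features from the raw input on its own.
model = nn.Sequential(
    nn.Linear(784, 256),  # first hidden layer: learns low-level features
    nn.ReLU(),
    nn.Linear(256, 64),   # second hidden layer: learns higher-level features
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer: class scores
)

x = torch.randn(32, 784)  # a batch of 32 flattened 28x28 images
logits = model(x)         # no manual feature engineering required
print(logits.shape)       # torch.Size([32, 10])
```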
CNN Architecture Basics
What are the main building blocks of a convolutional neural network (CNN)?
- Only fully connected layers and dropout
- Recursion layers and matrix pooling
- Convolutional layers, pooling layers, fully connected layers, and activation functions
- Decision trees and random forests
- Activation-only layers and bias-only layers
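
For reference, a minimal PyTorch sketch wiring the four building blocks together in their usual order (convolution, activation, pooling, then fully connected layers):

```python
import torch
import torch.nn as nn

# Convolution -> activation -> pooling, repeated, then a fully connected head.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),                                   # activation function
    nn.MaxPool2d(2),                             # pooling layer: 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                   # fully connected layer
)

x = torch.randn(8, 1, 28, 28)  # batch of 8 single-channel 28x28 images
print(cnn(x).shape)            # torch.Size([8, 10])
```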
Backpropagation Usage
How is the backpropagation algorithm used when training neural networks?
- By summing layer outputs during the forward pass only
- By updating weights using the gradient of the loss function and propagating error backward
- By only considering the output layer for weight updates
- By ignoring gradients in hidden layers
- By randomly reinitializing weights at each step
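
As a concrete illustration, here is a minimal PyTorch training step showing those mechanics: the loss gradient is propagated backward through every layer, and the weights are then updated from those gradients.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x, y = torch.randn(16, 4), torch.randn(16, 1)

optimizer.zero_grad()        # clear gradients from the previous step
loss = loss_fn(model(x), y)  # forward pass
loss.backward()              # backpropagation: error flows backward,
                             # filling .grad on every layer's weights
optimizer.step()             # gradient-based update: w <- w - lr * grad
```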
Activation Functions Overview
Which activation function is commonly used in the hidden layers of deep neural networks due to its ability to mitigate the vanishing gradient problem?
- LeNet
- Linear
- Hardmax
- ReLU (Rectified Linear Unit)
- Softsign
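
ReLU is simply max(0, x); a short NumPy sketch of why it helps against vanishing gradients: its derivative is exactly 1 for positive inputs, whereas a saturating function such as sigmoid has a derivative of at most 0.25.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    # Derivative is 1 where x > 0 and 0 elsewhere, so it never shrinks
    # a gradient passing through an active unit.
    return (x > 0).astype(float)

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)  # at most 0.25, so deep stacks shrink gradients

x = np.array([-2.0, 0.5, 3.0])
print(relu(x))          # [0.  0.5 3. ]
print(relu_grad(x))     # [0. 1. 1.]
print(sigmoid_grad(x))  # every value <= 0.25
```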
Vanishing Gradient Solutions
What is one way to address the vanishing gradient problem in deep learning models?
- Increase the size of the test dataset only
- Always decrease the learning rate to zero
- Avoid using activation functions
- Use ReLU activation functions in hidden layers
- Train on random noisy data only
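
A small PyTorch experiment makes the fix concrete: the gradient reaching the input of a deep sigmoid stack is typically vanishingly small, while the same stack with ReLU activations keeps it at a usable scale.

```python
import torch
import torch.nn as nn

def input_grad_norm(activation):
    layers = []
    for _ in range(20):                       # a deliberately deep stack
        layers += [nn.Linear(32, 32), activation()]
    net = nn.Sequential(*layers)
    x = torch.randn(1, 32, requires_grad=True)
    net(x).sum().backward()
    return x.grad.norm().item()

print("sigmoid:", input_grad_norm(nn.Sigmoid))  # typically tiny (vanished)
print("relu:   ", input_grad_norm(nn.ReLU))     # typically orders of magnitude larger
```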
Overfitting vs. Underfitting
If a neural network performs very well on training data but poorly on new, unseen data, what is this an example of?
- Underfitting
- Regularization
- Preprocessing
- Gradient overflow
- Overfitting
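
Overfitting is usually diagnosed by comparing training accuracy against accuracy on held-out data. A self-contained PyTorch sketch, using an oversized model memorizing a tiny set of random labels, typically produces the telltale gap:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Tiny dataset with random labels + a big model: a recipe for overfitting.
x_train, y_train = torch.randn(20, 10), torch.randint(0, 2, (20,))
x_test,  y_test  = torch.randn(200, 10), torch.randint(0, 2, (200,))

model = nn.Sequential(nn.Linear(10, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for _ in range(500):  # memorize the 20 training points
    opt.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

print("train acc:", accuracy(x_train, y_train))  # typically near 1.0 (memorized)
print("test  acc:", accuracy(x_test, y_test))    # typically near 0.5 (chance)
```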
Regularization Techniques
Which of the following is a common regularization technique used in neural networks to prevent overfitting?
- Dropout
- Overlapping
- Forecasting
- Gradient extension
- Underweighting
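
A minimal PyTorch sketch of where dropout sits in a network as a regularizer, and how it is switched on for training and off for evaluation:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # regularization: randomly zeroes half the activations during training
    nn.Linear(64, 10),
)

model.train()  # dropout active: a different subset of neurons is dropped each forward pass
model.eval()   # dropout disabled: deterministic predictions at inference time
```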
RNN vs. Feedforward Networks
What enables recurrent neural networks (RNNs) to handle sequential data, unlike feedforward neural networks?
- RNNs have connections that form cycles and maintain internal state.
- RNNs only use fixed-length input for all tasks.
- RNNs remove all hidden layers to process faster.
- Feedforward networks train on larger datasets only.
- Feedforward networks use memory units for sequence prediction.
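
A minimal PyTorch sketch of the recurrence: the same cell is applied at every timestep, and the hidden state h is fed back in, which is what gives an RNN memory of earlier inputs (the sizes here are arbitrary example values).

```python
import torch
import torch.nn as nn

# One recurrent cell applied step by step: the same weights process every
# timestep, and the hidden state carries information forward in time.
cell = nn.RNNCell(input_size=8, hidden_size=16)

seq = torch.randn(5, 1, 8)  # a sequence of 5 timesteps (batch size 1)
h = torch.zeros(1, 16)      # initial hidden state
for x_t in seq:             # the "cycle": h depends on h from the previous step
    h = cell(x_t, h)
print(h.shape)              # torch.Size([1, 16])
```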
Purpose of Dropout
Why is dropout applied in training neural networks?
- To force all weights to zero after each epoch
- To randomly deactivate a fraction of neurons and encourage robust learning
- To enhance model accuracy by increasing neuron count per layer
- To remove irrelevant input features automatically
- To permanently remove layers from the network structure
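
The functional form makes the mechanism visible (PyTorch sketch): during training a random fraction of activations is zeroed and the survivors are rescaled by 1/(1-p), so the expected activation is unchanged; at inference nothing is dropped.

```python
import torch
import torch.nn.functional as F

a = torch.ones(10)

train_out = F.dropout(a, p=0.5, training=True)   # roughly half zeroed, survivors scaled to 2.0
eval_out  = F.dropout(a, p=0.5, training=False)  # identity: nothing dropped at inference

print(train_out)  # e.g. tensor([2., 0., 2., 2., 0., ...]); the mask differs every call
print(eval_out)   # tensor([1., 1., ..., 1.])
```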
Transfer Learning Application
How does transfer learning benefit deep learning models?
- By increasing the number of output neurons for each prediction
- By duplicating entire datasets for each new task
- By leveraging knowledge from one task to improve performance on a related but different task
- By discarding pre-trained models after initial training
- By always starting training from random weights only
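
A standard transfer-learning recipe, sketched assuming torchvision is available: load ImageNet pre-trained weights, freeze the feature extractor, and train only a new output head for the related task (the 5-class head below is a hypothetical example).

```python
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on ImageNet rather than random weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False  # freeze the transferred feature extractor

# Replace the final classifier with a fresh head for the new task.
model.fc = nn.Linear(model.fc.in_features, 5)  # only this layer will be trained
```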