INTRODUCTION TO DEEP LEARNING - Neural Networks Quiz

Explore the fundamentals of neural networks in deep learning, including their structure, components, and applications in prediction tasks.

  1. Neural Network Basics

    Which statement best describes a simple neural network used for predicting house prices from input features?

    1. It is limited to memorizing the training data only.
    2. It can only produce negative output values.
    3. It consists of one or more layers of interconnected neurons.
    4. It requires manual assignment of hidden nodes.

    Explanation: A simple neural network comprises one or more layers of interconnected neurons that transform input features into a prediction. Neural networks are not limited to memorizing training data; they learn patterns that generalize to new inputs. Their outputs can be positive or negative, though activations such as ReLU are often applied when outputs must be non-negative (a price, for example, cannot be negative). The functions computed by hidden nodes are learned automatically during training rather than assigned manually.
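    The network described above can be sketched as a short NumPy forward pass. This is a minimal illustration, not a trained model: the weights, biases, and the two input features (size, bedrooms) are arbitrary values chosen for the example.

    ```python
    import numpy as np

    def relu(x):
        """ReLU activation: zero out negative values."""
        return np.maximum(0.0, x)

    def predict_price(features, W1, b1, W2, b2):
        """Forward pass: features -> hidden layer (ReLU) -> price."""
        hidden = relu(features @ W1 + b1)   # every input feeds every hidden neuron
        return hidden @ W2 + b2             # linear output layer

    # Illustrative inputs: [size in m^2, number of bedrooms]
    x = np.array([120.0, 3.0])
    W1 = np.array([[0.01, 0.02],            # shape (2 inputs, 2 hidden neurons)
                   [0.5, -0.3]])
    b1 = np.array([0.1, 0.0])
    W2 = np.array([50.0, 30.0])             # shape (2 hidden neurons,)
    b2 = 10.0

    price = predict_price(x, W1, b1, W2, b2)
    ```

    In a real system these parameters would be learned from data; here they only show the flow of information through the layers.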

  2. Role of Activation Functions

    What is the main purpose of using an activation function such as ReLU in a neural network?

    1. To introduce non-linearity into the model's predictions.
    2. To eliminate the need for training data.
    3. To ensure all outputs are exactly zero.
    4. To calculate the mean of the output values.

    Explanation: Activation functions like ReLU introduce non-linearity, allowing networks to learn complex patterns. Calculating means or setting outputs to zero are not functions of ReLU. Neural networks still require training data for learning; activation functions do not replace this need.
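    A quick sketch makes the non-linearity concrete: ReLU passes positive values through and zeroes out negatives, and it does not distribute over addition the way a linear function would.

    ```python
    import numpy as np

    def relu(x):
        """max(0, x): passes positives through, zeroes out negatives."""
        return np.maximum(0.0, x)

    z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
    activated = relu(z)             # negatives become 0, positives are unchanged

    # Non-linearity: relu(a + b) is not generally relu(a) + relu(b).
    lhs = relu(-1.0 + 2.0)          # relu(1.0)
    rhs = relu(-1.0) + relu(2.0)    # 0.0 + 2.0
    ```

    Because `lhs != rhs`, stacking ReLU layers lets a network represent functions that no single linear layer could.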

  3. Structure of a Neural Network

    In a densely connected neural network, how are the input features connected to the neurons in the first hidden layer?

    1. Only the largest input feature connects to the hidden layer.
    2. Neurons are connected in a random fashion.
    3. Each input feature connects to only one neuron.
    4. Every input feature is connected to every neuron in the hidden layer.

    Explanation: In a densely connected (fully connected) neural network, each input is connected to all neurons in the next layer. Connecting each input to only one neuron or randomly does not define dense connectivity. No input feature is prioritized solely based on value.
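    The dense wiring can be seen directly in the shape of the weight matrix: with n_inputs features and n_hidden neurons there is one weight per (input, neuron) pair, so every input feature reaches every hidden neuron. The sizes below are arbitrary, chosen only to illustrate the count.

    ```python
    import numpy as np

    n_inputs, n_hidden = 4, 3
    rng = np.random.default_rng(0)
    W = rng.normal(size=(n_inputs, n_hidden))  # one weight per (input, neuron) pair

    x = rng.normal(size=n_inputs)
    hidden = x @ W            # each hidden value mixes contributions from ALL inputs

    n_connections = W.size    # 4 inputs x 3 neurons = 12 weighted connections
    ```

    If each input connected to only one neuron instead, the layer would need just `max(n_inputs, n_hidden)` weights; the full `n_inputs * n_hidden` matrix is what "densely connected" means.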

  4. Input and Output in Neural Networks

    When predicting the price of a house with a neural network, what typically represents the input and output?

    1. Input: only the zip code; Output: number of bedrooms
    2. Input: house features like size and bedrooms; Output: predicted house price
    3. Input: predicted house price; Output: house features
    4. Input: house images only; Output: postal codes

    Explanation: The input typically includes features describing the house (such as size and number of bedrooms), while the output is the predicted price. The other options either reverse this relationship or use inputs and outputs irrelevant to price prediction.
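    In code, this mapping is simply a numeric feature vector in and a single scalar out. The feature names and weights below are illustrative, not a real pricing model; the point is the shape of the problem.

    ```python
    import numpy as np

    # Input: numeric house features; Output: one predicted price (a scalar).
    features = np.array([120.0, 3.0, 2.0])   # [size m^2, bedrooms, bathrooms]
    w = np.array([1500.0, 10000.0, 5000.0])  # illustrative per-feature weights
    b = 20000.0                              # illustrative base price

    predicted_price = features @ w + b       # single scalar output
    ```

    A full network would replace the single weighted sum with stacked layers, but the input/output contract stays the same: features in, price out.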

  5. Learning in Neural Networks

    How does a neural network typically learn to make accurate predictions?

    1. By assigning random outputs every time
    2. By using only one data point and repeating it
    3. By adjusting internal parameters based on many training examples
    4. By presetting all weights without learning

    Explanation: A neural network learns through training on many examples, adjusting weights to minimize prediction error. Using only one data point or presetting weights prevents learning. Assigning random outputs does not constitute learning.
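    The idea of "adjusting internal parameters based on many training examples" can be sketched with gradient descent on the simplest possible model: one weight fit to synthetic (size, price) pairs. The data, learning rate, and true slope of 3.0 are all assumptions made up for this demonstration.

    ```python
    import numpy as np

    # Synthetic training set: price is roughly 3 * size plus a little noise.
    rng = np.random.default_rng(42)
    sizes = rng.uniform(50.0, 200.0, size=100)
    prices = 3.0 * sizes + rng.normal(0.0, 1.0, size=100)

    w = 0.0      # start with an uninformed parameter
    lr = 1e-5    # learning rate (step size)

    for _ in range(2000):
        pred = w * sizes
        grad = 2.0 * np.mean((pred - prices) * sizes)  # d(MSE)/dw
        w -= lr * grad                                 # adjust w to reduce error
    ```

    After training, `w` lands close to the true slope of 3.0 because each update nudges the parameter in the direction that shrinks the mean squared error over all examples, which is exactly why many data points (not one repeated point, and not preset weights) are needed.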