Introduction to Neural Networks — Beginner-Level Quiz

Explore key concepts of neural networks, including their structure, activation functions, and loss metrics. This beginner-friendly quiz helps solidify the foundations of artificial neural networks in machine learning.

  1. Neural Network Structure

    What are the three main types of layers in a standard artificial neural network?

    1. Input layer, hidden layer(s), output layer
    2. Data layer, computation layer, prediction layer
    3. Hyperlayer, middle layer, end layer
    4. Entry layer, processing unit, result generator

    Explanation: Neural networks are commonly structured with an input layer to receive data, one or more hidden layers for computation, and an output layer for predictions. The other options use non-standard terminology that does not describe conventional neural network architecture.
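The three-layer structure described above can be sketched as a forward pass. This is a minimal illustrative example (layer sizes and weights are arbitrary assumptions, not part of the quiz):

```python
import numpy as np

# Minimal sketch: input layer -> one hidden layer -> output layer.
rng = np.random.default_rng(0)

x = rng.normal(size=4)        # input layer: receives 4 features
W1 = rng.normal(size=(4, 3))  # weights into the hidden layer (3 units)
W2 = rng.normal(size=(3, 2))  # weights into the output layer (2 units)

hidden = np.tanh(x @ W1)      # hidden layer: intermediate computation
output = hidden @ W2          # output layer: produces the predictions

print(output.shape)           # prints (2,)
```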

  2. Dense Layered Neural Networks

    What is a dense neural network?

    1. A network without any hidden layers
    2. A network with multiple input layers
    3. A network where outputs are unconnected to previous layers
    4. A network where every unit in one layer connects to every unit in the next layer

    Explanation: A dense neural network, also called a fully connected network, has every neuron in one layer connected to every neuron in the following layer. Density describes this connection pattern between consecutive layers; it says nothing about the number of input layers, and outputs disconnected from previous layers would make a network unusable.
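The "every unit connects to every unit" pattern shows up directly in the shape of a dense layer's weight matrix. A small sketch (the sizes 5 and 3 are arbitrary illustrative choices):

```python
import numpy as np

# A dense (fully connected) layer: every one of the n_in units feeds every
# one of the n_out units, so the weight matrix has n_in * n_out entries.
n_in, n_out = 5, 3
W = np.ones((n_in, n_out))  # one weight per (input unit, output unit) pair
b = np.zeros(n_out)

def dense(x):
    # each output unit is a weighted sum over ALL input units, plus a bias
    return x @ W + b

print(W.size)                    # 15 connections = 5 * 3
print(dense(np.ones(n_in)))      # each output sums all five inputs
```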

  3. Role of Activation Functions

    Why are non-linear activation functions used in neural networks?

    1. They enable the network to approximate complex functions
    2. They force all outputs to be zero
    3. They make the training process slower
    4. They restrict the network to only simple predictions

    Explanation: Non-linear activation functions allow neural networks to model complex patterns in data, enabling advanced predictions. While training can be affected by activation choice, making training slower is not the purpose. Linear functions limit complexity, and non-linear functions do not necessarily force outputs to zero.
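A short sketch of why non-linearity matters: stacking two purely linear layers collapses into a single linear map, so depth alone adds no modeling power. Inserting a non-linearity such as ReLU breaks that collapse (sizes and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 4))
W2 = rng.normal(size=(4, 4))
x = rng.normal(size=4)

# Two linear layers are equivalent to one linear layer (W1 @ W2):
linear_stack = (x @ W1) @ W2
single_layer = x @ (W1 @ W2)
assert np.allclose(linear_stack, single_layer)

# With a ReLU between the layers, the composition is no longer a single
# matrix multiply, which is what lets deeper networks approximate
# complex, non-linear functions.
relu_stack = np.maximum(x @ W1, 0) @ W2
print(relu_stack.shape)  # prints (4,)
```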

  4. Loss Functions in Regression

    Which loss function is less sensitive to outliers than Mean Squared Error (MSE) in regression tasks?

    1. Huber loss
    2. Binary cross-entropy
    3. Softmax loss
    4. Mean squared error (MSE)

    Explanation: Huber loss combines the outlier robustness of Mean Absolute Error (MAE) with the quadratic sensitivity of MSE near zero, making it less sensitive to outliers than MSE alone. Binary cross-entropy and softmax loss are generally used for classification, and MSE remains more affected by large errors.
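The contrast can be seen on a single residual. A minimal sketch of both losses (the threshold `delta=1.0` is the usual parameter of Huber loss; the value here is an arbitrary illustrative choice):

```python
def mse(r):
    # squared error: an outlier's contribution grows quadratically
    return r ** 2

def huber(r, delta=1.0):
    # quadratic near zero (like MSE), linear beyond delta (like MAE)
    if abs(r) <= delta:
        return 0.5 * r ** 2
    return delta * (abs(r) - 0.5 * delta)

print(mse(10.0))    # prints 100.0 -- the outlier dominates MSE
print(huber(10.0))  # prints 9.5   -- Huber grows only linearly
```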

  5. Understanding Gradients

    What is the primary purpose of gradients in training neural networks?

    1. To determine the final output accuracy directly
    2. To increase data size
    3. To limit the number of network layers
    4. To guide the optimization of network weights

    Explanation: Gradients indicate how much each weight should be adjusted to minimize loss during training. They do not determine output accuracy directly, nor do they limit layer count or increase data size.
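The role of the gradient can be shown on a toy one-weight problem: fitting y = w·x to a single data point (the point, learning rate, and iteration count below are arbitrary illustrative choices):

```python
# Gradient descent sketch: the gradient of the loss with respect to a
# weight tells us which direction to adjust that weight to reduce the loss.
# Toy problem: fit y = w * x to the point (x=2, y=6); the true w is 3.
w = 0.0
lr = 0.1
x, y = 2.0, 6.0

for _ in range(50):
    pred = w * x
    grad = 2 * (pred - y) * x  # d/dw of the squared error (w*x - y)^2
    w -= lr * grad             # step against the gradient to lower the loss

print(round(w, 3))  # prints 3.0 -- the weight converged to the minimum
```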