Explore key concepts of neural networks, including their structure, activation functions, and loss metrics. This beginner-friendly quiz helps solidify the foundations of artificial neural networks in machine learning.
What are the three main types of layers in a standard artificial neural network?
Explanation: Neural networks are commonly structured with an input layer to receive data, one or more hidden layers for computation, and an output layer for predictions. The other options use incorrect or non-standard terminology not typically used in describing neural network architecture.
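The three layer types in the explanation above can be sketched as a tiny forward pass. This is a minimal illustration in NumPy, with arbitrary layer sizes chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Input layer: 4 features; hidden layer: 3 neurons; output layer: 1 prediction.
x = rng.normal(size=4)          # data received by the input layer
W1 = rng.normal(size=(3, 4))    # weights into the hidden layer
W2 = rng.normal(size=(1, 3))    # weights into the output layer

hidden = np.tanh(W1 @ x)        # hidden layer performs the computation
output = W2 @ hidden            # output layer produces the prediction

print(output.shape)  # (1,)
```

Biases are omitted here to keep the sketch focused on the layer structure.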
What is a dense neural network?
Explanation: A dense neural network, also called fully connected, has every neuron in a layer connected to every neuron in the following layer. Multiple input layers and unconnected outputs do not define dense networks, and every practical neural network has at least an output layer.
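Full connectivity has a direct consequence for parameter counts: a dense layer needs one weight per input-output pair. A small sketch, with sizes chosen arbitrarily for illustration:

```python
import numpy as np

n_in, n_out = 4, 3
# In a dense (fully connected) layer, every one of the 4 input neurons
# connects to every one of the 3 output neurons: 4 * 3 = 12 weights.
W = np.zeros((n_out, n_in))
print(W.size)  # 12
```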
Why are non-linear activation functions used in neural networks?
Explanation: Non-linear activation functions allow neural networks to model complex patterns in data, enabling advanced predictions. While training can be affected by activation choice, making training slower is not the purpose. Linear functions limit complexity, and non-linear functions do not necessarily force outputs to zero.
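The limitation of linear functions mentioned above can be demonstrated concretely: two stacked linear layers collapse into a single linear map, while inserting a ReLU between them does not. A minimal sketch with hand-picked values:

```python
import numpy as np

x = np.array([1.0, -2.0])
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
W2 = np.array([[1.0, 1.0]])

# Without a non-linearity, two layers collapse to one equivalent linear layer.
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))  # True

# ReLU zeroes the negative component (-2 -> 0), breaking the collapse
# and letting the network model non-linear patterns.
h = np.maximum(W1 @ x, 0.0)
print((W2 @ h).item())  # 1.0, not the purely linear result -1.0
```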
Which loss function is less sensitive to outliers than Mean Squared Error (MSE) in regression tasks?
Explanation: Huber loss combines the robustness of MAE for large errors with the smoothness of MSE for small ones, making it less sensitive to outliers than MSE alone. Binary cross-entropy and softmax loss are generally used for classification, and MSE remains more affected by large errors because it squares them.
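The outlier behavior described above is easy to see numerically: MSE squares a large error, while Huber loss only penalizes it linearly beyond the threshold delta. A minimal sketch, using the standard Huber definition with delta = 1:

```python
import numpy as np

def mse(err):
    return np.mean(err ** 2)

def huber(err, delta=1.0):
    # Quadratic for small errors (like MSE), linear for large ones (like MAE).
    small = np.abs(err) <= delta
    return np.mean(np.where(small,
                            0.5 * err ** 2,
                            delta * (np.abs(err) - 0.5 * delta)))

errors = np.array([0.5, -0.3, 10.0])   # one outlier error of 10
print(mse(errors))    # ~33.45 -- dominated by the outlier's squared error
print(huber(errors))  # ~3.22  -- the outlier contributes only linearly
```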
What is the primary purpose of gradients in training neural networks?
Explanation: Gradients indicate how much each weight should be adjusted to minimize loss during training. They do not determine output accuracy directly, nor do they limit layer count or increase data size.
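The role of gradients can be sketched with gradient descent on a toy one-weight "network": the gradient of the loss with respect to the weight gives the direction and size of each adjustment. The loss function here is an arbitrary example, minimized at w = 3:

```python
def loss(w):
    return (w - 3.0) ** 2        # toy loss, minimized at w = 3

def grad(w):
    return 2.0 * (w - 3.0)       # analytic derivative of the loss

w, lr = 0.0, 0.1                 # initial weight and learning rate
for _ in range(100):
    w -= lr * grad(w)            # update rule: step against the gradient

print(round(w, 4))  # 3.0 -- the weight converges to the loss minimizer
```

Real training computes these gradients for millions of weights at once via backpropagation, but the per-weight update rule is the same.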