Explore the fundamentals of neural networks in deep learning, including their structure, components, and applications in prediction tasks.
Which statement best describes a simple neural network used for predicting house prices from input features?
Explanation: A simple neural network comprises layers of interconnected neurons that process inputs to generate predictions. Neural networks do not merely memorize training data; they learn patterns that generalize to unseen inputs. They can produce both positive and negative outputs, though activation functions like ReLU are often used when non-negative outputs are needed. The functions computed by the hidden nodes are learned automatically during training rather than set manually.
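The idea above can be sketched in a few lines of plain Python. This is a minimal, hypothetical one-hidden-layer network for house-price prediction; the weights and feature values are made-up illustrative numbers, not learned parameters.

```python
def relu(x):
    # ReLU clamps negative values to zero
    return max(0.0, x)

def predict_price(features, hidden_weights, output_weights):
    # Each hidden neuron computes a weighted sum of ALL inputs, then applies ReLU.
    hidden = [relu(sum(w * f for w, f in zip(ws, features)))
              for ws in hidden_weights]
    # The output neuron combines the hidden activations into a single price.
    return sum(w * h for w, h in zip(output_weights, hidden))

features = [3.0, 2.0]            # hypothetical inputs: [size in 1000 sq ft, bedrooms]
hidden_weights = [[50.0, 10.0],  # two hidden neurons, each connected to every input
                  [20.0, 5.0]]
output_weights = [1.0, 0.5]
print(predict_price(features, hidden_weights, output_weights))  # → 205.0
```

In a real network these weights would be learned from many example houses rather than chosen by hand.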
What is the main purpose of using an activation function such as ReLU in a neural network?
Explanation: Activation functions like ReLU introduce non-linearity, allowing networks to learn complex patterns; without them, stacked layers would collapse into a single linear transformation. Computing means or forcing all outputs to zero is not what ReLU does. Neural networks still require training data to learn; activation functions do not replace this need.
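A quick sketch of what ReLU actually computes, in plain Python:

```python
def relu(x):
    # ReLU(x) = max(0, x): passes positive values through unchanged
    # and clamps negative values to zero.
    return max(0.0, x)

print([relu(x) for x in [-2.0, -0.5, 0.0, 1.5]])  # → [0.0, 0.0, 0.0, 1.5]
```

The "bend" at zero is the non-linearity: a network of only weighted sums is linear no matter how many layers it has, while inserting ReLU between layers lets the network approximate curved, complex functions.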
In a densely connected neural network, how are the input features connected to the neurons in the first hidden layer?
Explanation: In a densely connected (fully connected) neural network, each input is connected to all neurons in the next layer. Connecting each input to only one neuron, or to neurons at random, does not constitute dense connectivity. No input feature is prioritized simply because of its raw value; the learned weights determine each feature's influence.
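Dense connectivity can be shown with a small sketch: every output neuron gets one weight per input, so a layer with 3 inputs and 2 neurons carries a 2x3 weight matrix. The numbers below are illustrative, not learned values.

```python
def dense_layer(inputs, weights, biases):
    # Each neuron receives EVERY input: one weight per (neuron, input) pair,
    # so the weight matrix has shape (num_neurons, num_inputs).
    return [sum(w * x for w, x in zip(neuron_weights, inputs)) + b
            for neuron_weights, b in zip(weights, biases)]

inputs = [1.0, 2.0, 3.0]
weights = [[0.1, 0.2, 0.3],   # neuron 0 sees all 3 inputs
           [0.4, 0.5, 0.6]]   # neuron 1 also sees all 3 inputs
biases = [0.0, 1.0]
print(dense_layer(inputs, weights, biases))
```

If inputs were instead connected to single neurons or wired randomly, the weight matrix would be mostly empty, which is exactly what "dense" rules out.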
When predicting the price of a house with a neural network, what typically represents the input and output?
Explanation: The input typically includes features describing the house (such as size, bedrooms), while the output is the predicted price. The other options reverse this relationship or mention irrelevant features for house price prediction.
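As a small sketch of this input/output pairing, a training set is just a list of (features, target price) examples; all numbers here are hypothetical.

```python
# Hypothetical training examples: inputs are house features,
# the target output is the known sale price (in $1000s).
examples = [
    # ([size_sqft, bedrooms], price)
    ([1500.0, 3.0], 320.0),
    ([2400.0, 4.0], 515.0),
]

for features, price in examples:
    print(f"input features={features} -> target price={price}")
```

The network sees only the feature vector as input; the price is the value it is trained to reproduce, never something it consumes as input.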
How does a neural network typically learn to make accurate predictions?
Explanation: A neural network learns through training on many examples, adjusting its weights to minimize prediction error. Training on a single data point or fixing the weights in advance prevents effective learning, and assigning random outputs is not learning at all.
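The weight-adjustment loop can be sketched with the simplest possible model: a single weight `w` fit by gradient descent on squared error. This is an illustrative stand-in; real networks apply the same idea to millions of weights via backpropagation.

```python
def train(data, lr=0.01, epochs=100):
    # Fit y = w * x by repeatedly nudging w against the error gradient.
    w = 0.0  # start from an uninformed weight
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of (pred - y)^2 w.r.t. w
            w -= lr * grad             # adjust weight to reduce the error
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying relation: y = 2x
w = train(data)
print(round(w, 3))  # → 2.0
```

After repeated passes over the examples, `w` converges to the value that minimizes prediction error, which is exactly what "learning" means here.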