Explore the fundamentals of deep learning and neural networks with clear, accessible explanations focused on concepts rather than equations.
Which statement best describes the relationship between a neural network and deep learning?
Explanation: The correct answer highlights that a neural network is the structural framework, and deep learning is the data-intensive training process that makes that framework powerful. Deep learning is not limited to single-layer networks (option B). Neural networks are designed to analyze data, not generate it (option C). Deep learning and neural networks are closely connected, not unrelated (option D).
What is the primary function of 'weights' in a neural network when making predictions?
Explanation: Weights control how much each input feature contributes to the prediction. Randomly shuffling data (option B) is unrelated to weights; training speed (option C) is governed by the learning rate; and storing outputs (option D) is not a function of weights.
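The role of weights described above can be sketched as a weighted sum, the core of a single neuron's prediction. The feature values, weights, and bias below are invented purely for illustration:

```python
# Hypothetical example: weights scale each input feature's contribution.
# All numbers here are made up for illustration.
features = [0.5, 1.2, -0.3]   # e.g. three scaled input features
weights = [0.8, -1.5, 0.2]    # learned importance of each feature
bias = 0.1

# The raw prediction is a weighted sum: a larger |weight| means
# that feature has more influence on the outcome.
weighted_sum = sum(w * x for w, x in zip(weights, features)) + bias
print(weighted_sum)
```

Training adjusts these weight values so that features that matter more for the prediction end up with larger magnitudes.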
When predicting whether a customer will default on a loan, what form does the neural network's output typically take?
Explanation: The network outputs a probability between 0 and 1 representing the likelihood of a specific outcome, such as loan default. Returning the input features (option B) does not provide a prediction. Providing a fixed label (option C) ignores the probabilistic nature of the output, and option D is incorrect because outputs are meaningful probabilities, not random numbers.
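A minimal sketch of how a network turns a raw score into a probability: a sigmoid function squashes any real number into the range (0, 1). The score value below is an invented example, not output from a real model:

```python
import math

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical raw score for one loan applicant (illustrative value).
score = 1.2
default_probability = sigmoid(score)
print(f"Estimated default probability: {default_probability:.2f}")
```

Because the output is a probability, the lender can choose a decision threshold (say, flag applicants above 0.5) rather than receiving only a hard yes/no label.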
What is the main purpose of the backpropagation step in training a neural network?
Explanation: Backpropagation identifies how much each weight in the network contributed to the error, allowing precise updates that reduce future mistakes. Random data selection (option B) and output-layer changes (option C) are unrelated to this step. Option D misrepresents the function, as backpropagation does not convert numbers into categories.
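The idea that backpropagation traces the error back to each weight can be sketched with a single linear neuron and squared error. The input, weight, and target values are invented for illustration:

```python
# Minimal backpropagation sketch for one linear neuron, squared error.
# All values are invented for illustration.
x, w, target = 2.0, 0.5, 3.0

# Forward pass: make a prediction and measure the error.
prediction = w * x                   # 1.0
error = (prediction - target) ** 2   # 4.0

# Backward pass: the chain rule attributes the error to the weight.
d_error_d_pred = 2 * (prediction - target)   # how error changes with prediction
d_pred_d_w = x                               # how prediction changes with w
d_error_d_w = d_error_d_pred * d_pred_d_w    # -8.0: raising w would cut the error
print(d_error_d_w)
```

The sign and size of the resulting gradient tell the training loop which direction, and roughly how far, to move that weight.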
How does the learning rate affect the weight updates during neural network training?
Explanation: The learning rate sets the step size for weight updates. If it is too large, updates may overshoot the optimum; if it is too small, learning becomes slow. Ignoring inputs (option B) and setting the number of layers (option C) are separate concerns, and option D is unrelated to training.