Understanding Perceptrons
Which of the following best describes what a single-layer perceptron does with input features?
- It computes a weighted sum followed by an activation function.
- It stores previous output values for comparison.
- It generates random outputs regardless of input.
- It multiplies the inputs without any additional processing.
- It only sorts the input values.
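For reference, a minimal sketch of the weighted-sum-plus-activation computation in NumPy; the weights, bias, and input values are made up for illustration:

```python
import numpy as np

# A single-layer perceptron: weighted sum of inputs plus a bias,
# passed through a step activation.
def perceptron(x, w, b):
    z = np.dot(w, x) + b          # weighted sum
    return 1 if z >= 0 else 0     # step activation

x = np.array([0.5, -1.0])         # example input features
w = np.array([0.8, 0.3])          # example weights
b = 0.1                           # example bias
print(perceptron(x, w, b))        # prints 1 for these values (z = 0.2)
```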
Activation Functions
When using the perceptron algorithm, what is the typical purpose of an activation function such as the step function?
- To convert the output into a binary value like 0 or 1.
- To randomly shuffle the input features.
- To expand the number of input layers.
- To calculate the loss between inputs.
- To deactivate some neurons in the network.
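A minimal sketch of a step activation, assuming the common convention of thresholding at zero:

```python
def step(z):
    # Step activation: squashes any real-valued sum into a binary label.
    return 1 if z >= 0 else 0

print(step(2.7))   # 1
print(step(-0.3))  # 0
```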
Learning Rule
In a perceptron, what typically happens to the weights during the learning process when a prediction is wrong?
- Weights are updated to reduce future errors.
- Weights are deleted from the model.
- Weights remain unchanged.
- Weights are randomized every time.
- Weights multiply by the output value.
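A sketch of the classic perceptron update rule, w ← w + η(y − ŷ)x; the learning rate and example values are made up:

```python
import numpy as np

def perceptron_update(w, b, x, y, lr=0.1):
    # Classic perceptron rule: if the prediction is wrong, nudge the
    # weights toward the correct answer; if it is right, leave them alone.
    y_hat = 1 if np.dot(w, x) + b >= 0 else 0
    error = y - y_hat                 # 0 when correct, +/-1 when wrong
    w = w + lr * error * x
    b = b + lr * error
    return w, b

# This example is misclassified (predicts 0, target is 1), so both
# weights and bias move toward classifying it correctly.
w, b = np.array([-0.5, -0.5]), 0.0
w, b = perceptron_update(w, b, x=np.array([1.0, 1.0]), y=1)
print(w, b)   # [-0.4 -0.4] 0.1
```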
Limitations of Perceptrons
Which type of problem, exemplified by the classic XOR logic gate, cannot be solved by a single-layer perceptron?
- Non-linearly separable problems.
- Linearly separable problems.
- Problems with only one input.
- Problems with numeric outputs.
- Problems that include negative numbers.
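A quick demonstration, using made-up hyperparameters, that the perceptron rule never fits the XOR truth table no matter how long it trains:

```python
import numpy as np

# XOR truth table: no straight line can separate the 1s from the 0s,
# so the perceptron rule never converges on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b = np.zeros(2), 0.0
for _ in range(1000):                       # plenty of epochs
    for xi, yi in zip(X, y):
        y_hat = 1 if xi @ w + b >= 0 else 0
        w += 0.1 * (yi - y_hat) * xi
        b += 0.1 * (yi - y_hat)

preds = [1 if xi @ w + b >= 0 else 0 for xi in X]
print(preds, "vs", list(y))                 # never matches on all four rows
```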
Introducing MLP
Which feature distinguishes a multi-layer perceptron (MLP) from a single-layer perceptron?
- It contains one or more hidden layers.
- It only has output and input layers.
- It never uses an activation function.
- It always has fewer neurons than a perceptron.
- It does not require inputs to work.
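A forward pass through a tiny MLP with one hidden layer, which is the component a single-layer perceptron lacks; all weight values here are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny MLP: input -> hidden layer (3 neurons) -> output (1 neuron).
x  = np.array([0.5, -1.0])                             # example inputs
W1 = np.array([[0.2, -0.4], [0.7, 0.1], [-0.3, 0.5]])  # hidden weights
b1 = np.zeros(3)
W2 = np.array([[0.6, -0.2, 0.9]])                      # output weights
b2 = np.zeros(1)

h = sigmoid(W1 @ x + b1)   # hidden-layer activations
y = sigmoid(W2 @ h + b2)   # final output
print(y)
```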
Backpropagation
In a multi-layer perceptron, which algorithm is commonly used to train the network by adjusting the weights in all layers?
- Backpropagation
- Backproliferation
- Backprofile
- Forward chaining
- Output splitting
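A sketch of one backpropagation step on a tiny 2-2-1 network with sigmoid activations and squared error; the random seed, shapes, and learning rate are illustrative choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)
x, t, lr = np.array([1.0, 0.0]), np.array([1.0]), 0.5

# Forward pass
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)

# Backward pass: propagate the output error back through every layer.
delta2 = (y - t) * y * (1 - y)           # output-layer error term
delta1 = (W2.T @ delta2) * h * (1 - h)   # hidden-layer error term

W2 -= lr * np.outer(delta2, h); b2 -= lr * delta2
W1 -= lr * np.outer(delta1, x); b1 -= lr * delta1

y_new = sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)
print(y.item(), "->", y_new.item())      # output nudged toward the target 1.0
```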
MLP Output
If an MLP receives two inputs and has a single output neuron, what type of problem could it be used for?
- Binary classification
- Text summarization
- Image compression
- Signal encryption
- Database indexing
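A sketch of that setup: two inputs feed a single sigmoid output neuron, and the score is thresholded at 0.5 to yield a binary class; the weights are made up:

```python
import numpy as np

def predict(x, w, b):
    # The sigmoid output acts as a probability-like score in [0, 1].
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return int(p >= 0.5), p

label, score = predict(np.array([0.9, 0.4]), np.array([1.2, -0.7]), b=0.0)
print(label, round(score, 3))   # 1 0.69
```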
Hidden Layers Role
What is the main benefit of hidden layers in an MLP when learning complex patterns?
- They allow the network to capture non-linear relationships.
- They reduce the input size to zero.
- They prevent the network from learning.
- They only speed up calculations but don’t affect outputs.
- They make outputs random.
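One way to see this: the hand-picked weights below let a two-neuron hidden layer compute XOR, a non-linear pattern no single layer can capture. The hidden neurons compute OR and AND, and the output combines them:

```python
import numpy as np

step = lambda z: (z >= 0).astype(int)

# Weights chosen by hand for illustration, not learned.
W1 = np.array([[1, 1], [1, 1]]); b1 = np.array([-0.5, -1.5])  # OR, AND
W2 = np.array([1, -2]);          b2 = -0.5                    # OR AND NOT AND

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(W1 @ np.array(x) + b1)
    print(x, "->", step(W2 @ h + b2))   # 0, 1, 1, 0
```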
MLP Example
Suppose an MLP is trained to recognize cats and dogs in images. What kind of output would a two-output-neuron MLP provide for a given image?
- A score for each class, such as [cat: 0.8, dog: 0.2]
- A sequence of random numbers.
- Just a binary code with no meaning.
- An error message every time.
- Only the input image repeated.
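A sketch of how a softmax over two output neurons yields such per-class scores; the raw logits here are made-up values standing in for what earlier layers would produce:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([1.4, 0.0])   # hypothetical raw network outputs
scores = softmax(logits)
print({label: float(s) for label, s in zip(["cat", "dog"], scores.round(2))})
# {'cat': 0.8, 'dog': 0.2}
```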
Choosing the Right Network
Why would you choose a multi-layer perceptron instead of a single-layer perceptron for most real-world problems?
- Because MLPs can solve complex, non-linear problems that single-layer perceptrons cannot.
- Because MLPs always use less memory.
- Because single-layer perceptrons are faster for complex tasks.
- Because single-layer perceptrons learn non-linear patterns efficiently.
- Because MLPs do not require any data to function.