Challenge your understanding of deep learning model building and training using the Keras API, with focused questions on layers, Sequential and Functional models, and core training concepts. Perfect for learners seeking to strengthen their grasp of foundational Keras features and best practices in neural network development.
Which of the following is a core requirement when creating a custom layer using the Keras API?
Explanation: To create a custom layer in the Keras API, you must inherit from the Layer class and implement its methods, typically creating weights in build and defining the forward pass in call. Defining an optimizer is only needed during model compilation, not in a layer. Calling compile inside the layer is incorrect; compilation belongs at the model level. Returning a loss during initialization is not required; layers handle computations, not losses.
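A minimal sketch of such a subclass (the MyDense name and its details are illustrative, not part of the question):

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer

class MyDense(Layer):
    """Illustrative custom layer: a basic dense transform."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Weights are created once the input shape is known.
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform", trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer="zeros", trainable=True)

    def call(self, inputs):
        # Forward pass only; no compile() or optimizer logic belongs here.
        return tf.matmul(inputs, self.w) + self.b
```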
In the Keras API Sequential model, what is a valid scenario for its use?
Explanation: The Sequential model is specifically designed for situations where layers are arranged in a linear stack, each feeding directly into the next. Multiple input branches require the Functional API, not the Sequential model. Shared layers and loading only pre-trained weights are advanced use cases that the basic Sequential setup does not support.
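For instance, a plain linear stack like the following (layer sizes are arbitrary) is exactly what Sequential is built for:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Each layer feeds directly into the next -- one input, one output.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
```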
What is the primary role of a Dense layer in a neural network built with the Keras API?
Explanation: A Dense layer performs a fully connected transformation, connecting every input unit to every output unit through learnable weights. Dropout, not Dense, is the layer used to reduce overfitting. Convolutional layers handle spatial data, while normalization layers stabilize and accelerate training.
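A quick illustration of the fully connected transform (the shapes are chosen arbitrarily):

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((8, 16))            # batch of 8 samples, 16 features each
dense = layers.Dense(32, activation="relu")
y = dense(x)                             # y = relu(x @ kernel + bias)
print(y.shape)                           # (8, 32); kernel has shape (16, 32)
```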
Which advantage does the Keras API Functional model offer over the Sequential model?
Explanation: The Functional API is more flexible, supporting models with multiple inputs, outputs, and shared layers. Strictly linear stacking is a feature of the Sequential model, not of the Functional API. The Functional API can be more verbose for simple models. Both APIs allow custom loss functions.
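A sketch of a two-input model that Sequential cannot express (the input names and sizes are made up for illustration):

```python
from tensorflow import keras
from tensorflow.keras import layers

image_in = keras.Input(shape=(64,), name="image_features")
meta_in = keras.Input(shape=(8,), name="metadata")

x = layers.Dense(32, activation="relu")(image_in)
merged = layers.concatenate([x, meta_in])    # join the two branches
out = layers.Dense(1, activation="sigmoid")(merged)

model = keras.Model(inputs=[image_in, meta_in], outputs=out)
```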
What must you specify during model compilation with the Keras API?
Explanation: You need to specify the loss function, optimizer, and metrics during model compilation to prepare the model for training. Activations are defined with individual layers, not during compilation. Input shapes are set in the architecture, and the number of hidden layers is decided during model definition.
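A typical compile call, shown on a toy model (the specific optimizer, loss, and metric are illustrative choices):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(16,)),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer="adam",                # how weights are updated
    loss="binary_crossentropy",      # the quantity training minimizes
    metrics=["accuracy"],            # reported during training and evaluation
)
```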
Why is the Input layer essential when building models using the Keras Functional API?
Explanation: The Input layer is crucial in the Functional API because it specifies the shape and data type of the input, serving as the entry point for the model. Learning rate schedules are handled separately via callbacks. The Input layer does not initialize output layer weights or determine the number of training epochs.
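A minimal sketch of the Input layer anchoring a Functional model (the shape and dtype are illustrative):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Input declares the expected shape and data type; layers build on top of it.
inputs = keras.Input(shape=(20,), dtype="float32")
outputs = layers.Dense(1, activation="sigmoid")(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
```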
Which activation function would you commonly use for binary classification outputs?
Explanation: Sigmoid activation is standard for binary outputs, squashing results into the range (0, 1) so they can be interpreted as probabilities. ReLU is more often used in hidden layers. Softmax serves multi-class outputs, and tanh outputs values between -1 and 1, making it unsuitable for binary classification outputs.
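You can see the squashing behavior directly (the example values are chosen for illustration):

```python
import tensorflow as tf

# Sigmoid maps any real value into (0, 1), readable as a probability.
logits = tf.constant([-3.0, 0.0, 3.0])
print(tf.sigmoid(logits).numpy())   # approx [0.047, 0.5, 0.953]
```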
Which Keras layer helps prevent overfitting by randomly disabling a fraction of its input units during training?
Explanation: The Dropout layer randomly sets input units to zero during training, effectively regularizing the model and helping prevent overfitting. Flatten is used for reshaping tensors, BatchNormalization is for stabilizing and accelerating training, and 'Densee' is a misspelling of the Dense layer and does not exist.
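For example, a Dropout layer slotted between Dense layers (the 0.5 rate is an arbitrary illustrative choice):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(100,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),    # zeroes ~50% of inputs, during training only
    layers.Dense(1, activation="sigmoid"),
])
```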
Which method is used to train a model in the Keras API after it has been compiled?
Explanation: The fit method is designed for training models with data, running multiple epochs and updating weights. The evaluate method assesses a trained model's performance, predict generates predictions on new data, and compile sets up the model before training.
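A sketch of the training step with synthetic stand-in data (all sizes and hyperparameters are illustrative):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(16,)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Random stand-in data; real features and labels would go here.
x_train = np.random.rand(200, 16)
y_train = np.random.randint(0, 2, size=(200, 1))

history = model.fit(x_train, y_train, epochs=5, batch_size=32,
                    validation_split=0.2)
```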
What is one benefit of using callbacks during model training with the Keras API?
Explanation: Callbacks such as EarlyStopping can interrupt training if a metric like validation accuracy stops improving, preventing overfitting and saving resources. Callbacks cannot change the architecture after model creation, correct data labels automatically, or directly switch hardware usage.
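As a sketch, EarlyStopping can be wired in like this (x_train and y_train stand for data prepared as in the previous example; the patience value is illustrative):

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop once validation loss fails to improve for 3 consecutive epochs,
# and roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)

model.fit(x_train, y_train, epochs=100,
          validation_split=0.2, callbacks=[early_stop])
```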