Rolling Update Strategy
Which Kubernetes Deployment strategy minimizes downtime during application updates by incrementally replacing old pods with new ones?
- A: Recreate
- B: RollingUpdate
- C: Canary
- D: BlueGreen
- E: The 'Update' strategy
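For context, a minimal sketch of a Deployment that sets the RollingUpdate strategy explicitly (it is also the default). The name `web`, the `app: web` label, and the nginx image are placeholders, not values taken from the quiz.

```yaml
# Minimal Deployment using the RollingUpdate strategy (placeholder name, label, image).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate   # the only other built-in type is Recreate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```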
Readiness Probe Importance
Why is configuring a Readiness Probe essential for achieving zero-downtime deployments with Rolling Updates?
- A: It determines the number of replica sets to create.
- B: It ensures the Deployment uses the correct image.
- C: It verifies that a pod is ready to serve traffic before it's added to the service.
- D: It automatically scales the Deployment based on CPU usage.
- E: It restarts pods that fail liveness checks.
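As a sketch, here is a readiness probe added to the container from the placeholder Deployment above; the `/healthz` path, port, and timings are assumptions, not values referenced in the question.

```yaml
# Fragment of spec.template.spec.containers[0] in the Deployment sketched earlier.
# During a rolling update, a new pod only receives Service traffic once this probe succeeds.
readinessProbe:
  httpGet:
    path: /healthz       # assumed health endpoint
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 3
```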
maxSurge Parameter
What does the 'maxSurge' parameter in a Kubernetes Deployment's RollingUpdate strategy control?
- A: The maximum number of pods that can be unavailable during the update.
- B: The maximum time a new pod can take to become ready.
- C: The maximum number of pods that can be created beyond the desired number of replicas.
- D: The maximum number of old pods that can remain running during the update.
- E: The maximum percentage of old pods to keep running during the update.
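A sketch of where `maxSurge` sits in the manifest; the value of 1 is only an example.

```yaml
# Fragment of the Deployment's spec.strategy.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1   # at most 1 pod above spec.replicas may exist during the update
```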
maxUnavailable Parameter
What does the 'maxUnavailable' parameter in a Kubernetes Deployment's RollingUpdate strategy specify?
- A: The maximum number of new pods that must be available before scaling down old pods.
- B: The maximum percentage of time a pod is allowed to be unavailable.
- C: The maximum number of pods that can be unavailable during the update.
- D: The maximum number of unhealthy pods before the update is paused.
- E: The maximum number of pods that can be in the terminating state.
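Combining the two knobs, a hedged example: with the 4 replicas from the earlier placeholder Deployment, the settings below allow at most 5 pods in total and never fewer than 4 available, which approximates a zero-downtime rollout.

```yaml
# Fragment of the Deployment's spec.strategy.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1          # up to replicas + 1 pods may run at once
    maxUnavailable: 0    # no old pod is removed before its replacement is Ready
```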
Rolling Update Progress
How can you monitor the progress of a Rolling Update in Kubernetes?
- A: By examining the Deployment's event logs using 'kubectl get events'.
- B: By directly checking the status of each pod with 'kubectl get pods'.
- C: By using 'kubectl rollout status deployment/<deployment-name>'.
- D: All of the above.
- E: By tailing the logs of the kube-controller-manager.
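A few illustrative commands, assuming the placeholder Deployment `web` and label `app: web` from the earlier sketches:

```bash
# Blocks until the rollout completes (or fails), reporting progress as it goes.
kubectl rollout status deployment/web

# Watch individual pods being replaced.
kubectl get pods -l app=web -w

# Events and ReplicaSet scaling decisions for the Deployment.
kubectl describe deployment web
```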
Rolling Back Updates
What command is used to roll back a failed Rolling Update to the previous revision in Kubernetes?
- A: kubectl rollout undo deployment/<deployment-name>
- B: kubectl revert deployment/u003Cdeployment-nameu003E
- C: kubectl rollback deployment/u003Cdeployment-nameu003E
- D: kubectl update deployment/u003Cdeployment-nameu003E --from-revision=PREVIOUS
- E: kubectl apply -f <previous-deployment-yaml>
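A sketch of the rollback workflow, again using the placeholder Deployment name `web`:

```bash
# List the recorded revisions of the Deployment.
kubectl rollout history deployment/web

# Roll back to the immediately previous revision.
kubectl rollout undo deployment/web

# Or roll back to a specific revision number.
kubectl rollout undo deployment/web --to-revision=2
```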
Liveness Probe Impact
How does a misconfigured Liveness Probe affect Rolling Updates?
- A: It can cause unnecessary pod restarts, slowing down the update.
- B: It prevents the deployment from scaling up.
- C: It has no impact on Rolling Updates.
- D: It directly influences the 'maxSurge' value.
- E: It causes the update to immediately fail.
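For illustration, a liveness probe whose timings matter during a rollout; the endpoint and values below are assumptions. If `initialDelaySeconds` is shorter than the application's real startup time, the kubelet restarts new pods before they ever become Ready, which slows or stalls the update.

```yaml
# Fragment of spec.template.spec.containers[0] in the placeholder Deployment.
# Setting initialDelaySeconds below the true startup time causes repeated restarts.
livenessProbe:
  httpGet:
    path: /healthz       # assumed endpoint
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 10
  failureThreshold: 3
```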
Service Selectors During Update
What happens to the Service's selector during a Rolling Update?
- A: The Service selector is updated to point to the new pods only after the update is complete.
- B: The Service selector remains unchanged; it matches both old and new pods (they carry the same labels), and pods are added to the Service's endpoints as they become ready.
- C: The Service is temporarily unavailable during the update.
- D: A new Service is created for the updated pods, and the old Service is deleted.
- E: The Service selector is manually updated using 'kubectl edit service' during the Rolling Update.
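A sketch of a Service in front of the placeholder Deployment. Its selector matches the pod label that both old and new pods carry, so nothing about the Service changes during a rolling update; only pods that pass their readiness probe are included in its endpoints.

```yaml
# The selector is not touched during a rolling update: old and new pods
# share the app=web label, and readiness gates which of them receive traffic.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```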
Controlling Update Speed
How can you fine-tune the speed and aggressiveness of a Rolling Update in Kubernetes?
- A: By adjusting the 'minReadySeconds' parameter in the Deployment spec.
- B: By modifying the 'maxSurge' and 'maxUnavailable' parameters in the Deployment spec.
- C: By using the 'kubectl scale' command to manually scale the Deployment during the update.
- D: Both A and B.
- E: The update speed cannot be controlled.
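A hedged example combining the tuning knobs from this question; the values are illustrative only.

```yaml
# Fragment of the Deployment's spec.
minReadySeconds: 10      # a new pod must stay Ready this long before it counts as available
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%        # how many extra pods may be created at once
    maxUnavailable: 0    # how many existing pods may be down at once
```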
Interrupted Rolling Updates
What happens if a Rolling Update is interrupted (e.g., due to a node failure)?
- A: The Rolling Update is automatically rolled back to the previous version.
- B: The Rolling Update pauses, and Kubernetes attempts to resume it when possible.
- C: The Deployment enters a failed state and must be manually restarted.
- D: All pods are immediately terminated and need to be re-created.
- E: The Deployment proceeds as if nothing happened.
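A couple of commands for inspecting a rollout that appears stuck, again with the placeholder name `web`. Note that `progressDeadlineSeconds` (600 by default) only marks the Deployment's Progressing condition as failed; the controller keeps trying to reconcile.

```bash
# Reports whether the rollout is still progressing, waiting on pods,
# or has exceeded its progress deadline.
kubectl rollout status deployment/web

# Inspect the Deployment's conditions (Available, Progressing) directly.
kubectl get deployment web -o jsonpath='{.status.conditions}'
```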