Challenge your understanding of DaemonSets and Jobs with scenario-based questions focused on core concepts, scheduling behaviors, and practical configuration use cases. This quiz is designed for users seeking to deepen their knowledge of container orchestration and workload management strategies.
Which of the following best describes how a DaemonSet determines where to schedule its pods within a cluster containing multiple nodes, including worker and master nodes?
Explanation: DaemonSets are designed to ensure that a single copy of a pod runs on each eligible node. Control-plane (master) nodes are typically excluded because they carry taints that DaemonSet pods do not tolerate by default; adding matching tolerations to the pod template allows scheduling there. The other options are incorrect: deploying multiple pods per node is not the default behavior, targeting only nodes labeled 'critical' is too restrictive and not the default, and scheduling at random with skipped nodes would defeat the purpose of a DaemonSet's guaranteed per-node placement.
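To make the toleration behavior concrete, here is a minimal DaemonSet sketch. The name `node-agent` and the busybox image are illustrative placeholders; the toleration shown matches the standard control-plane taint key used by kubeadm clusters.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent               # hypothetical name for illustration
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      # Without this toleration, the control-plane taint keeps
      # DaemonSet pods off master/control-plane nodes.
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: agent
          image: busybox:1.36
          command: ["sh", "-c", "sleep infinity"]
```

Removing the `tolerations` stanza yields the default behavior from the question: one pod per worker node, with control-plane nodes skipped.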
In a scenario where you use a Job to process a batch file, which condition causes a standard Job to be marked as complete?
Explanation: A Job is considered complete when its defined 'completions' count is met, indicating the required number of successful pod runs. The distractors fail on this point: a pod merely starting does not suffice, since starting is not the same as completing; pods in the Pending state have not run at all, so they cannot indicate completion; and node availability does not determine Job completion, only the success status of the pod executions does.
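A minimal Job sketch illustrating the completion condition follows; the name `batch-file-processor` and the busybox command are stand-ins for a real batch workload.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-file-processor     # hypothetical name for illustration
spec:
  completions: 1        # the Job is Complete once 1 pod exits successfully
  backoffLimit: 4       # retry budget before the Job is marked Failed
  template:
    spec:
      restartPolicy: Never   # Job pods must use Never or OnFailure
      containers:
        - name: processor
          image: busybox:1.36
          command: ["sh", "-c", "echo processing batch file && exit 0"]
```

Only a container exiting with status 0 counts toward `completions`; a pod stuck in Pending or a node going offline does not advance the count.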
How does a DaemonSet handle rolling updates when you change the pod template, and what is the key controller behavior?
Explanation: With the default RollingUpdate strategy, a DaemonSet controller replaces old pods with new ones node by node, keeping disruption bounded while converging every node to the updated pod template. Deleting all pods at once would create downtime and is not the default behavior. Ignoring updates is incorrect, as the controller reconciles running pods against the template. Running old and new pods side by side on the same node would duplicate resource usage and is not how DaemonSet updates work.
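The rollout pace is tunable via the update strategy. This is a spec fragment, not a full manifest, showing the fields involved:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate        # the default strategy for DaemonSets
    rollingUpdate:
      maxUnavailable: 1        # replace at most one node's pod at a time
```

Raising `maxUnavailable` speeds the rollout at the cost of more nodes temporarily running without the daemon pod; the alternative `OnDelete` strategy only replaces a pod after it is manually deleted.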
If a Job's spec sets 'parallelism' to 3 and 'completions' to 6, what will the controller's behavior be regarding pod execution?
Explanation: Setting parallelism to 3 allows up to 3 pods to execute simultaneously, and the controller keeps launching replacements until the desired number of completions, 6, is reached. If all 6 pods ran at once, the parallelism limit would be ignored, which is not the case. Completing after only 3 successes is inaccurate, as 6 completions are required. Limiting execution to a single pod at a time contradicts the parallelism setting.
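The scenario from the question can be written out directly; the name `parallel-batch` and the echo command are illustrative placeholders.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-batch       # hypothetical name for illustration
spec:
  parallelism: 3     # at most 3 pods run concurrently
  completions: 6     # the Job finishes after 6 pods succeed
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "echo work item done"]
```

The controller starts 3 pods, and each time one succeeds it launches another, so concurrency never exceeds 3 while the success count climbs to 6.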
Which scenario is most suitable for deploying a DaemonSet rather than a Job in a container cluster?
Explanation: DaemonSets are ideal for continuous workloads that must run on all nodes, such as logging or monitoring agents. The other scenarios involve finite or one-time tasks—handling files, performing migrations, or batch jobs—which are better suited for Jobs that manage completion and termination. DaemonSets are not intended for ad-hoc or transient workloads.
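As a sketch of the logging-agent use case, the DaemonSet below mounts the node's own log directory so that one collector runs per node. The name `log-collector` is a placeholder, and the Fluentd image tag is one published upstream; adapt both to your environment.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector          # hypothetical name for illustration
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: collector
          image: fluent/fluentd:v1.16-1
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log     # each pod reads its own node's logs
```

The `hostPath` volume is what makes this a per-node workload: a Job could not cover newly added nodes, whereas the DaemonSet controller schedules a collector onto each node as it joins the cluster.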