DaemonSets & Jobs Fundamentals Quiz

Challenge your understanding of DaemonSets and Jobs with scenario-based questions focused on core concepts, scheduling behaviors, and practical configuration use cases. This quiz is designed for users seeking to deepen their knowledge of container orchestration and workload management strategies.

  1. DaemonSet Pod Placement

    Which of the following best describes how a DaemonSet determines where to schedule its pods within a cluster containing multiple nodes, including worker and master nodes?

    1. A DaemonSet schedules one pod per node by default, typically excluding master nodes unless configured otherwise.
    2. A DaemonSet chooses nodes at random and may skip some nodes entirely.
    3. A DaemonSet deploys multiple pods on each node to maximize resource usage.
    4. A DaemonSet schedules pods only on nodes with the 'critical' label, regardless of their state.

    Explanation: DaemonSets ensure that a single copy of a pod runs on each eligible node; master (control-plane) nodes are typically excluded because of their NoSchedule taints, unless matching tolerations or node selectors are configured. The other options are incorrect: deploying multiple pods per node is not DaemonSet behavior, targeting only nodes labeled 'critical' would require an explicit node selector and is not the default, and random scheduling with skipped nodes would defeat the DaemonSet's guarantee of per-node placement.
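
    The behavior described above can be sketched in a manifest like the following. The names and image are illustrative assumptions, not from the quiz; the toleration shown is the standard way to let DaemonSet pods onto tainted control-plane nodes.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent            # hypothetical name for illustration
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      # Without this toleration, the control-plane NoSchedule taint keeps
      # the DaemonSet pod off master/control-plane nodes (the default).
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: agent
          image: registry.example.com/node-agent:1.0   # placeholder image
```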

  2. Job Completion Criteria

    In a scenario where you use a Job to process a batch file, which condition causes a standard Job to be marked as complete?

    1. All pods reach a Pending state simultaneously.
    2. The Job controller detects that no more worker nodes are available.
    3. At least one pod starts execution, regardless of success or failure.
    4. The specified number of successful pod completions is reached.

    Explanation: A Job is marked complete when its specified 'completions' count of successful pod terminations is reached. A pod merely starting does not suffice, since starting is not the same as finishing successfully. Pods stuck in a Pending state have not run at all, so they cannot indicate completion. Worker-node availability affects scheduling, not completion; a Job's status is determined solely by successful pod executions.
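
    A minimal sketch of the batch-file scenario, with illustrative names and a placeholder command: the Job below is marked Complete once one pod exits successfully, because 'completions' defaults to (or is set to) 1.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-file-processor   # hypothetical name
spec:
  completions: 1               # Job is Complete after this many successful pods
  backoffLimit: 4              # retries allowed before the Job is marked Failed
  template:
    spec:
      restartPolicy: Never     # Jobs require Never or OnFailure
      containers:
        - name: processor
          image: busybox:1.36
          command: ["sh", "-c", "echo processing batch file && exit 0"]
```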

  3. DaemonSet Rolling Updates

    How does a DaemonSet handle rolling updates when you change the pod template, and what is the key controller behavior?

    1. It duplicates pods, running both the old and new versions on each node for redundancy.
    2. It gradually replaces existing pods on each node with new ones based on the updated template.
    3. It instantly deletes all old pods across nodes before creating the new pods simultaneously.
    4. It ignores updates to the pod specification, leaving existing pods unchanged.

    Explanation: With the default RollingUpdate strategy, a DaemonSet replaces old pods node by node with pods built from the updated template, minimizing disruption. Deleting every pod at once would cause downtime and is not the default behavior. Ignoring updates is incorrect, since the DaemonSet controller reconciles running pods against the current template. Running old and new pods side by side would double resource usage and is not a DaemonSet feature.
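
    The rolling-update behavior can be made explicit in the spec. This sketch uses hypothetical names; 'RollingUpdate' is the default strategy type, and 'maxUnavailable: 1' is what produces the one-node-at-a-time replacement described above.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent             # hypothetical name
spec:
  updateStrategy:
    type: RollingUpdate        # the default; OnDelete is the alternative
    rollingUpdate:
      maxUnavailable: 1        # replace pods roughly one node at a time
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: registry.example.com/node-agent:1.1   # updated placeholder image
```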

  4. Job Parallelism Control

    If a Job's spec sets 'parallelism' to 3 and 'completions' to 6, what will the controller's behavior be regarding pod execution?

    1. Six pods will be created and executed at once without any restrictions.
    2. At most 3 pods will run concurrently until 6 total completions are achieved.
    3. Only a single pod can run at any given time regardless of completions needed.
    4. The Job will finish immediately after the first three pods succeed.

    Explanation: Setting parallelism to 3 allows at most 3 pods to execute simultaneously, and the controller keeps launching replacements until the desired number of completions (6) is reached. Running all 6 pods at once would ignore the parallelism limit, which is not how the controller behaves. Finishing after only 3 successes is inaccurate because 6 completions are required. Limiting execution to a single pod at a time contradicts the parallelism setting.
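
    The exact spec from the question can be written as follows (names and command are illustrative assumptions): at most 3 worker pods run concurrently, and the Job completes after 6 pods have succeeded.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-batch         # hypothetical name
spec:
  parallelism: 3               # at most 3 pods run at the same time
  completions: 6               # controller stops once 6 pods have succeeded
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "echo work item done"]
```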

  5. DaemonSet vs Job Use Case

    Which scenario is most suitable for deploying a DaemonSet rather than a Job in a container cluster?

    1. Launching a temporary database migration task.
    2. Processing a set of files in parallel, completing when all are processed.
    3. Running an ad-hoc image processing batch that ends after completion.
    4. Ensuring that every node runs a log collection agent continuously.

    Explanation: DaemonSets are ideal for continuous workloads that must run on all nodes, such as logging or monitoring agents. The other scenarios involve finite or one-time tasks—handling files, performing migrations, or batch jobs—which are better suited for Jobs that manage completion and termination. DaemonSets are not intended for ad-hoc or transient workloads.
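
    As a sketch of the log-collection use case, assuming a Fluentd-style agent image and hypothetical names: the DaemonSet keeps one collector on every node indefinitely, mounting the node's own log directory, which a finite Job could not provide.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector          # hypothetical name
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: collector
          image: fluent/fluentd:v1.16-1   # example log-agent image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log     # read node-local logs on every node
```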