Serverless Orchestration with Step Functions / Durable Functions Quiz

Explore key principles of serverless orchestration with this quiz on Step Functions and Durable Functions. Assess your understanding of workflows, error handling, patterns, and execution models essential to building reliable serverless applications.

  1. Workflow Coordination

    What is the primary purpose of an orchestration function in a serverless workflow system?

    1. To manage the sequencing and coordination of individual activities within a workflow
    2. To monitor the health of the workflow's external services only
    3. To provide dedicated storage for all workflow data
    4. To execute compute-intensive tasks directly without delegation

    Explanation: The central role of an orchestration function is to coordinate and manage the order and logic of multiple activities or tasks within a workflow. It does not itself perform heavy computation (Option 4), is not limited to monitoring the health of external services (Option 2), and, while it tracks execution state, it does not serve as dedicated storage for workflow data (Option 3).
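    A minimal sketch of this idea in plain Python (not a real Step Functions or Durable Functions SDK; the activity names are illustrative): the orchestrator only sequences the activities and passes data between them, while the activities do the actual work.

    ```python
    # Illustrative sketch: an orchestration function delegates work to
    # activity functions and manages only ordering and data flow.

    def validate_order(order):      # activity: performs the actual work
        return {**order, "valid": True}

    def charge_payment(order):      # activity
        return {**order, "charged": True}

    def ship_order(order):          # activity
        return {**order, "shipped": True}

    def orchestrate(order):
        """Coordinates the sequence; performs no heavy computation itself."""
        order = validate_order(order)
        order = charge_payment(order)
        return ship_order(order)

    result = orchestrate({"id": 42})
    ```

    In the real services, each activity would be a separate function invocation; the orchestrator's job is the same: decide what runs, in what order, with what input.
    
    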

  2. Execution Models

    In serverless orchestration, what makes a workflow execution considered 'durable'?

    1. The workflow runs only while the original function instance remains alive
    2. The orchestration's state persists reliably across system restarts and failures
    3. Execution is automatically retried indefinitely without recording state
    4. The state is managed solely in the function's memory space

    Explanation: A durable workflow ensures its state can be recovered after interruptions, maintaining consistency and reliability. If execution depended solely on the original function instance staying alive (Option 1) or on in-memory state (Option 4), progress could be lost. Indefinite retries without recording state (Option 3) would not guarantee correct or durable processing.
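    The durability property can be sketched in plain Python by checkpointing workflow state to durable storage (here a temp file stands in for the state store a real orchestration service would use) so a fresh process can resume where the last one stopped:

    ```python
    import json
    import os
    import tempfile

    # Illustrative sketch: state is checkpointed outside function memory,
    # so a restart resumes from the last completed step instead of losing
    # in-memory progress.

    STATE_FILE = os.path.join(tempfile.gettempdir(), "workflow_state.json")

    def checkpoint(state):
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)

    def load_state():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return json.load(f)
        return {"step": 0}          # fresh workflow

    state = load_state()
    state["step"] = 1               # completed the first activity
    checkpoint(state)

    # Simulated restart: a new process reloads the checkpoint and carries
    # on from step 1 rather than starting over.
    recovered = load_state()
    ```
    
    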

  3. Error Handling Techniques

    Which approach is most suitable for implementing robust error handling in a serverless orchestrated workflow?

    1. Using retry and compensation logic defined within the workflow specification
    2. Letting the workflow fail silently without notifying operators
    3. Relying only on the default retry policy of the individual activities
    4. Ignoring errors for non-critical activities and proceeding without checks

    Explanation: Defining retry and compensation strategies within the workflow specification keeps failure handling explicit and centralized, helping ensure failures are managed systematically. Default activity retries alone (Option 3) may not address all error cases, while ignoring errors (Option 4) reduces reliability. Allowing silent failures (Option 2) is unsuitable for resilient workflows.
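    A plain-Python sketch of workflow-level retry plus compensation (the activity names are illustrative, and payment is made to fail deliberately so the compensation path runs):

    ```python
    import time

    log = []

    # Activity stubs: reservation succeeds; payment always fails here so
    # the compensation path is exercised.
    def reserve_inventory(order):
        log.append("reserved")

    def release_inventory(order):
        log.append("released")

    def charge_payment(order):
        raise RuntimeError("payment gateway down")

    def run_with_retry(activity, arg, max_attempts=3, delay=0.01):
        """Retry policy defined at the workflow level, not per activity."""
        for attempt in range(1, max_attempts + 1):
            try:
                return activity(arg)
            except RuntimeError:
                if attempt == max_attempts:
                    raise
                time.sleep(delay * attempt)     # simple linear backoff

    def workflow(order):
        completed = []                          # compensations for finished steps
        try:
            run_with_retry(reserve_inventory, order)
            completed.append(release_inventory)
            run_with_retry(charge_payment, order)
        except RuntimeError:
            for compensate in reversed(completed):  # undo in reverse order
                compensate(order)
            return "compensated"
        return "succeeded"

    outcome = workflow({"id": 1})
    ```

    Both Step Functions (via `Retry`/`Catch` fields in the state definition) and Durable Functions (via retry options and orchestrator `try`/`except` blocks) let you express this directly in the workflow definition.
    
    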

  4. Pattern Identification

    Which orchestration pattern is ideal for running four independent activities in parallel and then aggregating their results?

    1. Sequence pattern
    2. Chain pattern
    3. Fan-out/fan-in pattern
    4. Async callback pattern

    Explanation: The fan-out/fan-in pattern is designed to launch multiple tasks concurrently and merge their outputs after completion. The sequence and chain patterns (Options 1 and 2) process tasks one after another, not in parallel. The async callback pattern (Option 4) handles external event callbacks but does not aggregate parallel results.
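    The pattern can be sketched with the standard library's thread pool standing in for parallel activity invocations (the `process_chunk` activity is illustrative):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def process_chunk(n):
        return n * n        # stand-in for an independent activity

    # Fan-out: dispatch four independent activities in parallel.
    with ThreadPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(process_chunk, [1, 2, 3, 4]))

    # Fan-in: aggregate the results once all activities complete.
    total = sum(partials)
    ```

    In Step Functions this maps to a `Map` or `Parallel` state; in Durable Functions, to starting several activity tasks and awaiting them together before aggregating.
    
    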

  5. Long-Running Workflow Triggers

    Suppose a workflow must wait several hours for an external approval before proceeding. Which technique allows serverless orchestration to handle this scenario efficiently?

    1. Storing the workflow state in local memory expecting it will persist
    2. Keeping a function continuously running until the approval arrives
    3. Polling the approval system every few seconds from within the workflow
    4. Utilizing asynchronous event-based triggers to resume the workflow after approval

    Explanation: Asynchronous triggers resume workflows only when the external event occurs, minimizing unnecessary resource use. Keeping a function continuously running (Option 2) would be costly and inefficient, and frequent polling (Option 3) wastes compute resources. Relying on in-memory state (Option 1) risks loss if the process is interrupted.
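    A plain-Python sketch of the idea (the token scheme is illustrative; a real service would persist the continuation durably, as with Step Functions task tokens or Durable Functions external events): the workflow records where it stopped and exits, consuming no compute until the approval event calls `resume`.

    ```python
    # Illustrative sketch: suspended workflows are keyed by a token; an
    # external approval event resumes the matching continuation.

    suspended = {}      # token -> continuation (stands in for durable storage)

    def finish(order, decision):
        return {**order, "approved": decision}

    def start_workflow(order):
        token = f"approval-{order['id']}"
        suspended[token] = lambda decision: finish(order, decision)
        return token    # handed to the external approval system

    def resume(token, decision):
        continuation = suspended.pop(token)   # no compute used while waiting
        return continuation(decision)

    token = start_workflow({"id": 7})
    # ...hours later, the approval event fires:
    result = resume(token, True)
    ```
    
    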