From Docker Compose to Kubernetes: A Complete Practical Guide Quiz

Explore key steps and concepts to move applications from Docker Compose to robust Kubernetes deployments, including persistent storage and modern DevOps patterns.

  1. Containerization Basics

    What is one major advantage of using multi-stage Docker builds when containerizing a Go application?

    1. It ensures the application uses multiple CPUs simultaneously.
    2. It auto-scales the application based on user traffic.
    3. It keeps the final image smaller and contains only the necessary binary.
    4. It allows running several containers in one image.

    Explanation: Multi-stage Docker builds let you compile the Go application in one stage and copy only the resulting binary into a minimal final image, reducing image size and attack surface. Parallel CPU use and auto-scaling are runtime and orchestration concerns, not build-time ones. And a single image does not "run several containers": each container is launched from an image, so packing multiple services into one image contradicts the one-process-per-container convention.
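
    A minimal sketch of such a two-stage build might look like the following (module layout, image tags, and binary name are illustrative assumptions, not from the quiz itself):

    ```dockerfile
    # Build stage: full Go toolchain, used only to compile the binary
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    # CGO_ENABLED=0 yields a statically linked binary that can run
    # in a minimal base image with no libc
    RUN CGO_ENABLED=0 go build -o /app .

    # Final stage: only the compiled binary is copied over; the
    # compiler, sources, and build cache never reach the final image
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]
    ```

    The `COPY --from=build` line is what makes the pattern work: everything in the build stage is discarded except the artifacts explicitly copied forward.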

  2. Configuration Management in Kubernetes

    When moving from Docker Compose to Kubernetes, which Kubernetes resource is commonly used to safely provide environment variables or secrets to an application running in a Pod?

    1. ConfigMap
    2. Deployment
    3. Pod
    4. ReplicaSet

    Explanation: A ConfigMap is intended to provide non-sensitive configuration data, such as environment variables, to Pods. While Deployments and ReplicaSets manage workload orchestration and scaling, they are not for configuration storage. Pods are basic workload units, not resources for managing configuration. For sensitive data, a Secret would be used instead.
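
    As a sketch of the pattern, a ConfigMap and a Secret might be injected into a Pod together via `envFrom` (resource names, keys, and the image are hypothetical):

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: api-config          # non-sensitive settings
    data:
      LOG_LEVEL: "info"
      MONGO_HOST: "mongodb"
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: api-secrets         # sensitive values
    type: Opaque
    stringData:
      MONGO_PASSWORD: "change-me"
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: api
    spec:
      containers:
        - name: api
          image: example/api:1.0
          envFrom:
            - configMapRef:
                name: api-config    # each key becomes an env var
            - secretRef:
                name: api-secrets
    ```

    This mirrors the `environment:` and `env_file:` keys of a Compose file, but splits configuration cleanly into non-sensitive (ConfigMap) and sensitive (Secret) resources.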

  3. Persistent Storage

    Which Kubernetes feature ensures that MongoDB data is preserved even if a Pod restarts or is rescheduled in the cluster?

    1. Liveness Probe
    2. ServiceAccount
    3. Ingress Controller
    4. PersistentVolumeClaim

    Explanation: A PersistentVolumeClaim requests storage resources in Kubernetes, ensuring data persists across Pod restarts or rescheduling. ServiceAccounts handle permissions, Liveness Probes monitor container health, and Ingress Controllers manage external routing, not storage preservation.
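
    A minimal sketch of the idea, assuming a default StorageClass is available (claim name, size, and data path are illustrative; `/data/db` is MongoDB's default data directory):

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mongo-data
    spec:
      accessModes: ["ReadWriteOnce"]   # one node mounts it read-write
      resources:
        requests:
          storage: 5Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:7
          volumeMounts:
            - name: data
              mountPath: /data/db      # MongoDB's data directory
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mongo-data      # binds the Pod to the claim above
    ```

    Because the claim (and the PersistentVolume bound to it) outlives the Pod, a restarted or rescheduled MongoDB Pod reattaches to the same data, much like a named volume in Docker Compose.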

  4. Service Discovery

    How do containers within a Kubernetes cluster typically find and communicate with each other using native Kubernetes features?

    1. Through Services that provide stable DNS names and virtual IPs
    2. Through external load balancers provided by cloud vendors
    3. Via manually updated /etc/hosts files in every container
    4. By using direct IP addresses assigned on Pod creation

    Explanation: Kubernetes Services provide consistent DNS names and virtual IPs, abstracting away Pod changes and enabling reliable communication. Using direct IPs is unreliable since Pod IPs can change. Manually editing /etc/hosts doesn't scale, and external load balancers are for external traffic, not internal service discovery.
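
    As a sketch, a Service selecting MongoDB Pods by label gives other workloads a stable in-cluster address (the name and labels are hypothetical):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: mongodb
    spec:
      selector:
        app: mongodb        # matches Pods carrying this label
      ports:
        - port: 27017       # port the Service exposes
          targetPort: 27017 # port on the Pods
    ```

    Other Pods in the same namespace can then connect to `mongodb:27017`; cluster DNS also resolves the fully qualified form `mongodb.<namespace>.svc.cluster.local`, regardless of which Pod IPs currently back the Service.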

  5. Ingress and Exposure

    Why might a team choose to use an Ingress controller instead of a LoadBalancer for exposing APIs in a cloud environment?

    1. LoadBalancers provide no path-based routing capabilities.
    2. Ingress controllers reduce costs by avoiding per-service cloud load balancer charges.
    3. Ingress controllers are required for all internal networking.
    4. LoadBalancers cannot handle HTTP traffic.

    Explanation: Ingress controllers allow multiple services to share a single entry point, reducing the number of cloud-provisioned LoadBalancers and thus saving costs, since each Service of type LoadBalancer typically provisions (and bills for) its own cloud load balancer. LoadBalancer Services can certainly carry HTTP traffic, but they operate at layer 4; HTTP path-based routing is precisely what an Ingress adds on top. Ingress controllers are not mandatory for internal traffic.
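
    A sketch of this fan-out pattern, routing two backend Services through one entry point (hostnames, paths, and Service names are hypothetical; `ingressClassName: nginx` assumes an NGINX Ingress controller is installed):

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: api-ingress
    spec:
      ingressClassName: nginx
      rules:
        - host: api.example.com
          http:
            paths:
              - path: /orders          # /orders/* goes to one Service
                pathType: Prefix
                backend:
                  service:
                    name: orders-api
                    port:
                      number: 80
              - path: /users           # /users/* goes to another
                pathType: Prefix
                backend:
                  service:
                    name: users-api
                    port:
                      number: 80
    ```

    Both APIs share the single load balancer fronting the Ingress controller, instead of each Service provisioning its own.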