Explore key steps and concepts to move applications from Docker Compose to robust Kubernetes deployments, including persistent storage and modern DevOps patterns.
What is one major advantage of using multi-stage Docker builds when containerizing a Go application?
Explanation: Multi-stage Docker builds let you compile the Go application in one stage and copy only the resulting binary into the final minimal image, reducing the image size and potential vulnerabilities. Using multiple CPUs or auto-scaling relates more to runtime or orchestration, not build processes. Running several containers in a single image is not a Docker or Go best practice.
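As an illustration, a minimal two-stage Dockerfile might look like the following. The module path and `./cmd/server` entry point are hypothetical placeholders, and the distroless base image is one common choice for the final stage, not the only one:

```
# Build stage: compile with the full Go toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# CGO disabled so the static binary runs on a minimal base image
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: copy only the compiled binary
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

The final image contains the binary and little else, so it is far smaller than the build image and has a much smaller attack surface.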
When moving from Docker Compose to Kubernetes, which Kubernetes resource is commonly used to safely provide environment variables or secrets to an application running in a Pod?
Explanation: A ConfigMap is intended to provide non-sensitive configuration data, such as environment variables, to Pods. While Deployments and ReplicaSets manage workload orchestration and scaling, they are not for configuration storage. Pods are basic workload units, not resources for managing configuration. For sensitive data, a Secret would be used instead.
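A sketch of this pattern, with hypothetical names (`api-config`, `mongo-credentials`, the container image), combining a ConfigMap for non-sensitive values with a Secret reference for a password:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  LOG_LEVEL: "info"
  MONGO_HOST: "mongodb"
---
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: example/api:1.0
      # Inject all ConfigMap keys as environment variables
      envFrom:
        - configMapRef:
            name: api-config
      env:
        # Sensitive values come from a Secret, not the ConfigMap
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-credentials
              key: password
```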
Which Kubernetes feature ensures that MongoDB data is preserved even if a Pod restarts or is rescheduled in the cluster?
Explanation: A PersistentVolumeClaim requests storage resources in Kubernetes, ensuring data persists across Pod restarts or rescheduling to another node. ServiceAccounts handle permissions, Liveness Probes monitor container health, and Ingress Controllers manage external routing; none of them preserves storage.
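A minimal sketch of a claim plus a Pod that mounts it at MongoDB's default data directory; the claim name, storage size, and `mongo:7` image tag are illustrative assumptions:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  containers:
    - name: mongodb
      image: mongo:7
      volumeMounts:
        - name: data
          mountPath: /data/db   # MongoDB's default data directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mongo-data
```

In practice a stateful database is usually run as a StatefulSet with `volumeClaimTemplates` rather than a bare Pod, but the claim-and-mount mechanism is the same.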
How do containers within a Kubernetes cluster typically find and communicate with each other using native Kubernetes features?
Explanation: Kubernetes Services provide consistent DNS names and virtual IPs, abstracting away Pod changes and enabling reliable communication. Using direct IPs is unreliable since Pod IPs can change. Manually editing /etc/hosts doesn't scale, and external load balancers are for external traffic, not internal service discovery.
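A Service selecting the MongoDB Pods by label might look like this (the `app: mongodb` label is an assumed convention):

```
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
```

Other Pods in the same namespace can then connect to `mongodb:27017`, or to the fully qualified name `mongodb.<namespace>.svc.cluster.local`, regardless of which Pod IPs currently back the Service.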
Why might a team choose to use an Ingress controller instead of a LoadBalancer for exposing APIs in a cloud environment?
Explanation: Ingress controllers allow multiple services to share a single entry point, reducing the number of cloud-provisioned LoadBalancers and thus saving cost. LoadBalancers can handle HTTP traffic and support some path-based routing, but are often billed per service. Neither is required for purely internal traffic, which Services already handle.
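For example, one Ingress can route two paths on a single host to two backend Services. The host, service names, ports, and the `nginx` ingress class are all hypothetical and depend on which controller the cluster runs:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx   # assumes the NGINX ingress controller is installed
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: users-service
                port:
                  number: 8080
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 8080
```

Both paths share one external entry point, so only the controller itself needs a cloud LoadBalancer instead of one per service.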