Explore core concepts of Kubernetes logging, troubleshooting techniques, and log management strategies. This quiz helps users strengthen their skills in diagnosing issues and optimizing observability within Kubernetes environments.
When a deployment’s pods are repeatedly crashing, which command best retrieves the container logs for investigation?
Explanation: The 'kubectl logs <pod-name>' command is the most direct way to fetch logs from a specific pod's containers in a Kubernetes cluster (adding '-c <container-name>' when the pod runs more than one container). The other options are incorrect: 'kubectl status' does not exist, 'kubectl list-logs' is not a valid command, and 'kubectl get logs' with a container ID is not a valid way to retrieve logs, since 'logs' is not a resource type that 'kubectl get' understands. Using the correct command ensures efficient troubleshooting and minimizes confusion.
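As a quick illustration (the pod and container names below are placeholders, not part of the quiz):

# Logs from the pod's only container
kubectl logs my-pod
# In a multi-container pod, name the container explicitly
kubectl logs my-pod -c my-container
# Follow the log stream while reproducing the crash
kubectl logs my-pod -f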
How can you access the logs of a previous container instance after a pod restarts due to a crash, without losing the crash details?
Explanation: The '--previous' flag in 'kubectl logs --previous <pod-name>' lets users view logs from a previously terminated container in the pod, capturing messages from before a crash or restart. The other options involve made-up flags such as '--old' and '--history', or unsupported subcommands like 'show history', none of which are recognized by the Kubernetes command-line tool.
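A minimal sketch, assuming a pod named my-pod whose container has already crashed and restarted (the name is illustrative):

# Current container instance
kubectl logs my-pod
# Previously terminated instance, which holds the output from before the crash
kubectl logs --previous my-pod
# -p is the short form of --previous
kubectl logs -p my-pod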
Which method is commonly used to collect and aggregate logs from all pods in a Kubernetes cluster for centralized analysis?
Explanation: Using a sidecar logging agent in each pod is a well-established method for collecting and forwarding logs to a central system for analytics or monitoring. While auto-restart policies can help with application availability, they do not aggregate logs. Viewing logs in a dashboard does not centralize storage, and just assigning labels does not facilitate log collection or aggregation.
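A rough sketch of the sidecar idea, following the streaming-sidecar pattern: a second container tails a log file the application writes to a shared volume, so the output can be picked up and shipped to a central backend. All names and images below are illustrative assumptions, and a real setup would run an agent such as Fluent Bit or Fluentd rather than a bare tail:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox
    command: ["sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}
EOF

Afterwards, 'kubectl logs app-with-log-sidecar -c log-sidecar' shows the stream that a central log system would ingest.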
A pod is stuck in a 'Running' state but does not receive traffic because it is not 'Ready'; what Kubernetes resource usually contains error details explaining this issue?
Explanation: Details about pod readiness and errors are recorded in pod events, accessible with commands like 'kubectl describe pod'. Persistent Volume manifests store storage configurations, service annotations are for metadata, and network policy rules govern network traffic but do not log readiness or lifecycle events. Pod events are the primary source for troubleshooting readiness problems.
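For instance, assuming a pod named my-pod (an illustrative name), the Events section at the end of the describe output usually contains messages such as failed readiness probes:

# Events are listed at the bottom of the output
kubectl describe pod my-pod
# Or query the events for that pod directly
kubectl get events --field-selector involvedObject.name=my-pod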
If logs from all pods on a node stop appearing in the centralized log system, what is the most probable cause?
Explanation: A node-level logging agent is responsible for forwarding logs from all pods on its node. If it crashes or fails, no pod logs from that node will be collected centrally. An individual pod’s logging issues, outdated probes, or ReplicaSet scaling down would only affect specific pods, not logs from all pods on a given node.
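A quick way to check, assuming the agent runs as a DaemonSet; the 'fluent-bit' name and 'logging' namespace are assumptions that vary by installation:

# Is the DaemonSet healthy, and does it have a pod on the affected node?
kubectl get daemonset fluent-bit -n logging
kubectl get pods -n logging -o wide | grep <node-name>
# Check the agent pod on that node for crash loops or errors
kubectl logs -n logging <agent-pod-name>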