Kubernetes Logging – Best Practices


Maintaining any system hinges on understanding what is happening and noticing when its behavior diverges from the ideal. Kubernetes clusters are no exception, and any team that hopes to keep them running must learn to understand Kubernetes logs in order to gain insight into their containers’ performance and failures. However, because Kubernetes itself is complex, so are its logs.

A typical configuration may include hundreds of containers, each of them short-lived, and every cluster comprises multiple layers that produce different types of logs. Given the complex and dynamic nature of clusters, keeping track of everything can be challenging. Let’s take a look at the structure of Kubernetes logs and some of the best ways to keep on top of them.

Logging with Kubernetes

Logs are generated by Kubernetes at the container, node, and cluster levels. For containers, the easiest method is to write to the standard output and standard error streams. The container engine then forwards them to the logging driver configured in Kubernetes. To access these logs, you can use kubectl logs. In case a container has crashed, you can also retrieve logs from its previous instantiation with the --previous flag.
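For example, against a running cluster (the pod and container names here are placeholders):

```shell
# Print the logs of a single pod
kubectl logs my-pod

# Logs of a specific container in a multi-container pod
kubectl logs my-pod -c my-container

# Logs from the previous instance of a crashed container
kubectl logs my-pod --previous

# Follow the log stream, like `tail -f`
kubectl logs -f my-pod
```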

You can also write logs to a log file and then use a “sidecar container,” which runs in the same pod as the application container but processes the logs separately. The sidecar reads logs from a file, a socket, or journald and streams them to its own stdout and stderr, letting you take advantage of the kubelet and the logging agent that already run on each node. This approach allows you to separate several log streams from different parts of your application, some of which may lack support for writing to the standard output and error streams.
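A minimal sketch of the sidecar pattern might look like the following pod manifest (all names and paths here are illustrative): the app writes to a file on a shared volume, and the sidecar tails that file to its own stdout.

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar   # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    # The app writes to a file instead of stdout
    command: ['sh', '-c', 'while true; do echo "$(date) app log" >> /var/log/app/app.log; sleep 5; done']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox
    # The sidecar tails the file to its own stdout, where the kubelet picks it up
    command: ['sh', '-c', 'touch /var/log/app/app.log && tail -f /var/log/app/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}
EOF
```

Once the pod is running, `kubectl logs app-with-log-sidecar -c log-sidecar` shows the application’s file-based logs through the normal Kubernetes logging path.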

Note that with kubectl logs, you cannot easily view logs from many pods simultaneously. For anything beyond the most basic debugging, it is helpful to have a way to quickly check the logs of multiple pods at once. A handy tool for this is Kubetail, which runs kubectl logs on multiple pods and combines the results into a single stream.
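Kubetail is invoked with a pod-name prefix; for example (the names and namespace below are placeholders):

```shell
# Tail logs from all pods whose names start with "my-app"
kubetail my-app

# Restrict to a namespace and a specific container
kubetail my-app -n production -c my-container
```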

Logs at the node level vary in format and location depending on the host operating system. On a Linux server running systemd, you can access the kubelet’s logs with the command “journalctl -u kubelet”. On systems without systemd, the logs are written to files in the “/var/log” directory.

One thing you will need to manage is log rotation; otherwise, logs will consume all available storage on the node. Kubernetes does not rotate logs itself, leaving that to the container runtime or to node-level tools such as logrotate. It does guarantee that when a pod is evicted from a node, its containers are removed along with their logs, which helps keep log storage from expanding forever.
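On nodes that use Docker’s json-file logging driver, one common approach (an illustration, not the only option) is to cap log size at the runtime level with the real max-size and max-file options:

```shell
# /etc/docker/daemon.json on each node: rotate container logs at the runtime level
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
sudo systemctl restart docker
```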

At the cluster level, each of the components that make up a Kubernetes cluster produces logs. Components that run directly on the operating system write to journald, while components running in containers write to files under the /var/log directory. Kubernetes does not provide a native solution for cluster-level logging, so this is where third-party log-management tools become truly essential.
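In many setups (kubeadm clusters, for instance) the control-plane components run as pods in the kube-system namespace, so their logs are reachable with the same kubectl commands; components managed by systemd are read with journalctl. The pod name below is a placeholder, since it varies by node:

```shell
# Control-plane components running as pods
kubectl -n kube-system get pods
kubectl -n kube-system logs kube-apiserver-master-node   # name varies per cluster

# Components running directly on the host under systemd
journalctl -u kubelet
```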

Tools for Kubernetes Logging

A logging system should provide four services to facilitate monitoring your clusters.

  • Aggregation: The system should pull logs from different nodes into one location.
  • Storage: The logs should be stored in a way that is both easily searchable and scalable.
  • Visualization: The data from logs should be presented in a way that makes it easy to see the performance of your clusters.
  • Alerts: When your clusters diverge from the desired state, you want to be alerted immediately so you can respond to the incident.

A popular tech stack for meeting these needs is the EFK stack, consisting of Elasticsearch, Fluentd, and Kibana. Fluentd is an open-source log aggregator that collects logs from your Kubernetes cluster and loads them into your chosen data store. Elasticsearch stores the logs from Fluentd and provides a scalable, RESTful search and analytics engine for accessing them. Kibana provides a user interface to query and visualize logs, and can be configured to deliver event-triggered alerts. This combination of tools effectively covers all of the important services one needs to monitor Kubernetes.
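Once logs are flowing, you can sanity-check the pipeline directly against Elasticsearch’s REST API. The host below is a placeholder, and the logstash-* index prefix assumes Fluentd’s default logstash_format naming:

```shell
# List indices to confirm Fluentd is creating them
curl -s 'http://elasticsearch.example:9200/_cat/indices/logstash-*?v'

# Search for error lines across the daily indices
curl -s 'http://elasticsearch.example:9200/logstash-*/_search?q=error&size=5'
```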

If you need to retain copies of old logs for compliance, you will also need inexpensive, long-term storage for log archival. You can store old logs in an Amazon S3 bucket, or in S3 Glacier if you would rarely need to access them. Alternatively, if you do not need to keep old logs, you should have a system for deleting logs according to your retention policy. You can configure Elasticsearch to create daily indices for your logs and then delete those older than n days.
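With daily indices, enforcing an n-day retention policy can be as simple as deleting the index whose date suffix has aged out. A sketch, assuming GNU date, a placeholder Elasticsearch host, and Fluentd’s default logstash-YYYY.MM.DD index naming:

```shell
#!/bin/sh
# Delete the daily index that has aged past the retention window.
RETENTION_DAYS=30
# Compute the index name for the day that just fell out of retention
OLD_INDEX="logstash-$(date -d "${RETENTION_DAYS} days ago" +%Y.%m.%d)"
echo "Deleting index: ${OLD_INDEX}"
# Uncomment to run against a real cluster:
# curl -X DELETE "http://elasticsearch.example:9200/${OLD_INDEX}"
```

Run daily (e.g., from cron), this keeps exactly the last RETENTION_DAYS of indices.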

Conclusion

As you can see, logging in Kubernetes can be a complex process, but effective logging is essential to monitoring your application and keeping it running. Understand the basic logging features available in plain Kubernetes, but don’t hesitate to look beyond them to services that bring the information stored in those logs to light. Data is only useful if it is understood, so being able to gather all your logs into a single, easily understandable dashboard that turns the stream of data into meaningful metrics is worth the complexity of setting it all up.


Alison Forster is a Software Engineer working at 3M Health Information Services. She lives with her family in Albany, NY along with eight chickens.

