Logging for Kubernetes



Logs aren’t going anywhere, and good log management is the foundation on which every successful application support and operations team does its job, day in and day out.

Even in the world of cloud-native, container-based applications, logs are still at the heart of troubleshooting and tracing the root cause of everything from performance degradation to crashed applications. There is a reason that even the vendors at the core of the Application Performance Management (APM) market have all added centralized log management capabilities to their products.

In this article, we take a look at the basics of logging for a container-based environment, with a focus on Kubernetes logs.

What does Kubernetes log?

When applications run behind an abstraction layer, whether traditional middleware or containers managed by an orchestration engine (e.g., Kubernetes), three categories of logs are generated. Depending on your role, you may not have access to all of them, because of where they are stored. All three categories can be managed at the Kubernetes cluster level.

The first category of logs is at the pure infrastructure level. In Kubernetes-speak, these are node logs. They are captured at the operating-system level and track hardware events, audited items like logins and sudo usage, and system daemons and runtimes such as cron and runc.

The next category is the logs generated by the abstraction layers themselves. In non-cloud terms, this is middleware such as WebSphere Application Server node agents; in cloud-native, microservices-based environments, these logs come from the orchestration and management layer, i.e., Kubernetes itself.

The last category is the logs the application stack generates and outputs via stdout, stderr, and popular frameworks like SLF4J. These are the logs that development groups want to see and use. Moving to containers running on Kubernetes makes working with these logs harder: containers are ephemeral and don’t stick around after they stop, so any logs that weren’t exported are lost. The one exception is when a container is simply restarted by the kubelet on its node, which keeps the most recently terminated container intact unless the pod is evicted. This is a nice feature, but not reliable enough on its own.
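Because the container runtime captures whatever a process writes to stdout and stderr, the simplest way to make application logs friendly to a cluster-level collector is to emit one structured JSON object per line on stdout rather than writing to files inside the container. Here is a minimal sketch in Python; the logger name "payments" and the field names are illustrative choices, not a required schema:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, the shape
    most cluster-level log collectors parse most easily."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

# Write to stdout, not a file: the container runtime captures stdout,
# so the logs survive the container even though its filesystem does not.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("payments")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order processed")
```

Frameworks like SLF4J (via a JSON encoder) can produce the same line-per-event shape on the JVM; the point is the destination (stdout) and the structure, not the language.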

Best way to deal with logs from Kubernetes

Centralized logging is the best way to work with logs in any organization running more than one application or system. This is especially true for container-based workloads, which are widely distributed across one or more nodes in one or more regions.

Ideally, your organization already has centralized logging in place, and it has native support for Kubernetes. If that isn’t the case, there is a solid case for implementing a more capable logging solution: start with Kubernetes, then migrate other logs to the new solution as time and budget allow.

Whether you are new to centralized logging or not, the three key criteria for a solid log consolidation solution are the ability to ingest from the sources that matter to you, to search what was ingested across those sources, and to use the repository as an archive.

Of those three key features, the most varied is the ability to ingest logs from the sources that matter to you; in this case, that means Kubernetes. As described above, Kubernetes has multiple categories of logs, and each category can be collected in multiple ways, from modifying each pod’s configuration to attaching a logging sidecar to every deployed pod. By far the most effective way to implement Kubernetes logging is cluster-level logging. Solutions like LogDNA have this available as a pre-built configuration that can be implemented on any Kubernetes cluster (on any cloud) in as few as two commands and includes every category of logs.
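The sidecar pattern mentioned above can be sketched in a few lines: the application writes to a file on a volume shared with a sidecar container, and the sidecar tails that file to its own stdout, where the runtime (and any cluster-level agent) can capture it. This is an illustrative sketch of the idea, not any vendor's implementation:

```python
import sys
import time

def tail_to_stdout(path, poll_interval=1.0, follow=True):
    """Stream lines from a log file to stdout -- the essence of a
    logging sidecar: the app writes to a file on a shared volume,
    and this process re-emits each line where the container runtime
    can capture it."""
    with open(path, "r") as f:
        while True:
            line = f.readline()
            if line:
                sys.stdout.write(line)
                sys.stdout.flush()
            elif follow:
                time.sleep(poll_interval)  # wait for the app to append more
            else:
                return  # reached EOF and not following; stop

if __name__ == "__main__" and len(sys.argv) > 1:
    tail_to_stdout(sys.argv[1])
```

A node-level agent (the usual building block of cluster-level logging) does the same job once per node instead of once per pod, which is why it scales better than a sidecar per deployment.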


Centralized log management is a core capability of any organization successfully operating in today’s technology landscape of clouds, containers, and cloud-native apps. Using a single tool that can consolidate logs from all components across your technology portfolio will prove to be a solid investment, time and time again.

Vince Power is an Enterprise Architect with a focus on digital transformation built with cloud-enabled technologies. He has extensive experience working with Agile development organizations delivering their applications and services using DevOps principles, including security controls, identity management, and test automation. You can find @vincepower on Twitter. Vince is a regular contributor at Fixate IO.

