The What and Why of Kube-Native Environments

Kubernetes is an orchestrator that manages containerized workloads. Given the central role that Kubernetes has assumed in deploying and managing modern applications, it often does not make sense to think of the tool as a mere orchestrator. Instead, Kubernetes has become the glue that melds together entire application environments and infrastructures.

Those deployments can be said to be Kube-native, meaning that Kubernetes is their foundation. Kube-native architectures change the way application environments are built and managed, while also introducing special security considerations.

Let’s take a look at what all of this means.

What is Kube-native?

Kube-native refers to an application architecture in which Kubernetes serves as the central management tool, not just for orchestrating containers but for tying together and managing all of the components and layers that comprise an application environment, from build to deployment to runtime.

In a Kube-native environment, developers and admins can turn to Kubernetes as a single source of truth for understanding and managing all components of the environment.

Specifically, Kubernetes allows DevOps to pull critical information about environment state and security, including the following (see the sketch after this list):

  • The way deployments are configured, and the privileges available to them.
  • Whether a container or application is running in a test or production mode.
  • How network paths are configured, and which networks or endpoints are exposed to the public Internet.
  • Who “owns” the app, or is responsible for managing it.
  • Which processes are running, and whether any of them may be suspicious from a security perspective.
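
As a rough illustration of what this looks like in practice, the sketch below uses the official Kubernetes Python client to read deployment state from the API server. The "owner" and "env" labels are hypothetical team conventions used here for illustration; they are not built-in Kubernetes fields.

    # Sketch: reading deployment state from the Kubernetes API using the
    # official Python client (pip install kubernetes). The "owner" and "env"
    # labels are hypothetical team conventions, not built-in Kubernetes fields.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when running in a pod
    apps = client.AppsV1Api()

    for dep in apps.list_deployment_for_all_namespaces().items:
        labels = dep.metadata.labels or {}
        print(f"{dep.metadata.namespace}/{dep.metadata.name}")
        print(f"  owner: {labels.get('owner', 'unknown')}")  # who is responsible for the app
        print(f"  stage: {labels.get('env', 'unknown')}")    # test vs. production
        for c in dep.spec.template.spec.containers:
            sc = c.security_context
            privileged = bool(sc and sc.privileged)           # container privilege level
            print(f"  container {c.name}: image={c.image} privileged={privileged}")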

At the same time, Kubernetes also makes it possible to enforce security and configuration policies across an application environment. It provides a central interface for tasks such as the following (sketched briefly after the list):

  • Granting or removing access to resources.
  • Scaling the environment’s size up and down, as demand requires.
  • Isolating containers, pods, networks and clusters.
  • Killing containers and pods.
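
As a minimal sketch of what such enforcement can look like, the example below uses the same Python client to scale a deployment and kill a pod. The namespace, deployment and pod names are placeholders; isolation tasks would typically be handled by applying objects such as NetworkPolicies, which are not shown here.

    # Sketch: enforcement actions through the Kubernetes API. The "prod"
    # namespace and the deployment/pod names are placeholders for illustration.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    # Scale a deployment up as demand requires.
    apps.patch_namespaced_deployment_scale(
        name="web-frontend",
        namespace="prod",
        body={"spec": {"replicas": 5}},
    )

    # Kill a suspicious pod; its controlling Deployment decides whether to replace it.
    core.delete_namespaced_pod(name="web-frontend-abc123", namespace="prod")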

The pros and cons of going Kube-native

Adopting a Kube-native strategy provides a number of opportunities, but it also introduces some special challenges.

Pros

Advantages include, above all, centralized management via a single platform and single source of truth. Given the complexity of modern application environments and delivery chains, the centralization and simplicity of administration that Kubernetes provides in this respect is a powerful benefit.

Kubernetes also offers the advantage of being a tool that is always evolving. As an open source project with a massive number of contributors, Kubernetes grows fast and gains new features constantly, with several minor releases arriving each year. Teams that choose Kubernetes as the basis for their application strategy can therefore look forward to a constant stream of new functionality.

And because Kubernetes is so popular, it’s a tool that most developers and SREs know or are eager to learn. The industry skill base supporting Kubernetes is only going to get stronger.

Challenges 

At the same time, however, Kube-native strategies also introduce some special challenges.

The fact that Kubernetes is always improving is an advantage, but constant change means that IT teams must keep up with each new addition to the platform.

A second key challenge of going Kube-native is that, although Kubernetes offers many rich management and orchestration features, it is not a silver bullet. Its capabilities are limited in realms like security (after all, Kubernetes was not designed as a security tool), and it doesn’t always provide the easiest UI. Plugging those gaps, particularly on the security side, therefore requires extra effort.

For this reason, successfully adopting a Kube-native strategy requires taking care to maintain Kubernetes environments in a way that makes them easy to manage and secure. Specifically, DevOps must ensure:

  • Visibility across the environment, especially in areas (such as the code inside container images) that Kubernetes itself cannot monitor (see the sketch after this list).
  • Parity, to ensure that applications behave the same during testing and production.
  • Immutability, in which teams build new assets and kill existing ones rather than patching in place.
  • Portability, so that applications and clusters can be easily moved between different Kubernetes environments or infrastructures.
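
As one small example of the visibility point above, Kubernetes can at least report which container images are running, so that an external scanner can examine their contents (something Kubernetes itself cannot do). The sketch below, again using the Python client, collects that list; the scanning step itself is left to a separate tool.

    # Sketch: collect the set of container images currently running in the
    # cluster so an external image scanner can inspect their contents.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    images = set()
    for pod in core.list_pod_for_all_namespaces().items:
        for status in pod.status.container_statuses or []:
            images.add(status.image)

    for image in sorted(images):
        print(image)  # hand each image reference off to your scanning tool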

Conclusion

Kubernetes is much more than an orchestrator. It has evolved into the core of modern application architectures, providing visibility and management features that can be used across the environment. But Kubernetes is not perfect, and adopting a Kube-native strategy requires teams to assess the limitations of Kubernetes’s feature set and augment it with security tooling and team processes to address those gaps.

http://www.fixate.io

Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO.

