Taking a Security-First Approach to Kubernetes


Traditionally, security was largely a break/fix affair. IT teams deployed tools that helped them detect breaches as they were happening, or find security vulnerabilities within code after it had been written.

That approach works in slow-moving, conventional environments. But a Kube-native world demands a different approach. It requires making security a priority from day one, not after your clusters and delivery chains are already running.

In this article, we look at why that is, and at best practices for making security a key consideration from the very start in Kubernetes-based environments.

Why Kubernetes demands a security-first approach

By its nature, Kubernetes differs in key respects from other types of infrastructure and application-management technologies. Compared to an environment built using virtual machines or bare-metal servers, a Kubernetes environment:

  • Consists of many moving parts – container images, containers, pods, storage volumes and more. All of these parts not only create a great deal of complexity, but also mean that there are more variables (and more room for error in configuring those variables).
  • Is highly dynamic and fast-moving – One of the core design goals of Kubernetes is to auto-scale and load-balance application deployments on a continual basis by starting and stopping containers, redistributing workloads across pods and so on. There is no “normal” in a Kubernetes environment.
  • Is difficult to retrofit – Kubernetes is designed to streamline container deployment and management. It’s not designed to make it easy to roll back a deployment or tweak a configuration once an application is live. Thus, remediation becomes more difficult after the fact.

For all of these reasons, Kubernetes requires IT teams to think about security from the very start – when they begin designing their architecture and deployment. And they need to keep thinking about security at every stage of the process, and secure all of the layers and tools that make up a Kubernetes environment.

What’s more, taking a security-first approach is important not just for designing and deploying applications in a secure manner. It also matters because it facilitates collaboration among the various stakeholders who have a role to play in securing Kube-native workloads – from developers and the IT Ops team to engineers who oversee security, auditing and IT governance.

Enabling Kubernetes security without slowing release velocity

It’s easy to talk about securing Kubernetes. But how do you actually take a security-first approach without getting in the way of your team’s ability to roll out new application releases quickly and continuously?

Part of the answer lies in making security audits an integral part of the release pipeline. Security tests should be performed automatically and continuously, just like integration tests and QA tests. When you integrate security into your pipeline, you get thorough coverage without delays.
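To make the idea concrete, here is a minimal sketch of what such an automated pipeline gate might look like. The severity levels, finding format and threshold are assumptions for illustration, not the output of any particular scanner:

```python
# Hypothetical CI gate: fail the build if a scan report contains any
# findings at or above a chosen severity threshold. The report format
# (a list of dicts with "id" and "severity") is assumed for the example.
SEVERITY_ORDER = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def gate(findings, threshold="HIGH"):
    """Return True if the build may proceed, False if it should fail."""
    limit = SEVERITY_ORDER[threshold]
    blocking = [f for f in findings
                if SEVERITY_ORDER.get(f.get("severity", "LOW"), 0) >= limit]
    for f in blocking:
        print(f"BLOCKING: {f['id']} ({f['severity']})")
    return not blocking

# A HIGH finding blocks the release; a MEDIUM finding alone would not.
report = [
    {"id": "CVE-2021-0001", "severity": "MEDIUM"},
    {"id": "CVE-2021-0002", "severity": "HIGH"},
]
passed = gate(report)
```

Running a check like this on every commit, alongside your integration and QA tests, is what keeps security from becoming a manual step that slows releases.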

Also critical is establishing clear governance requirements that define policies, such as how containers and pods should be isolated and which types of privileges containers can claim. Then, use automated tooling to scan for non-compliance with those policies within your Kubernetes configuration.
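As a rough sketch of what such a compliance scan does, the snippet below checks a pod spec (represented as a plain dict) against a few illustrative policy rules – the rules themselves are examples, not a real admission controller or a complete policy set:

```python
# Minimal sketch of a policy check over a dict-form pod spec.
# The three rules here are illustrative examples of governance policies:
# no host networking, no privileged containers, no containers running as root.
def violations(pod_spec):
    """Return a list of human-readable policy violations."""
    found = []
    if pod_spec.get("hostNetwork"):
        found.append("pod uses host networking")
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            found.append(f"container {c['name']} runs privileged")
        if sc.get("runAsUser") == 0:
            found.append(f"container {c['name']} runs as root (UID 0)")
    return found

pod = {
    "hostNetwork": False,
    "containers": [
        {"name": "web", "securityContext": {"runAsUser": 1000}},
        {"name": "sidecar", "securityContext": {"privileged": True}},
    ],
}
for v in violations(pod):
    print(v)
```

In practice you would express these rules in your policy tooling of choice rather than hand-rolled code, but the principle is the same: the policies are written down once, and every configuration is checked against them automatically.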

Similarly, automated tooling should allow you to scan all of the layers of your Kubernetes environment – from the application code and dependencies inside containers, to internal and external-facing networks, to the container runtime – for signs of vulnerabilities or breaches. By securing all of the layers of your environment, you maximize your ability to identify security problems that were not detected by pre-deployment tests or controlled by your governance policies.

And when a problem is detected, whether it affects pre- or post-deployment code, you need all stakeholders to be able to communicate effectively to solve it. That’s why it’s critical to have a continuous feedback loop that includes IT Ops, developers and security engineers, as well as tooling in place for them to communicate quickly and clearly.

Best practices for deploying Kube-native apps securely

Beyond the points described above about securing Kubernetes without slowing release velocity, there are several other best practices that IT organizations can follow to take a security-first approach to Kubernetes.

One is to know what Kubernetes itself can and cannot do when it comes to security. While Kubernetes offers several useful built-in security features, such as pod security policies and role-based access control, there are many other things that Kubernetes cannot do – such as scan containers for vulnerabilities or monitor for signs of a runtime breach. Your IT team must know Kubernetes’s limits, and adopt third-party tools that can fill the gaps.

You should also be wise about any third-party dependencies that you incorporate into containers. While the ability to use third-party code easily is part of what makes containers so powerful, code written outside of your organization is code that you can never fully trust. Make sure you scan upstream code for known vulnerabilities, and have a solution in place for remediating them quickly.
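At its core, scanning upstream code means cross-referencing what's inside your images against a known-vulnerability feed. The sketch below shows that idea with a tiny hand-written feed; a real scanner would pull a full database and extract the package list from the image itself:

```python
# Illustrative sketch: match an image's package inventory against a
# known-vulnerability feed. The feed here is a tiny hand-written sample
# keyed by (package name, version); real tools use full CVE databases.
KNOWN_VULNS = {
    ("openssl", "1.1.1k"): ["CVE-2021-3711"],
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
}

def scan(packages):
    """Map each vulnerable (name, version) pair to its list of CVE IDs."""
    return {pkg: KNOWN_VULNS[pkg] for pkg in packages if pkg in KNOWN_VULNS}

# Example inventory: one vulnerable package, one clean one.
image_packages = [("openssl", "1.1.1k"), ("curl", "7.79.1")]
hits = scan(image_packages)
```

The remediation side matters just as much as detection: once a scan flags a dependency, you need a pipeline that can rebuild and redeploy the affected images quickly.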

Finally, educate yourself, too, on any nuances that may apply to your Kubernetes deployment. Managed Kubernetes on AWS differs in some respects from GKE on Google Cloud, for example; and both have different security implications from an on-premises Kubernetes deployment. Be aware of which access-control and host security tools are available on the infrastructure you use to host Kubernetes, and use those tools accordingly.


Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO.

