Technologies are developed with the goal of solving problems. But in most cases, new technologies introduce new problems of their own.
This is certainly true for three of the most important new technologies to disrupt the IT landscape over the past decade: microservices, containers and Kubernetes. While microservices architectures and containerized applications solve many problems, they also give rise to special challenges in the realm of management, security and compliance — challenges that Kubernetes itself is not designed to address fully.
Using these new technologies effectively requires solving these challenges while, at the same time, leveraging the scalability and agility that containers and microservices inject into application deployments.
In this post, we take a look at the specific challenges that organizations must address in order to use microservices and containers successfully, then we discuss best practices for resolving those problems.
Security, compliance and containerized applications
The core reason why microservices and containers introduce new challenges for IT teams is simple: they add more layers and moving pieces to application deployments. With more complexity comes more difficulty in meeting security and compliance requirements.
Specifically, the layers in a typical microservices application deployment today include:
- Container images, which contain application code.
- A container runtime, such as Docker, which serves to spin up individual containers.
- Kubernetes, which manages all of the containers.
- The underlying infrastructure that hosts the container images, runtime and Kubernetes orchestrator.
Each of these layers is subject to its own set of potential vulnerabilities and security risks. Container images could contain malicious code. The container runtime or Kubernetes could suffer from vulnerabilities that enable privilege escalation or unauthorized access to resources that form part of the container cluster. The host infrastructure could be compromised at the operating system level or through vulnerabilities or misconfigurations in a cloud provider’s IAM framework.
To make matters even more complicated, enforcing a strong security architecture within a containerized environment presents its own set of requirements, including:
- Isolating containers and pods from one another.
- Isolating hosts from containers and pods.
- Managing the various networks that compose a container cluster, and ensuring that networks are isolated from one another.
- Securing any persistent data storage on which the container cluster relies.
- Meeting compliance requirements in an environment that is highly dynamic and constantly changing. Audits that take weeks or months to complete, and that follow a “waterfall” schedule, can hardly keep up.
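To illustrate the isolation requirements above, Kubernetes expresses network isolation through NetworkPolicy objects. The sketch below restricts inbound traffic to pods in one namespace; the namespace and label names are hypothetical, chosen only for illustration.

```yaml
# Deny all ingress to pods in the "payments" namespace except
# traffic from pods labeled app=frontend in that same namespace.
# (Namespace and label names are illustrative.)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-frontend-only
  namespace: payments
spec:
  podSelector: {}           # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```

Note that NetworkPolicy objects only take effect when the cluster runs a network plugin that enforces them, which is itself an example of a layer that must be secured deliberately.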
Compared to traditional architectures built on virtual machines, these requirements make the task of IT teams considerably more complicated. The hypervisor isolates virtual machines from one another by default, and VM-based deployments rarely require isolation between different networks, application components or storage resources. But these are challenges that IT teams must meet when deploying containerized applications.
Containers create new security opportunities
If you’ve read this far, you might think that containers and microservices are fundamentally difficult to secure, and that they simply require much more effort to manage properly.
Fortunately, however, that’s not really true. While containers certainly create some new security challenges, they also empower IT teams with new tools and strategies to help meet those challenges.
Declarative configuration: Secure from the start
One such strategy is the ability to take a declarative approach to environment configuration. In a Kubernetes cluster, virtually everything can be configured via straightforward JSON or YAML files. IT teams can therefore create a set of files that define how the cluster should behave, how various components should be isolated and so on, then deploy them to build the environment. In this respect, it is easier to integrate security right into a Kubernetes environment’s configuration than it would be to build the environment first, then secure it after the fact.
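For instance, a pod's security settings can be declared alongside the rest of its definition, so the workload is hardened from its very first start. A minimal sketch, with hypothetical pod and image names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app        # illustrative name
spec:
  securityContext:
    runAsNonRoot: true      # refuse to start containers running as root
  containers:
    - name: app
      image: registry.example.com/app:1.4.2   # pin an exact tag, never :latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
```

Because these settings live in the same file as the workload definition, they can be reviewed, versioned and audited like any other code.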
Kubernetes and containers also make it easy to leverage immutable infrastructure, meaning (in the case of Kubernetes) that new containers and pods are deployed by completely destroying their predecessors rather than applying updates to running components. Immutable infrastructure allows teams to vet new software releases more thoroughly prior to deployment, and reduces the chance that an unforeseen configuration problem could introduce security risks into a production environment.
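Immutable replacement is, in fact, the default behavior of a Kubernetes Deployment: changing the pod template (for example, bumping the image tag) causes Kubernetes to create new pods and destroy the old ones rather than patching running containers. A sketch with illustrative names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                  # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  strategy:
    type: RollingUpdate      # replace pods gradually; never patch in place
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.4.2   # change this tag to roll out
```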
Larger attack surface does not mean less security
Thus, it’s not the case that containers, microservices and Kubernetes make security and compliance inherently harder. They simply increase the attack surface that IT teams need to manage, due to the various layers and components that they introduce. But at the same time, these technologies make it possible to take advantage of new tools and strategies for helping to secure that larger attack surface.
Best practices for securing modern architectures
Beyond the specific strategies outlined above, there are several key approaches that organizations can take to making containerized, Kubernetes-based environments as secure as possible.
Build security in early
A Kubernetes cluster is too complex to secure effectively once it is up and running. Instead, teams must design the cluster to be secure before it is even started. As noted above, declarative configuration helps to achieve this goal.
Make security part of the code
Along similar lines, IT teams must make security part of the code that they use to deploy and manage containerized applications. In other words, simply scanning for vulnerabilities or signs of a breach in a live environment is not enough; the code that controls the environment, and that powers the applications running in it, must itself be secure, and it must be audited automatically and continuously to detect insecure configurations.
With this approach, IT teams can migrate from a break/fix approach to security (wherein they find vulnerabilities only once they have become a problem) to one that makes environments secure by default.
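As a minimal sketch of what auditing configuration as code can look like, the following Python function flags a few insecure settings in a container spec. The checked field names mirror Kubernetes conventions, but the function itself is hypothetical and not part of any real tool:

```python
def audit_container(spec: dict) -> list[str]:
    """Return findings for one container spec (Kubernetes-style dict)."""
    findings = []
    name = spec.get("name", "?")
    sc = spec.get("securityContext", {})
    if sc.get("privileged"):
        findings.append(f"{name}: runs privileged")
    if sc.get("allowPrivilegeEscalation", True):
        findings.append(f"{name}: privilege escalation not disabled")
    image = spec.get("image", "")
    if image.endswith(":latest") or ":" not in image:
        findings.append(f"{name}: image tag is not pinned")
    return findings

# Example: one insecure container spec and one hardened one.
bad = {"name": "web", "image": "nginx:latest",
       "securityContext": {"privileged": True}}
good = {"name": "app", "image": "registry.example.com/app:1.4.2",
        "securityContext": {"allowPrivilegeEscalation": False}}
```

A check like this can run in a CI pipeline on every change, so insecure configurations are caught before they ever reach a live cluster.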
Secure all the layers
Because so many layers compose Kubernetes-based environments, and each layer requires its own types of security audits and monitoring, IT teams must take measures to secure each layer individually. Unlike simpler forms of infrastructure, containers and microservices can’t be secured effectively by focusing on just one layer.
Know the limitations of native tools
Kubernetes comes with some security features which IT teams can and should use to help achieve isolation and mitigate security risks. However, Kubernetes itself is not a security tool. It’s critical to be aware of the limitations of the security functionality it provides, and to know which additional tools can be deployed to fill in the gaps.
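Role-based access control (RBAC) is one of those built-in features. A Role like the sketch below grants only read access to pods in a single namespace; anything beyond access control, such as image scanning, runtime threat detection or policy auditing, falls to additional tooling. Names are illustrative.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader           # illustrative name
  namespace: payments
rules:
  - apiGroups: [""]          # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```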
Modern infrastructure that is built with microservices, containers and Kubernetes has a much larger and more intricate attack surface than legacy applications. There is no denying that. However, with this additional complexity comes new opportunities for managing the additional security challenges imposed by modern environments.
The key to a secure Kubernetes deployment is learning to take advantage of these security opportunities, rather than clinging to legacy security practices that just don’t work in modern environments.