From the smallest startups to the largest global enterprises, Kubernetes has become a staple in many modern digital strategies. As it gains ground and moves into regulated and security-sensitive areas of the technology space, Kubernetes has come under more scrutiny to provide secure out-of-the-box solutions.
With that challenge in mind, let’s take a look at how to create a highly secure Kubernetes application. This article isn’t the be-all, end-all of Kubernetes security (it would take more than an article to cover that), but it offers pointers on how to take advantage of different features in Kubernetes to improve the overall security posture of applications deployed on the platform.
Role-based access control (RBAC) features were introduced in v1.6 of Kubernetes to give enterprises fine-grained control over who can do what in a cluster. RBAC functionality allows configuration of fine-grained permission sets at both the individual namespace and cluster levels, which can then be bound to the specific users, groups, or service accounts allowed to use them.
The RBAC functionality within Kubernetes is strictly for authorization, not authentication. Basic authentication options are available, including htpasswd files, LDAP, and client certificates. If you need more robust integration between Kubernetes and an existing enterprise directory, then leveraging the native OpenID Connect (OIDC) support is the best route.
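As a sketch of what namespace-scoped RBAC looks like in practice (the namespace and user names here are illustrative), a Role granting read-only access to pods can be bound to a single user:

```yaml
# Role: read-only access to pods, scoped to the "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]        # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding: grant the Role above to one user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: jane             # illustrative user name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole and ClusterRoleBinding follow the same shape for cluster-level permission sets.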
Network isolation is an easy and common way to provide basic security to a deployment of any type of technology. With container deployments, this used to be simplistic: put a wall around the entire cluster and call it a day. But that approach cannot support any forward-looking strategy that leverages containers for the flexibility and recoverability (among other things) they are capable of providing. Network segmentation now happens within the Kubernetes cluster itself, and it needs to be integrated into a holistic network strategy.
Leveraging one of the many Kubernetes-friendly network plugins available for your cluster enables anything from simple segmentation, where each namespace is isolated and services are routed through a network proxy, to fine-grained, policy-based isolation, where each application defines its own network rules at deployment time and controls which ports are open and which services have access.
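A hedged sketch of that fine-grained, policy-based approach (the namespace and labels are illustrative): a NetworkPolicy that lets only frontend pods reach backend pods, and only on the API port:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: shop            # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: backend           # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080         # and only on the API port
```

Note that NetworkPolicy objects are only enforced when the cluster's network plugin supports them (Calico and Cilium are common examples).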
The ability of multiple teams or organizations to work within a single Kubernetes cluster, with enough isolation to avoid stepping on each other, allows better use of corporate resources and makes centralized container environments available to every part of an organization. Small sub-organizations can then execute their digital strategies without needing to find Kubernetes talent or worry about their security profile.
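One common building block for that kind of sharing, sketched here with illustrative names and limits, is a per-team namespace with a ResourceQuota so no tenant can starve the others:

```yaml
# Tenant namespace
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# Quota capping what team-a can consume in its namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```

Combined with RBAC bindings scoped to the namespace and network policies at its boundary, this gives each team a lane of its own inside the shared cluster.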
Multi-tenancy is what you get by successfully combining network isolation and RBAC. There are scenarios, especially in the enterprise space, where even multi-tenancy is not enough separation, and entirely separate environments must be built and deployed. The reasons range from testing a new networking component without impacting any existing product development, to data residency requirements (as is common in Europe), to special data and network access required by local authorities, to something as simple as quality assurance testing that must have zero chance of accidentally touching production instances.
Yes, persistent storage is a huge thing in the world of containers. Containers may have started out selling the utopia of a stateless world, but the simple reality is that most apps, at the end of the day, access and often generate data. Kubernetes offers storage through persistent volumes, which can be created and assigned to individual pods, or shared across multiple pods when the underpinning storage technology supports multiple access points, as S3-compatible object stores or Ceph do.
It’s great to have storage access and mapping handled by the Kubernetes cluster; it takes into account any policies applied to the application as part of that mapping, including quotas.
At the same time, Kubernetes is not in the storage business, so the underpinning technology can come from a huge number of vendors, many of which the average enterprise already has relationships and experience with, making it straightforward to grow capacity as required.
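The pod-to-storage mapping described above can be sketched with a PersistentVolumeClaim and a pod that mounts it (image name and sizes are illustrative):

```yaml
# PersistentVolumeClaim: request 5Gi of storage from the cluster
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce   # single-node access; ReadWriteMany needs shared storage (e.g. Ceph)
  resources:
    requests:
      storage: 5Gi
---
# Pod mounting the claim at /var/lib/app
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

The claim is what gets matched against the underpinning storage (and any quotas); the pod never names the backing technology directly.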
Kubernetes can handle secrets by mapping variables inside its configuration files and templates to entries in etcd that are available only to service accounts. This is a good way to get started with secrets, since developers can deploy to multiple environments without ever seeing the passwords or private certificates on the command line or in the web console. (This is not completely secure: by default, secrets are stored in etcd only base64-encoded, and there are ways to retrieve them.)
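A minimal sketch of that native mechanism (the secret values and image are illustrative): a Secret object, and a pod that consumes one of its keys as an environment variable without the value ever appearing in the pod spec:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:            # stored base64-encoded in etcd, not encrypted by default
  username: app
  password: s3cr3t     # illustrative value
---
# Pod consuming the secret as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
    - name: client
      image: registry.example.com/client:1.0   # illustrative image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```

Secrets can also be mounted as files in a volume, which some applications prefer over environment variables.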
This has moved more and more Kubernetes users toward third-party secrets management, which includes products from industry veterans in the space (like CyberArk) and other proven technologies, like Vault from HashiCorp. For companies leveraging the public cloud, each cloud also has its own key management service.
Since the primary purpose of Kubernetes is to orchestrate containers, having some level of assurance that what is inside those containers is secure brings peace of mind. (It is too much to expect any team that supports Kubernetes to dig into the ever-growing number of development streams that end up deployed on its platform.)
A Kubernetes admin can ensure a minimum security profile by setting criteria on what is allowed to run within the environment, and by having all containers delivered by the development teams pass through one or more security scanning tools to provide a neutral third-party assessment. There are open source projects like Clair that can do basic static analysis. Multiple commercial offerings in the container scanning space provide more in-depth analysis and additional types of security scans (for example, fuzz and dynamic testing); these include Synopsys Black Duck, Twistlock, and Aqua Security. In addition to scanning images and tagging them so filters can be applied before deployment, some of these products can also scan containers while they are running and react to changes in a container’s security profile in a live Kubernetes environment.
An example policy: deploy nothing that has an unpatched critical CVE, while lower severities are acceptable. A scanning tool that analyzes everything coming into the registry, routinely rescans what is already there, and tags images with the types of vulnerabilities found goes a long way toward providing peace of mind and meeting audit requirements.
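One way to enforce the deployment side of such a policy is at admission time. As a sketch, assuming a recent Kubernetes release with ValidatingAdmissionPolicy support and an illustrative registry name, a policy can reject any pod whose images do not come from the registry that the scanner watches:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-scanned-registry
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
  validations:
    - expression: "object.spec.containers.all(c, c.image.startsWith('registry.example.com/'))"
      message: "all images must come from the scanned internal registry"
```

A matching ValidatingAdmissionPolicyBinding is also required to put the policy into effect; scanner-specific admission webhooks from the commercial products mentioned above can apply the severity-based filters themselves.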
Security is no longer just bolted onto Kubernetes; it is now built in, right down to the core components. Configuring all of Kubernetes’s security features takes effort, and supplementing the native security with additional components to meet your specific needs can also be time-consuming. To save that time and effort, multiple enterprise vendors specialize in securely configured enterprise distributions that are deployable and usable on day one. By leveraging one of these distributions, you can focus on the business and be more secure, without needing full-time engineers to put all the pieces together.