Overview
In this article, we explain why Docker was deprecated in Kubernetes and why this deprecation was necessary. We also explain what it means for developers and what measures you can take to keep running your Docker-built container images.
If you are a DevOps engineer, systems administrator, or simply someone who works with Linux-based applications, you may have come across the terms containers or containerization. This technology makes it possible to run platform-agnostic applications (more on this later). Docker is the container runtime engine that popularized containerization; other technologies, such as the CRI-O runtime and the Kubernetes (K8s) orchestrator, were introduced later. However, Kubernetes recently announced that it will no longer support Docker as a container runtime.
Containerization
Containerization is a deployment approach whereby the software code, together with everything the application needs to run – libraries, dependencies, configuration files, binaries – is bundled and isolated into what we call a container image. This makes it possible to run a containerized application regardless of the operating system or infrastructure underneath; in other words, the application is platform agnostic. It also saves DevOps engineers the extra work of configuring each platform or piece of infrastructure separately.
The Real Issue – Deprecation of Docker
In December 2020, the Kubernetes project – an orchestration tool for managing containerized workloads at scale – announced that it was deprecating support for the Docker runtime engine. The news raised a lot of concern, since many enterprise applications run on Docker-based container images. You may wonder, “Why did Kubernetes drop support for Docker?” To answer this, we need to understand how these technologies fit together.
The Docker Engine
The Docker runtime is one of the most popular container runtime engines. In fact, according to Docker, over 3.5 million applications have been containerized using Docker. The growing need to manage many dockerized applications gave rise to orchestration and scaling tools such as Kubernetes, which was originally developed at Google. As a result, a great number of Docker-based applications run in Kubernetes clusters today. A typical Docker Engine (the core part of Docker) has three components:
- Server – contains the dockerd daemon, which creates and manages containers, images, networks, and storage
- CLI – the docker command-line interface used to issue commands to the daemon
- API – a REST API for interacting with the dockerd daemon programmatically; the CLI itself talks to the daemon through this API (see the short sketch after the diagram below)
The diagram below depicts the components of the Docker engine.
The Docker Engine Architecture
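To make the server/CLI/API split more concrete, here is a minimal sketch (not from the original article) that uses Docker's official Go SDK, which wraps the Engine's REST API – the same API the docker CLI calls when you run docker ps. It assumes a local dockerd daemon is running and reachable through the default environment settings.

```go
// List local containers by calling the Docker Engine REST API via the Go SDK.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	// Connect to the local dockerd daemon (uses DOCKER_HOST or the default unix socket).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Ask the daemon for all containers – the same REST call `docker ps -a` makes.
	containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: true})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Printf("%s  %s  %s\n", c.ID[:12], c.Image, c.Status)
	}
}
```

The point to notice is that the CLI, the REST API, and the SDK are all just different front doors to the same dockerd server.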
Zooming in on the Docker Runtime – Dockershim
From the image above, the only piece Kubernetes actually needs in order to run and execute containers in a cluster is the container runtime, a sub-component of the Docker server. This is largely because Kubernetes has its own infrastructure for everything else, such as volumes, networking, storage, APIs, and the CLI (kubectl). To be able to use Docker at all, Kubernetes relies on a compatibility layer inside the kubelet called the dockershim, which translates Kubernetes' runtime calls into Docker Engine API calls. It is this dockershim – which Mirantis and Docker have since agreed to maintain outside of Kubernetes – that is being deprecated, with removal from the Kubernetes codebase planned for late 2021.
Why the Deprecation?
A standard called the Container Runtime Interface (CRI) was introduced to define the protocol and guidelines that make it easy to plug different container runtimes into Kubernetes. Docker, however, predates the CRI and does not implement it, so Kubernetes has had to carry the dockershim adapter to bridge the gap, and maintaining that adapter became a burden for the Kubernetes maintainers. For this reason, Kubernetes announced the deprecation, citing two main benefits (a short sketch of what talking to a CRI runtime looks like follows the list):
- Dropping the dockershim reduces the amount of code and the number of dependencies the project has to maintain, which makes the Kubernetes codebase easier to look after than before.
- Fewer third-party dependencies also mean a smaller attack surface, so the security risks associated with those dependencies are minimized and the overall security of the Kubernetes infrastructure improves.
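To see what the CRI looks like in practice, here is a minimal Go sketch of a CRI client: it dials the runtime's gRPC socket and asks for its name and version, which is essentially how the kubelet introduces itself to any CRI-compliant runtime. The socket path shown is containerd's default and is only an example; CRI-O, for instance, listens on /var/run/crio/crio.sock.

```go
// Query a CRI-compliant runtime over gRPC for its name and version.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the runtime's unix socket (containerd's default path shown here).
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Every CRI-compliant runtime implements the RuntimeService; the kubelet
	// uses this same interface to create pods and containers.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime: %s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```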
What are the Alternatives to Docker?
Besides the Docker engine, there are a few container runtime engines that are currently supported on Kubernetes and available on Amazon's Elastic Kubernetes Service (EKS), Microsoft's Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). These alternatives are:
- containerd – The second most popular container runtime, already supported on GKE. It works on Linux and Windows and provides the same core runtime functionality as Docker; in fact, the Docker Engine itself is built on top of containerd.
- rkt (Rocket) – A pod-native runtime with robust security mechanisms. It is easily interoperable with other platforms (such as Kubernetes) since it conforms to the CRI standards. Note, however, that the rkt project has since been archived.
- CRI-O – A lightweight container runtime built for Kubernetes. It can pull images from registries and create and manage containers, and it is implemented according to the CRI standard.
- Frakti – A hypervisor-based runtime that provides stronger security and isolation by running pods inside lightweight virtual machines. It exposes a pod-level interface to Kubernetes and supports mixed runtime modes on a single node.
What Should You Do Now?
If you have an application that uses Docker containers and runs on Kubernetes, you may be worried right now. Don’t fret. Since Kubernetes will continue to support the Docker engine until late 2021, you have ample time to plan a migration to a different runtime. Until then, the action you should take depends on which of the following categories you fall into.
Kubernetes Users / DevOps Engineers
If you are a developer or a DevOps engineer who uses a managed Kubernetes service such as GKE, EKS, or AKS, you don’t have to take any action, because managing the container runtime is not your responsibility: the cloud provider handles it for you. Even if the provider switches the underlying container runtime to a different one, your workloads are not affected. If you are curious which runtime your nodes are currently reporting, you can check it yourself, as the sketch below shows.
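For example, kubectl get nodes -o wide prints a CONTAINER-RUNTIME column for each node. If you prefer to do the same check programmatically, the minimal sketch below uses the official Go client (client-go) and assumes a kubeconfig at the usual ~/.kube/config path:

```go
// Print the container runtime each node reports, e.g. "docker://19.3.13" or "containerd://1.4.3".
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from the local kubeconfig.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Each node advertises the runtime the kubelet is actually using.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%-30s %s\n", n.Name, n.Status.NodeInfo.ContainerRuntimeVersion)
	}
}
```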
In-house / on-premise Kubernetes Cluster
If you are a systems or Kubernetes administrator who set up a Kubernetes cluster from scratch in an on-premise server environment or on virtual machines (for security or compliance reasons, for example), you probably installed the runtime engine, networking, storage, and so on yourself. In this case, you do need to take action: switch your container runtime to containerd, CRI-O, or another CRI-compliant runtime highlighted above. Again, you have until late 2021 to apply these changes. Your existing Docker-built images will keep working, because they follow the same OCI image format that these runtimes understand; the sketch below illustrates this with containerd. If for some reason you still want to use the Docker runtime, you could install the dockershim as an external, standalone adapter in your Kubernetes cluster, since Docker and Mirantis plan to maintain the dockershim as an open-source project.
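As a small illustration that Docker-built images keep working after a migration, here is a minimal sketch using containerd's Go client. It assumes containerd is running on its default socket and that Docker Hub is reachable; the alpine image is only an example.

```go
// Pull a Docker-built (OCI) image directly with containerd, no Docker Engine involved.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to containerd on its default socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd scopes resources by namespace; Kubernetes uses "k8s.io", we use our own here.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull an image produced by `docker build`/Docker Hub – containerd understands
	// the same OCI image format.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled image:", image.Name())
}
```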
Conclusion
This article explained why Kubernetes dropped support for the Docker runtime engine, suggested some alternatives to the dockershim-backed runtime that can be used in a Kubernetes cluster, and highlighted the action points to take whether you are a Kubernetes administrator or a DevOps engineer.