When people think of containers, the first thing that comes to mind is Docker. But Docker is not the only solution. From the title, you might wonder why you would want to run containers in an untraditional way. I'll justify this in some detail over the next few minutes.
You might have heard about monolithic applications being broken into microservices, or even nanoservices. But the ecosystem of tools that supports these microservices still resembles a monolith: a lot of functions are built into a single daemon, and everything needs to interact with this daemon to get work done. I believe a clean separation of concerns and functionality applies not only to the applications being built, but also to the tools that support them.
The idea of containers has been around in the industry for quite a while, but their adoption took off after Docker was released in 2013 as an open-source project. This was because Docker was an all-in-one open-source tool that could build, run, and manage containers. Then came Kubernetes, to manage Docker containers across multiple machines through a single interface. Initially, Kubernetes supported only Docker as a container runtime, so it is understandable that this led to the equation Containers = Docker, and vice versa.
But things change. Docker now does much more than it used to, and containers have a clear definition that no longer equates them with Docker.
At this point in time, Kubernetes is trending and being aggressively adopted by a lot of people/companies. Recently, the term “Kubernetes” found its way into a Korean TV series as well, which I’ve gotten to know through Twitter.
Initially, Kubernetes supported only Docker as a container runtime, and later added rkt. But for a long time it was hard to debug or add support for a new container runtime, as doing so required manual tweaks to the Kubelet. This caused a lot of user concerns, and there was no clean separation of concerns in the code. At some point the community decided to change how this works.
There are two main motivations for the rise of container runtimes other than Docker:
- Kubernetes Container Runtime Interface (CRI)
- Open Container Initiative (OCI)
Container Runtime Interface (CRI)
In the Kubernetes 1.5 release, the Container Runtime Interface (CRI) was introduced to support more container runtimes in Kubernetes without needing to modify and recompile the Kubelet. Kubernetes now supports a wide range of container runtimes, such as CRI-O, containerd, and Singularity.
If you are new to Kubernetes and don’t understand what you read above, go through this article on “An overview of Kubernetes CRI.” It covers CRI in more detail.
The Open Container Initiative (OCI)
Besides CRI, the second motivation behind new container runtimes was the standardization of the container image format by the Open Container Initiative (OCI), founded in 2015 by Docker and a few other tech giants.
This standardization lets users move to a different container runtime with ease: any runtime that understands the OCI image format can run the same images unchanged.
Among the many container runtimes that emerged or gained momentum in this rush, Podman is the one that took an untraditional approach to running containers.
Docker vs Podman
Docker runs containers. So does Podman. End of story. Or maybe not. Even though both run containers, they take fundamentally different approaches.
Docker uses a client/server model, where the CLI is the client and the daemon is the server (the daemon can even run on a remote machine): every time the client needs something done, it communicates with the server. Podman, instead, uses a fork/exec model, which simplifies a lot of the control and security around a container's lifecycle.
What Docker Does
Let’s see how Docker handles containers. If you’re familiar with Docker, you know there’s a daemon that must be running in the background for your commands to work. The Docker CLI interacts with the Docker daemon (either local or remote) to get things done. This daemon takes care of a lot of things, like:
- Pull and push images to/from an image registry
- Manage layers of the containers
- Build containers
- Run containers
- and a lot more
If, for some reason, the daemon is interrupted, you lose contact with all the containers on your system. And even when the daemon is running perfectly fine, it is still consuming system resources that could be used elsewhere.
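You can see this daemon dependence for yourself by stopping the daemon and watching the CLI fail (a quick sketch, assuming Docker is installed and managed by systemd):

```bash
# With the daemon running, the CLI works as usual
docker ps

# Stop the Docker daemon -- every docker command now fails, typically with
# "Cannot connect to the Docker daemon at unix:///var/run/docker.sock"
sudo systemctl stop docker
docker ps

# Bring the daemon back, and the CLI works again
sudo systemctl start docker
docker ps
```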
That makes the solution obvious: run containers without a daemon. But Docker can’t do that.
Also, I would like to make it clear that running containers is all a piece of software needs to do to be called a container runtime. Whether it does other things besides that, like image management or building containers, does not matter. But both Podman and Docker do a lot more. To understand my comment above, you might want to take a look at this blog post series on low-level and high-level container runtimes.
What Podman does
Unlike Docker, Podman is a daemonless container engine, so you don’t need to worry about a single point of failure, or about one process owning all the container processes. Podman interacts directly with the image registry, the containers, the image storage, and the Linux kernel (through the runC container runtime process), all with no daemon involved. You can see this in the image below.
Podman does almost everything Docker does, and more. Various other container tools like Podman can be found here.
You might try to search for a Podman repository, but you won’t find one. That’s because Podman (also called Pod Manager) is a command-line tool that uses the libpod library to do the work; the Podman tool resides in the same repository as libpod.
Note: Podman doesn’t need root permissions to run.
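A quick way to see rootless operation in action (a sketch, assuming Podman is installed and can pull the `alpine` image):

```bash
# As an ordinary, unprivileged user -- no sudo, no daemon
podman run --rm alpine id

# Inside the container the process appears to run as root, but on the
# host it is mapped back to your own user via user namespaces
podman unshare cat /proc/self/uid_map
```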
Now you might be thinking, “That’s all well and good. But, I’m familiar with Docker.”
Often, the time investment needed to get accustomed to a new tool is what drags down its adoption. With that in mind, Podman decided to copy Docker’s commands. People are usually worried about the learning curve, but with Podman there is nearly none if you already know Docker.
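To illustrate the parity, the everyday Docker verbs work unchanged with Podman (a sketch, assuming Podman is installed; the `alpine` image is just an example):

```bash
# Each line does the same thing its docker counterpart would
podman pull alpine                  # docker pull alpine
podman run --rm alpine echo hello   # docker run --rm alpine echo hello
podman ps -a                        # docker ps -a
podman images                       # docker images
podman rmi alpine                   # docker rmi alpine
```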
Note: With RHEL 8, Red Hat has dropped official support for Docker as a container runtime. Instead, RHEL 8 comes with support for Podman, Skopeo and Buildah. It is not installed by default, though.
If you’re familiar with Kubernetes, you might be wondering if Podman can manage Pods. A quick note: a Pod is a group of containers sharing certain namespaces, such as the network namespace. And yes, Podman can run pods besides containers, which I think is a great addition to a container runtime.
In a Kubernetes environment, Podman can be much more useful than Docker.
Besides working like Docker, Podman has a few extra commands for managing pods.
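For example (a sketch, assuming Podman is installed; the pod name `demo` is an arbitrary choice):

```bash
# Create an empty pod, then start two containers inside it;
# they share the pod's network namespace
podman pod create --name demo
podman run -d --pod demo alpine sleep 300
podman run -d --pod demo alpine sleep 300

# List pods along with the containers they hold, then clean up
podman pod ps
podman pod rm -f demo
```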
To use Podman on RHEL 8, you just need to install it with the default package manager, yum:
```bash
sudo yum update -y && sudo yum install podman skopeo buildah
```
Here we’re installing Podman, Skopeo and Buildah together, but you can change the command to install just Podman and it will still work.
That’s because Podman doesn’t depend on Skopeo or Buildah for any functionality. Although Podman implements a subset of their features to imitate Docker, Buildah and Skopeo are more powerful in terms of functionality.
You could even create an alias for Podman and you’ll almost never notice the difference.
```bash
alias docker=podman
```
For the above alias to work as intended, you might have to shut down the Docker daemon, if it is running. Since both Docker and Podman work with OCI-format images, you don’t need to rebuild your images.
Note: Although both Podman and Docker can work with OCI images, if both are installed on a single system you won’t see Podman’s images in Docker, or vice versa, since they store images in different locations.
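You can check where each tool keeps its image store (a sketch; the paths shown are the usual defaults, assuming both tools are installed):

```bash
# Docker's store is owned by the daemon, under /var/lib/docker
sudo ls /var/lib/docker

# Rootless Podman stores images under your home directory
ls ~/.local/share/containers/storage

# Ask Podman directly where its image store lives
podman info --format '{{.Store.GraphRoot}}'
```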
For more detailed info on Podman, go through this blog post by Red Hat.