When you start learning Kubernetes, the first thing you need is a local development cluster for testing configurations and deployments. You usually begin by installing Minikube in a virtual machine, which simplifies the learning process and lets you work through example scenarios.
However, when you’re ready to go to the next level (which is a production installation of a full Kubernetes cluster), you will need to start over from scratch. And depending on your business requirements, you’ll need to make sure to include controls for securing the confidentiality, integrity, and availability of the system.
This is a daunting process, and it requires reading the documentation and keeping detailed records. For those of you who still need an extra helping hand, we’ve compiled a list of recommendations for setting up your first production installation of Kubernetes.
Let’s get started.
Follow the Official Docs First
It’s very tempting to download and bookmark dozens of tutorials for installing Kubernetes and managing configurations. While this is helpful, it’s also dangerous, because these guides sometimes contain mistakes. In addition, these tutorials can be biased and promote tools that might not be suitable for your needs.
You should always consult the official Kubernetes docs, since they give you the complete and correct installation steps for each supported version of Kubernetes. Once you’ve mastered these steps, then you can explore the other resources.
Start with the minimum recommended highly available setup: three control plane (master) nodes using the stacked etcd topology, in which each control plane node also runs a local etcd member. The setup is straightforward to configure.
With the stacked topology, kubeadm installs an etcd member on each control plane node and generates the certificates for encrypted communication for you. (If you opt for the external etcd topology instead, you first install etcd on dedicated nodes and confirm the three-node etcd cluster is healthy.) You then set up the K8s cluster using the kubeadm tool. You will have to provide an initial configuration, and the easiest way to get started is to inspect the defaults printed by the following commands:
$ kubeadm config print init-defaults
$ kubeadm config print join-defaults
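To make this concrete, here is a minimal sketch of an init configuration for a highly available cluster. The API version matches recent kubeadm releases, but the endpoint address and Kubernetes version below are placeholders, not recommendations; substitute your own load balancer DNS name (or VIP) and a real version.

```shell
# Write a minimal kubeadm init configuration (values are placeholders).
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
# DNS name (or VIP) of the load balancer in front of the API servers:
controlPlaneEndpoint: "k8s-api.example.internal:6443"
EOF

# On the first control plane node you would then run (requires root):
#   kubeadm init --config kubeadm-config.yaml --upload-certs
```

The `--upload-certs` flag stores the control plane certificates as a cluster secret so the other control plane nodes can fetch them when they join.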
You will have to provide the control plane endpoint (the load balancer address) and, if you use external etcd, the IPs of the etcd nodes. The following illustration depicts the example topology:
Image Source: https://kubernetes.io/docs
As long as the nodes can reach each other, all you have to do is install kubeadm (along with the kubelet and a container runtime) on each one and then join them together. The tooling will take care of the rest.
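The shape of the join step can be captured in a small runbook snippet. The endpoint, token, and hashes below are placeholders: kubeadm init prints the real values when it finishes, and the regeneration command at the end is there in case you lose that output.

```shell
# Save a runbook note describing the join commands (placeholders only).
cat > join-runbook.sh <<'EOF'
#!/usr/bin/env sh
# Extra control plane nodes join with --control-plane:
#   kubeadm join k8s-api.example.internal:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash> \
#     --control-plane --certificate-key <key>
#
# Worker nodes join without it:
#   kubeadm join k8s-api.example.internal:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
#
# Lost the original output? Regenerate a worker join command with:
#   kubeadm token create --print-join-command
EOF
```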
For the load balancer, you can use options such as Keepalived (https://www.keepalived.org/) or HAProxy (http://www.haproxy.org/). This may be the most complex step in the installation process, but it ties the whole control plane together.
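If you go the HAProxy route, a minimal TCP-mode configuration for the API server looks roughly like the sketch below. The three backend IPs are placeholders for your control plane nodes; adjust timeouts to taste.

```shell
# Write a minimal HAProxy config for the Kubernetes API (placeholder IPs).
cat > haproxy-k8s.cfg <<'EOF'
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kubernetes-api
    bind *:6443
    default_backend control-plane

backend control-plane
    balance roundrobin
    option tcp-check
    server cp1 10.0.0.11:6443 check
    server cp2 10.0.0.12:6443 check
    server cp3 10.0.0.13:6443 check
EOF

# Validate the file before loading it (shown for illustration):
#   haproxy -c -f haproxy-k8s.cfg
```

TCP mode matters here: the API server terminates its own TLS, so the load balancer should pass connections through rather than terminate HTTPS itself.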
Once you’ve deployed the master nodes, the hardest part is behind you. Now you can add the worker nodes and test that the orchestration is up and running.
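A short smoke-test script is a handy way to confirm the orchestration works end to end. This assumes an admin kubeconfig is already in place; the deployment name and image are arbitrary examples.

```shell
# Write a small verification script (assumes kubectl + admin kubeconfig).
cat > verify-cluster.sh <<'EOF'
#!/usr/bin/env sh
set -eu
kubectl get nodes -o wide            # every node should report STATUS Ready
kubectl get pods -n kube-system      # etcd, apiserver, scheduler, coredns: Running
kubectl create deployment smoke-test --image=nginx --replicas=3
kubectl rollout status deployment/smoke-test --timeout=120s
kubectl delete deployment smoke-test
EOF
chmod +x verify-cluster.sh
```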
Use the Best Hardware and OS
When picking the OS and hardware for the master and worker nodes, don’t be stingy. Choose a commercially supported OS such as Ubuntu or Red Hat Enterprise Linux with a recent Linux kernel, so you can get support quickly in case of software failures. For hardware, favor CPUs with high clock speeds, higher core counts, and plenty of memory for better operational performance. Later on, you can use taints and tolerations to maximize utilization.
You need at least 2 GB of RAM and 2 CPUs for each node. It’s best to increase that to 8 GB of RAM and 4-8 CPUs – and that only covers the basic services. If you want to add monitoring or logging, you need extra memory, and you will need more RAM and CPUs as the number of worker nodes you plan to deploy grows.
A fast SSD (ideally with RAID) for the etcd data and a fast 10GbE network for low-latency node communication are a must.
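You can sanity-check whether a disk is fast enough for etcd with fio, loosely following the approach described in the etcd hardware docs. The directory and sizes below are placeholders; point it at the disk that will hold /var/lib/etcd.

```shell
# Write a fio job that mimics etcd's WAL write pattern (placeholder paths).
cat > etcd-disk-test.fio <<'EOF'
[global]
ioengine=sync
; etcd calls fdatasync after each write to its write-ahead log
fdatasync=1
; replace with a directory on the disk that will hold /var/lib/etcd
directory=/tmp
size=22m
; approximates etcd's typical WAL write size
bs=2300

[etcd-wal]
rw=write
EOF

# Run with: fio etcd-disk-test.fio
# As a rule of thumb, the 99th percentile fdatasync latency
# should stay in the low single-digit milliseconds.
```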
Automate with kOps and Terraform
Once you are proficient in installing a cluster using kubeadm, we recommend that you automate the process using kOps. Among other things, kOps makes upgrades easier, it has maintenance tools for production environments, and it can generate Terraform configurations for setting up clusters.
Terraform can considerably improve the automation of provisioning Kubernetes clusters, since the configuration becomes code that can be checked into a version control system. kOps can deploy to AWS, GCE, and DigitalOcean, although bare metal support is best left to kubeadm.
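The kOps-to-Terraform workflow can be sketched as a short provisioning script. The state bucket, cluster name, and zones below are placeholder values, not recommendations.

```shell
# Write a provisioning script that drives kOps with Terraform output
# (bucket, cluster name, and zones are placeholders).
cat > provision-cluster.sh <<'EOF'
#!/usr/bin/env sh
set -eu

# kOps stores cluster state in an S3 bucket (create it beforehand):
export KOPS_STATE_STORE=s3://example-kops-state

# Generate Terraform files instead of creating resources directly:
kops create cluster \
  --name=prod.k8s.example.internal \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --node-count=3 \
  --target=terraform \
  --out=./tf-cluster

# Review and apply the generated configuration:
#   cd tf-cluster && terraform init && terraform apply
EOF
chmod +x provision-cluster.sh
```

Keeping the generated Terraform in version control gives you reviewable, repeatable cluster changes instead of one-off manual installs.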
So what are you waiting for? The best way to learn how to install a production-level Kubernetes cluster is by getting your hands dirty. You will find tons of information in the documentation and plenty of support in the Kubernetes Slack channels.