Learning Kubernetes: Getting Started with Minikube

In 2012, the now well-known 12-Factor App manifesto (https://12factor.net/) was published. In 2013, Docker, arguably the Ronda Rousey of the container world (which is to say, a platform with some shortcomings, but widely known and popular), was released.

*Note: As a prerequisite to reading this post, I would highly suggest taking the time to familiarize yourself with the concept of containers if you have not already done so.

In the time since then, the industry has started to embrace microservices architecture wholeheartedly, with numerous players entering the space. Kubernetes (k8s), the open source container orchestration software from Google, is perhaps one of the most well-known of these players. In this post, I will provide a brief recap of Kubernetes and walk through a quick example using Minikube (https://github.com/kubernetes/minikube), a setup consisting of a Linux VM, a container runtime, and a single-node k8s cluster.

So what exactly is Kubernetes? After more than a decade of experience running containers with the internal services Borg (https://research.google.com/pubs/pub43438.html) and Omega (https://research.google.com/pubs/pub41684.html), a team inside Google decided to embrace the lessons learned and release an open source container orchestration tool known as Kubernetes (the Greek word for “helmsman”).

A few key terms to keep in mind when discussing applications deployed in Kubernetes (especially within the context of this tutorial) are:

  1. Pod, which refers to one or more containers along with their attached volumes
  2. Service, an abstraction which groups together pods based on labels or other characteristics
  3. Deployment, a resource that defines a stateless app with a certain number of pod replicas
  4. Ingress, a resource that exposes cluster applications to external traffic

A typical Kubernetes cluster consists of a control plane (often run on machines separate from the worker nodes) and the cluster’s nodes.

The control plane typically consists of:

  1. an etcd cluster – etcd is a distributed key-value store (https://github.com/coreos/etcd) used to house all vital information about the Kubernetes cluster.
  2. 1+ apiserver instances – the scalable Kubernetes API
  3. a kube-controller-manager – a binary with multiple controllers, components of the cluster responsible for checking the status of nodes, pods, etc.
  4. a scheduler – the component of the cluster actually scheduling pods to run on particular nodes

Each node, or worker machine, consists of:

  1. docker or another container runtime such as rkt
  2. kubelet – the node agent watching for and running pods
  3. kube-proxy – a networking proxy that also runs on each node

When you put this all together, the control plane coordinates the cluster while each node’s kubelet, proxy, and container runtime do the work of actually running pods.

Note that there are several other types of Kubernetes resources as well as many different configuration options for cluster networking. Feel free to check out the latest documentation for more information on Kubernetes. This talk also covers the various networking options.

With that in mind, let’s begin!

First, navigate to the Minikube page and check out the installation instructions for the latest release (https://github.com/kubernetes/minikube/releases). Mac users, for example, should download the binary, tweak its permissions, and move it to their local binary folder:
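On macOS, that looks something like the following (the URL below is the “latest” download path from the Minikube releases page; swap in a pinned version if you prefer):

    # download the darwin binary, make it executable, and move it onto your PATH
    curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
    chmod +x minikube
    sudo mv minikube /usr/local/bin/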

Now download kubectl, the Kubernetes command line tool, so that it can communicate with your newly created cluster:
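Again for macOS, a sketch of the install using the stable-release URL from the Kubernetes docs:

    # fetch the latest stable kubectl and install it alongside minikube
    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
    chmod +x kubectl
    sudo mv kubectl /usr/local/bin/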

Start your Minikube cluster:
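    # the first run downloads the Minikube ISO and boots the VM, so it may take a few minutes
    minikube start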

If everything worked as intended, you should see the following:
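The exact output varies by Minikube version, but it should end along these lines:

    Starting local Kubernetes cluster...
    Kubectl is now configured to use the cluster.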

Kubectl is now configured to use the cluster. Note that, similar to many other CLI tools, the --help flag can be used alone or with a particular command or subcommand.
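    # top-level help, and help for a specific subcommand
    kubectl --help
    kubectl get --help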

Let’s start by first checking how many nodes are up:
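    # the AGE and VERSION values below are illustrative; yours will differ
    $ kubectl get nodes
    NAME       STATUS    AGE       VERSION
    minikube   Ready     5m        v1.x.x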

As we expected, this is a single-node Kubernetes cluster.

A common command used to find more information about various resources in the cluster is kubectl describe. Let’s use it to determine more information about the single node:
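    # prints the node's addresses, capacity, conditions, and the pods scheduled on it
    kubectl describe node minikube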

Now let’s take a look at the pod resource.
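    $ kubectl get pods
    No resources found.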

As expected, we don’t have any pods deployed to the cluster yet! Let’s deploy a small web server (https://hub.docker.com/r/andygrunwald/simple-webserver/) as an individual pod in the cluster:

First, create a server.yaml pod spec file. This will contain all the configuration options for the pod:
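A minimal spec might look like this (the pod name and the app label are arbitrary choices of mine, and I am assuming the image listens on port 8000; adjust containerPort if the image’s documentation says otherwise):

    # server.yaml: a single-pod spec for the simple-webserver image
    apiVersion: v1
    kind: Pod
    metadata:
      name: simple-webserver
      labels:
        app: simple-webserver
    spec:
      containers:
      - name: simple-webserver
        image: andygrunwald/simple-webserver
        ports:
        - containerPort: 8000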

Now, let’s deploy this single pod:
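    $ kubectl create -f server.yaml
    pod "simple-webserver" created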

Check the status of the pod!
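    # STATUS should move from ContainerCreating to Running once the image is pulled
    $ kubectl get pods
    NAME               READY     STATUS    RESTARTS   AGE
    simple-webserver   1/1       Running   0          1m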

And let’s take a closer look:
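    # the Events section at the bottom shows scheduling, image pull, and start events
    kubectl describe pod simple-webserver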

Now what happens if we delete this pod? Try it:
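    $ kubectl delete pod simple-webserver
    pod "simple-webserver" deleted
    $ kubectl get pods
    No resources found.

The pod is gone for good; nothing in the cluster is responsible for bringing it back.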

In general, however, most users do not deploy an application as a single pod on Kubernetes. To take full advantage of the power of Kubernetes’ controllers and schedulers, let’s deploy the same test server as a deployment:
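A sketch of the deployment spec, deployment.yaml, wrapping the same pod template (apps/v1 is shown here; very old clusters may need an older API group such as extensions/v1beta1):

    # deployment.yaml: one replica of the same web server, managed by a deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: simple-webserver
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: simple-webserver
      template:
        metadata:
          labels:
            app: simple-webserver
        spec:
          containers:
          - name: simple-webserver
            image: andygrunwald/simple-webserver
            ports:
            - containerPort: 8000

    kubectl create -f deployment.yaml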

Now let’s see what happens when the individual pod is deleted:
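The deployment’s pods get generated name suffixes, so yours will differ from the illustrative ones below:

    $ kubectl get pods
    NAME                                READY     STATUS    RESTARTS   AGE
    simple-webserver-3133398982-7b2xp   1/1       Running   0          2m
    $ kubectl delete pod simple-webserver-3133398982-7b2xp
    pod "simple-webserver-3133398982-7b2xp" deleted
    $ kubectl get pods
    NAME                                READY     STATUS    RESTARTS   AGE
    simple-webserver-3133398982-kq5dn   1/1       Running   0          4s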

Aaannd it’s back! When applications are deployed as replicaset, deployment, or daemonset resources, the underlying contract is that if a specific application pod is deleted (for instance, if a node suddenly goes down), the kube-controller-manager will detect the change in the pod’s status during its reconciliation loop and recreate the pod; the scheduler will then place the new pod on an available node. Applications deployed in this manner are understood to be stateless, and can be killed and recreated with ease. (Note, however, that there is a resource, the StatefulSet, which is designed for applications that do have state.)

But what if we want to access or curl this simple test server from outside of Minikube? As I mentioned before, there are many ways to set up cluster networking and to expose an in-cluster application to external traffic (https://kubernetes.io/docs/user-guide/connecting-applications/#exposing-the-service) (https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types).

In the case of Minikube, however, we will be using the NodePort spec of the service resource. Remember that a service is essentially an abstraction that groups together pods based on certain properties. For our Minikube example, the service will be exposed on the underlying node’s IP address at a specific, static port: the NodePort. Since the Minikube VM’s IP address is reachable from the host machine, we can reach the service from outside the cluster.

Note that in the service spec below, we are grouping together pods based on the label selector.
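A sketch of service.yaml (the nodePort value is an arbitrary pick from the allowed 30000-32767 range, and port/targetPort match the container port assumed earlier):

    # service.yaml: group the deployment's pods and expose them on a static node port
    apiVersion: v1
    kind: Service
    metadata:
      name: simple-webserver
    spec:
      type: NodePort
      selector:
        app: simple-webserver
      ports:
      - port: 8000
        targetPort: 8000
        nodePort: 30080

    $ kubectl create -f service.yaml
    service "simple-webserver" created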

Yay! Let’s try curl’ing the application endpoint:
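    # minikube ip prints the VM's address; combine it with the static nodePort
    curl $(minikube ip):30080
    # or let minikube assemble the URL for you
    curl $(minikube service simple-webserver --url)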

And it works! Once again, if a pod is killed, it is quickly recreated; thanks to our test server service and its exposed NodePort, we can still curl the server endpoint from outside the cluster.

Let’s try to scale up our single-instance web server. Modify the deployment so that the replica count is now set to 3, as shown below:
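    # in deployment.yaml, bump the replica count
    spec:
      replicas: 3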

Now, let’s redeploy this app:
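    # pod names and ages below are illustrative
    $ kubectl apply -f deployment.yaml
    $ kubectl get pods
    NAME                                READY     STATUS    RESTARTS   AGE
    simple-webserver-3133398982-kq5dn   1/1       Running   0          10m
    simple-webserver-3133398982-8f2lx   1/1       Running   0          30s
    simple-webserver-3133398982-m9t4w   1/1       Running   0          30s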

Our service with the exposed NodePort is still here:
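    # column layout differs across kubectl versions; the 8000:30080/TCP mapping is the key part
    $ kubectl get services
    NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
    simple-webserver   10.0.0.73    <nodes>       8000:30080/TCP   12m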

And we can still curl the server endpoint! This time, however, the traffic is experiencing service-level load balancing across our three pods.
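One quick way to exercise the load balancing is to curl in a loop and then check which pods served the requests:

    # fire a handful of requests; the service distributes them across the replicas
    for i in $(seq 5); do curl -s $(minikube ip):30080; done
    # inspect a pod's logs to see the requests it handled (pod name is illustrative)
    kubectl logs simple-webserver-3133398982-kq5dn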

It should be noted that there are limitations to this setup. A NodePort service exposes a port in the 30000-32767 range on every node. This becomes untenable as the number of nodes in the cluster increases, and it is perhaps not the most secure approach.

As a result, within real-world production clusters, users often utilize the load balancer or ingress and ingress-controller primitives to route external traffic into the cluster. This is often done in addition to deploying in-cluster DNS that provides service-name-to-IP-address mapping. This blog post provides an excellent description of utilizing ingress and ingress controllers.

To summarize, today you:

  1. Learned about the basic structure of a Kubernetes (k8s) cluster.
  2. Learned to get up and running with a simple local cluster, Minikube, and the official k8s CLI tool, kubectl.
  3. Created and destroyed both a pod resource and a deployment resource.
  4. Created a service and used the NodePort to expose the application to external traffic.

To continue your Kubernetes journey, I would highly suggest going through the basic Kubernetes interactive bootcamp. In addition, take a look at the Kubernetes blog. With some of the latest releases, it is becoming increasingly easy to deploy and maintain Kubernetes clusters on your favorite cloud providers.


Sneha Inguva is an enthusiastic software engineer currently working on building developer tooling at DigitalOcean. She has worked at a variety of startups in the last few years and has a unique perspective on building and deploying software in eclectic verticals - education, 3D printing, and casinos, to name a few. When she isn’t bashing away on a project or reading about the latest emerging technology, she is busy rescuing animals or doing martial arts.

