Learning Kubernetes: Getting Started with Minikube

In 2012, the now well-known 12-Factor App manifesto (https://12factor.net/) was published. In 2013, Docker, arguably the Ronda Rousey of the container world (which is to say, a platform with some shortcomings that is nonetheless widely known and popular), was released.

*Note: As a prerequisite to reading this post, I would highly suggest taking the time to familiarize yourself with the concept of containers if you have not already done so.

In the time since then, the industry has started to embrace microservices architecture wholeheartedly, with numerous players entering the space. Kubernetes (k8s), the open source container orchestration software from Google, is perhaps one of the most well-known of these players. In this post, I will provide a brief recap of Kubernetes and walk through a quick example using Minikube (https://github.com/kubernetes/minikube), a setup consisting of a local Linux VM, a container runtime, and a single-node k8s cluster.

So what exactly is Kubernetes? After more than a decade of experience running containers with the internal services Borg (https://research.google.com/pubs/pub43438.html) and Omega (https://research.google.com/pubs/pub41684.html), a team inside Google decided to embrace the lessons learned and release an open source container orchestration tool known as Kubernetes (which is the Greek word for “helmsman”).

A few key terms to keep in mind when discussing applications deployed in Kubernetes (especially within the context of this tutorial) are:

  1. Pod, which refers to one or more containers along with their attached volumes
  2. Service, an abstraction which groups together pods based on labels or other characteristics
  3. Deployment, a resource that defines a stateless app with a certain number of pod replicas
  4. Ingress, a resource that exposes cluster applications to external traffic

A typical Kubernetes cluster consists of a control plane (often deployed on machines separate from the worker nodes) and the cluster’s nodes.

The control plane typically consists of:

  1. an etcd cluster – etcd is a distributed key-value store (https://github.com/coreos/etcd) used to house all vital information about the Kubernetes cluster.
  2. one or more kube-apiserver instances – the Kubernetes API server, which can be scaled horizontally
  3. a kube-controller-manager – a binary with multiple controllers, components of the cluster responsible for checking the status of nodes, pods, etc.
  4. a scheduler – the component of the cluster actually scheduling pods to run on particular nodes

Each node, or worker machine, consists of:

  1. docker or another container runtime such as rkt
  2. kubelet – the node agent watching for and running pods
  3. kube-proxy – a networking proxy that also runs on each node

When you put this all together, you get a control plane managing a set of worker nodes, which in turn run your application pods.

Note that there are several other types of Kubernetes resources as well as many different configuration options for cluster networking. Feel free to check out the latest documentation for more information on Kubernetes. This talk also covers the various networking options.

With that in mind, let’s begin!

First, navigate to the Minikube page and check out the installation instructions for the latest release (https://github.com/kubernetes/minikube/releases). Mac users, for example, should download the binary, tweak its permissions, and move it to their local binary folder:

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.15.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Now download kubectl, the Kubernetes command line tool, so that you can communicate with the cluster you are about to create:

$ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.5.1/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
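
Before moving on, you can verify that both binaries are installed and on your PATH; the exact version strings will depend on the releases you downloaded:

$ minikube version
$ kubectl version --client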

Start your Minikube cluster:

$ minikube start

If everything worked as intended, you should see the following:

Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.

Note that, similar to many other CLI tools, the --help flag can be used alone or with a particular command or subcommand.
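
You can also quickly confirm that kubectl is pointed at the Minikube cluster; the master address in the output will reflect your Minikube VM’s IP:

$ kubectl cluster-info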

Let’s start by first checking how many nodes are up:

$ kubectl get nodes
NAME       STATUS    AGE
minikube   Ready     1h 

As we expected, this is a single-node Kubernetes cluster.
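
We can also check on the health of the control plane components described earlier. In Minikube, these components all run inside a single local process, but they still report their status:

$ kubectl get componentstatuses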

A common command used to find more information about various resources in the cluster is kubectl describe. Let’s use it to determine more information about the single node:

$ kubectl describe node minikube
Name:     minikube
Role:
Labels:   beta.kubernetes.io/arch=amd64
          beta.kubernetes.io/os=linux
          kubernetes.io/hostname=minikube
Taints:   <none>

Now let’s take a look at the pod resource.

$ kubectl get pods
No resources found.

As expected, we don’t have any pods deployed to the cluster yet! Let’s deploy a small web server (https://hub.docker.com/r/andygrunwald/simple-webserver/) as an individual pod in the cluster:

First, create a server.yaml pod spec file. This will contain all the configuration options for the pod:

$ vim server.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testserver
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: hello
    image: andygrunwald/simple-webserver
    ports:
      - containerPort: 8082

Now, let’s deploy this single pod:

$ kubectl create -f server.yaml
pod "testserver" created 

Check the status of the pod!

$ kubectl get pods
NAME         READY     STATUS    RESTARTS   AGE
testserver   1/1       Running   0          4s 

And let’s take a closer look:

$ kubectl describe pod testserver

Name:          testserver
Namespace:     default 
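
You can also tail the web server’s logs directly from the pod; the exact output depends on what the image writes to stdout:

$ kubectl logs testserver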

Now what happens if we delete this pod? Try it:

$ kubectl delete pod testserver
pod "testserver” deleted ,/pre>

As you can see, the pod is completely gone: 

$ kubectl get pods
No resources found.

In general, however, most users do not deploy an application as a single pod on Kubernetes. To take full advantage of the power of Kubernetes’ controllers and schedulers, let’s deploy the same test server as a deployment:

$ vim server_deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: testserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: testserver
    spec:
      containers:
      - name: testserver
        image: andygrunwald/simple-webserver
        ports:
        - containerPort: 8082
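
Create the deployment from this spec just as we created the standalone pod:

$ kubectl create -f server_deployment.yaml
deployment "testserver" created

Then check on the deployment and the pod it manages: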

$ kubectl get deployment
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
testserver   1         1         1            1           6m

$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
testserver-3557869629-4xhzn   1/1       Running   0          5m

Now let’s see what happens when the individual pod is deleted:

$ kubectl delete pod testserver-3557869629-4xhzn
pod "testserver-3557869629-4xhzn” deleted

$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
testserver-3557869629-sg86x   1/1       Running   0          3s 

Aaannd it’s back! When applications are deployed as replicaset, deployment, or daemonset resources, the underlying contract is that if a specific application pod is deleted (for example, if a node suddenly goes down), the kube-controller-manager will detect the change in the pod’s status during its reconciliation loop and recreate the pod; the scheduler will then place the new pod on an available node. Applications deployed in this manner are understood to be stateless and can be killed and recreated with ease. (Note, however, that there is a resource, the StatefulSet, for applications that do have state.)
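
Under the hood, the deployment actually manages a replicaset, which in turn maintains the desired number of pod replicas. You can see it for yourself; the hash suffix in the name will differ in your cluster:

$ kubectl get replicasets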

But what if we want to access or curl this simple test server from outside of Minikube? As mentioned before, there are many ways to set up cluster networking and to expose an in-cluster application to external traffic (https://kubernetes.io/docs/user-guide/connecting-applications/#exposing-the-service, https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types).

In the case of Minikube, however, we will be using the NodePort type of the service resource. Remember that a service is essentially an abstraction that groups together pods based on certain properties. With type NodePort, our service will be exposed on the underlying node’s IP address (the Minikube VM’s IP, which is reachable from the host machine) at a specific, static port: the NodePort.

$ vim server_svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: testserver
  labels:
    run: testserver
spec:
  type: NodePort
  ports:
  - port: 8082
  selector:
    run: testserver

$ kubectl create -f server_svc.yaml
service "testserver” created

$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes   10.0.0.1     <none>        443/TCP          6h
testserver   10.0.0.254   <nodes>       8082:30718/TCP   7s

$ kubectl describe service testserver
Name:              testserver
Namespace:         default
Labels:            run=testserver
Selector:          run=testserver
Type:              NodePort
IP:                10.0.0.254
Port:              <unset>  8082/TCP
NodePort:          <unset>  30718/TCP
Endpoints:         172.17.0.3:8082
Session Affinity:  None
No events.

Note that for this example, we are grouping together pods based on the run=testserver label selector.
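
You can use the same label selector directly with kubectl to list exactly the pods the service targets; the pod name hash will differ in your cluster:

$ kubectl get pods -l run=testserver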

Yay! Let’s try curl’ing the application endpoint:

$ minikube ip
192.168.99.100
$ curl 192.168.99.100:30718
See Other.

And it works! Once again, if the pod is killed, it is quickly recreated. And due to our test server service and exposed NodePort, we can still curl the server endpoint from outside the cluster.

$ kubectl delete pod testserver-3557869629-fc8nd
pod "testserver-3557869629-fc8nd” deleted

$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
testserver-3557869629-23gvq   1/1       Running   0          3s

$ curl 192.168.99.100:30718
See Other.

Let’s try scaling up our single-instance web server. Modify the deployment spec so that replicas is now set to 3:

$ vim server_deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: testserver
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: testserver
    spec:
      containers:
      - name: testserver
        image: andygrunwald/simple-webserver
        ports:
        - containerPort: 8082

Now, let’s redeploy this app:

$ kubectl replace -f server_deployment.yaml
deployment "testserver” replaced

$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
testserver-3557869629-23gvq   1/1       Running   0          10m
testserver-3557869629-c84z6   1/1       Running   0          5s
testserver-3557869629-m78gw   1/1       Running   0          5s 

Our service with the exposed NodePort is still here:

$ curl 192.168.99.100:30718
See Other.

And we can still curl the server endpoint! This time, however, the traffic is experiencing service-level load balancing across our three pods.
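
You can confirm this by looking at the service’s endpoints, which should now list three pod IP addresses; the addresses themselves will differ in your cluster:

$ kubectl get endpoints testserver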

Note that there are limitations to this setup. Using a NodePort service exposes a port between 30000 and 32767 on the cluster’s nodes. This becomes untenable as the number of nodes in the cluster increases, and it is perhaps not the most secure approach.

As a result, within real-world production clusters, users often utilize the load balancer or ingress and ingress-controller primitives to route external traffic into the cluster. This is often done in addition to deploying in-cluster DNS that provides service-name-to-IP-address mapping. This blog post provides an excellent description of utilizing ingress and ingress controllers.
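
As a rough sketch (we will not deploy this here), an ingress that routes traffic for a hypothetical hostname to our testserver service might look something like the following; note that it only takes effect if an ingress controller is running in the cluster:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: testserver
spec:
  rules:
  - host: testserver.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: testserver
          servicePort: 8082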

To summarize, today you:

  1. Learned about the basic structure of a Kubernetes (k8s) cluster.
  2. Learned to get up and running with a simple local cluster, Minikube, and the official k8s CLI tool, kubectl.
  3. Created and destroyed both a pod resource and a deployment resource.
  4. Created a service and used the NodePort to expose the application to external traffic.

To continue your Kubernetes journey, I would highly suggest going through the basic Kubernetes interactive bootcamp. In addition, take a look at the Kubernetes blog. With some of the latest releases, it is becoming increasingly easy to deploy and maintain Kubernetes clusters on your favorite cloud providers.


Sneha Inguva is an enthusiastic software engineer currently working on building developer tooling at DigitalOcean. She has worked at a variety of startups in the last few years and has a unique perspective on building and deploying software in eclectic verticals - education, 3D printing, and casinos to name a few. When she isn’t bashing away on a project or reading about the latest emerging technology, she is busy rescuing animals or doing martial arts.

