DaemonSets in OpenShift and Kubernetes


There are times when the same application needs to run on every node in a specific set of nodes in a cluster, or even on every node in the entire cluster. These are usually applications that provide some kind of system management functionality. Examples include storage processes that use local storage devices attached to the nodes, and log consolidation and diagnostic services that often feed data into an AIOps solution.

Why DaemonSets?

The DaemonSet was created to address a specific need that was previously handled either by an application using a custom controller or, more often, managed at the system level outside the Kubernetes cluster: as a manifest under the kubelet's control if it was a container, or as a systemd-managed service. By having these system applications run as part of the cluster, nodes can be added and removed and all the processes they need to run are applied automatically. This keeps the infrastructure components extremely generic, so they can be reused across any cluster without custom deployment scripts per cluster.

The one real caveat with using a DaemonSet in pure upstream Kubernetes is that its pods are created and scheduled by the DaemonSet's own controller, which does not take into account any pod priority and preemption rules the default scheduler may be using. Red Hat OpenShift enables ScheduleDaemonSetPods by default, which has DaemonSet pods scheduled by the default scheduler so they respect these rules.
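As a sketch of what that enables, the DaemonSet's pod template can reference a priority class so the default scheduler can preempt lower-priority pods when a node is full. The snippet below uses one of the built-in system priority classes; whether a DaemonSet actually needs it depends on the workload:

spec:
  template:
    spec:
      # Built-in high-priority class; with ScheduleDaemonSetPods, the default
      # scheduler can preempt lower-priority pods to place this DaemonSet pod.
      priorityClassName: system-node-critical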

Creating a DaemonSet

Creating a DaemonSet is much like creating any other deployment in Kubernetes. It is defined in a YAML file and installed with kubectl apply, kubectl create, or oc create if you are using OpenShift.

In addition, if you are using OpenShift you need to clear the default node-selector configuration in the namespace you plan to use for the DaemonSet. This requires cluster-admin access.

$ oc patch namespace myproject -p \
    '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'

The actual YAML file contains the API version, the kind DaemonSet, and some metadata to define things like the name of the DaemonSet. The additional information under spec is essentially a standard pod definition, just embedded under spec instead of being its own YAML file. If there is a node-selector listed in the spec section of the DaemonSet, the deployment will be limited to nodes with that label applied; if there is no node-selector, the DaemonSet will run on every node in the cluster. (A nodeSelector example follows the manifest below.)

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: broadcom-daemonset
spec:
  selector:
    matchLabels:
      name: broadcom-daemonset
  template:
    metadata:
      labels:
        name: broadcom-daemonset
    spec:
      containers:
      - image: openshift/hello-openshift
        imagePullPolicy: Always
        name: registry
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      serviceAccount: default
      terminationGracePeriodSeconds: 10
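For example, to limit the DaemonSet above to a subset of nodes, a nodeSelector could be added to the pod template. The label used here is purely illustrative and would need to exist on the target nodes:

  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""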

And then you create the DaemonSet:

$ oc create -f daemonset.yaml

Finally, verify it is running on all nodes:

$ oc get pods -o wide | awk '{print $1" "$2" "$3" "$7}'
NAME                     READY STATUS  NODE
broadcom-daemonset-5tqwl 1/1   Running compute-1585532381-000003
broadcom-daemonset-s9j6k 1/1   Running compute-1585532381-000001
broadcom-daemonset-tc64g 1/1   Running compute-1585532381-000002
broadcom-daemonset-w75mg 1/1   Running compute-1585532381-000000
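You can also check the DaemonSet object itself; the DESIRED and CURRENT counts should match the number of nodes it is eligible to run on. The output below is illustrative for the four-node example above:

$ oc get daemonset broadcom-daemonset
NAME                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
broadcom-daemonset   4         4         4       4            4           <none>          2m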

Additional Information

DaemonSets can be deleted at any time, and they can also be updated using a rolling update strategy like any other deployment.
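A minimal sketch of the relevant stanza in the DaemonSet spec, assuming the default RollingUpdate strategy and replacing one pod at a time:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # Replace DaemonSet pods on one node at a time.
      maxUnavailable: 1

A rollout can then be followed with oc rollout status daemonset/broadcom-daemonset.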

There are four ways to communicate with the pods in a DaemonSet. Like all pods, they can be accessed via the node IP and a known port, which is less flexible than using either a headless service or a regular service. The final option is for the pods to be push-only, where the pods in the DaemonSet only send information to other pods in the cluster. This is common when the DaemonSet is collecting and aggregating data like log files.
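As an example of the service-based options, a headless service (clusterIP: None) that selects the DaemonSet's pod labels lets other workloads discover each pod's address through DNS. The service name below is illustrative:

apiVersion: v1
kind: Service
metadata:
  name: broadcom-daemonset
spec:
  # Headless: DNS returns the individual pod IPs instead of a single cluster IP.
  clusterIP: None
  selector:
    name: broadcom-daemonset  # matches the labels in the DaemonSet pod template
  ports:
  - port: 80
    protocol: TCP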

For more detailed information on DaemonSets, see the official Kubernetes documentation and Red Hat OpenShift's documentation.


Vince Power is an Enterprise Architect with a focus on digital transformation built with cloud enabled technologies. He has extensive experience working with Agile development organizations delivering their applications and services using DevOps principles including security controls, identity management, and test automation. You can find @vincepower on Twitter. Vince is a regular contributor at Fixate IO.

