It’s easy to think that newer technology will magically succeed where older technologies have failed. The ground reality, however, is quite different. You usually end up with a hybrid mix of technologies that don’t always work well together. It lives up to the modern definition of a hybrid environment: you already have multiple public clouds, private clouds, and on-premises data centers in the mix. Throw in edge computing and you’ve added yet another layer of infrastructure to an already complex environment.
Edge Infrastructure Challenges
Edge computing today is still in its infancy. Skilled engineers are hard to find, and it’s equally hard to ensure the edge integrates well with the rest of your infrastructure rather than becoming “siloed.” Edge locations typically don’t have on-site technical staff to handle problems that surface only at the edge, so zero-touch remote operations are often needed for everything from bare metal and networking to the orchestration stack and the applications on top. This is a hard problem to solve, but it can be done with a solution that remotely manages bare-metal provisioning and treats it like a cloud.
To add to that, the cloud isn’t going anywhere, and neither is the on-premises data center. While there are a number of specialized edge platforms that let you bring cloud capabilities to the edge, they’re vendor-specific and don’t support a mix of multi-cloud or on-premises deployments. You need a platform agile and flexible enough to manage the virtually limitless resources of the cloud on one side, the very finite resources of the edge on the other, and everything in between.
Everywhere you look, you’ll read that edge computing decreases latency by making sure much of the heavy lifting happens at the edge, which is great. What those pitches rarely mention are the trade-offs in scalability and stability, both of which can drive latency right back up. Having to integrate with overloaded, malfunctioning edge machines, which are typically ill-maintained and underpowered compared with cloud resources, can put considerable strain on the rest of the system.
Additionally, while edge computing takes load off the central network, it adds a lot back in the form of complexity: thousands of interconnected nodes that all need to perform independently at the same time. What you need is a platform scalable enough to manage the near-limitless resources of the cloud, yet conservative enough to make the most of scarce resources at the edge. Yes, we’re talking about containers, orchestration, and the de facto standard: Kubernetes.
The Platform for Platforms
If you have a cloud platform, an on-premises solution, something else running your legacy equipment, and then an IoT platform, your IT staff is probably close to calling it quits. Conventional approaches are built around centralized management and coordination, and cannot sustainably deliver consistent geo-distributed deployments. Kubernetes, on the other hand, can deploy an entire stack anywhere, consistently and across distributed environments: public clouds, on-premises data centers, and the edge.
In addition to a broad range of infrastructure APIs that let you access and configure almost every element of a distributed environment, Kubernetes gives you a single control plane for everything, which is something you can’t really put a price on. Its resource types are also wide-ranging and configurable enough to accommodate an extra layer of edge infrastructure. Add the fact that it scales almost without limit, and you have a real argument for Kubernetes as the platform for all platforms.
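To illustrate how ordinary Kubernetes resource types can be bent toward edge infrastructure, here is a minimal sketch of a Deployment that pins workloads to edge nodes and caps their resource usage. The node label, image name, and resource figures are illustrative assumptions, not part of any specific platform:

```yaml
# Hypothetical example: the "node-role/edge" label and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-aggregator
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sensor-aggregator
  template:
    metadata:
      labels:
        app: sensor-aggregator
    spec:
      nodeSelector:
        node-role/edge: "true"   # schedule only onto nodes labeled as edge
      containers:
      - name: aggregator
        image: example.com/sensor-aggregator:1.4
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
          limits:                # cap usage on constrained edge hardware
            cpu: 250m
            memory: 128Mi
```

The same manifest applies unchanged to a cloud cluster or an on-premises one; only the node labels differ, which is the portability argument in practice.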
Despite all the benefits of Kubernetes, there is still the challenge of managing hundreds of Kubernetes clusters running in distributed locations. What’s required is a centralized management plane that provides consistent governance, visibility, and RBAC controls across all of them. This is no easy task.
Taming the Beast
It’s common knowledge that the more powerful and versatile a tool is, the harder it is to use and configure; and Kubernetes is very powerful and very versatile. Imagine sitting in the cockpit of an advanced aircraft with no previous training or experience. That’s what it feels like for IT staff who are new to Docker, containers, Kubernetes, Flannel, Istio, and Prometheus. In fact, you could probably learn to fly the aircraft faster than you could figure out how to deploy and maintain a Kubernetes-powered hybrid edge environment on your own.
So, while it’s certainly possible to go the DIY route and string together all the required open-source tools and components, many organizations are finding value in deploying Kubernetes as a managed service. This not only reduces the learning curve but also simplifies “Day 2” operations, which is where the majority of the heavy lifting resides. Depending on the vendor, these services range from ones where the vendor provides the infrastructure to ones where you bring your own hybrid infrastructure.
Optimize Against High Latency
Latency is usually a symptom of overload, either in the network or somewhere along the data path. Kubernetes fights high latency by letting users define fine-grained network policies that control traffic between pods, as well as to and from external sources. Additionally, though it isn’t an out-of-the-box feature and needs to be configured, Kubernetes supports highly available clusters with load balancing and a replicated etcd store, all of which are key to driving latency down.
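As a concrete sketch of such a policy, the NetworkPolicy below restricts ingress to a workload so that only a designated gateway can reach it, keeping stray traffic off constrained edge nodes. The pod labels and port are hypothetical placeholders:

```yaml
# Hypothetical example: "sensor-aggregator" and "gateway" labels are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: aggregator-ingress
spec:
  podSelector:
    matchLabels:
      app: sensor-aggregator   # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: gateway         # only the gateway pods may connect
    ports:
    - protocol: TCP
      port: 8080
```

Note that NetworkPolicy is enforced by the cluster’s network plugin, so the CNI in use (Calico, Cilium, and so on) must support it for the policy to take effect.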
In conclusion, with edge servers growing more powerful almost daily and the world of IoT devices larger and more diverse than ever, the term “future compatible” cannot be emphasized enough. This is probably the icing on the cake in terms of what Kubernetes can do for IoT and the edge: it leaves the door open for any kind or number of new devices to be accommodated at any point in the future.