If you ask a network manager to describe Kubernetes, chances are pretty high that the word hell shows up in the description. This is because networking for containers really is hell, especially since containers are “ephemeral” and come and go by the hour. There’s no practical way to apply networking services to something you know almost nothing about, especially regarding where it lives and how long it’s going to stay there. This makes it very hard to maintain visibility across a network that’s already so complicated it’s being referred to as a mesh — and counting the connections in that mesh is about as easy as counting the wires in a window screen, which is exactly what Kubernetes networking is like.
Moving On from APIs
Though API management has proven to be of some assistance in managing the communication between the services that make up a microservices application, API management just doesn’t cut it for modern high-performance apps that use thousands of microservices. A platform dedicated to managing the service mesh is the need of the hour, and this brings us to Istio, a service mesh management tool built in a joint collaboration between IBM, Google, and Lyft.
A service mesh is the layer where all the services of an application interact with each other. As their interactions increase and grow complex, new challenges arise, like load balancing, failure recovery, service discovery, and monitoring. Solving these problems requires a new approach to networking where the network itself evolves to be more accommodating. Simply taking a virtual workload and shoving it into containers, however, won’t solve the aforementioned issues.
Istio and the Service Mesh
Istio makes those problems go away by adding an extra layer of infrastructure dedicated specifically to microservices communication. That layer sits between a service and the network, giving operators the controls they need and freeing developers from having to solve distributed-systems problems in their code.
Istio is designed to run in any environment on any cloud, even though it only supports Kubernetes at present. Future updates aim to enable rapid and easy adaptation to other environments, such as VMs and Cloud Foundry. Integration with other platforms like Google’s Apigee is also in the cards; Apigee, which Google acquired last year, is one of the leading API management tools right now.
Istio works by deploying an array of network proxies alongside your code that intercept all network communication. These proxies are also referred to as sidecars, and your services are not even aware of their existence. Since the sidecar proxies attach themselves automatically and are relatively easy to set up, Istio doesn’t have a very steep learning curve. The use of sidecar proxies also enables a gradual and transparent introduction without any major architectural or code changes.
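To picture what that injection produces, here is a hypothetical Kubernetes pod spec after a sidecar has been added — the container names, image tags, and port numbers below are illustrative assumptions, not taken from an actual Istio release:

```yaml
# Hypothetical pod spec after sidecar injection. The application container
# is untouched; the injected Envoy proxy transparently intercepts its traffic.
# All names, images, and ports here are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: reviews-v1
spec:
  containers:
  - name: reviews              # your application, unchanged
    image: example/reviews:v1  # illustrative image
    ports:
    - containerPort: 9080
  - name: istio-proxy          # injected Envoy sidecar
    image: istio/proxy:0.1     # illustrative tag
    args: ["proxy", "sidecar"]
```

The key point is that the application container’s spec doesn’t change at all — which is why services remain unaware of the proxy sitting next to them.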
Istio then automatically collects all service metrics, logs and call traces for traffic within a cluster, and configures and manages the service mesh accordingly. It does this using Istio’s control plane functionality to deliver the required service attributes, like fine-grained routing, load balancing, authentication, monitoring and more.
Breaking It Down
Additionally, it only takes a single command to install Istio, which then automatically detects new services and includes them in the mesh, growing with your system. This is probably where we should mention that Istio is made up of four core components: Envoy, Mixer, Pilot, and Istio-Auth. While Envoy is the sidecar proxy we were talking about earlier, Mixer is in charge of network policies and access control. Mixer adds a pluggable policy layer to the mesh that supports fine-grained access controls, rate limits, and quotas. Mixer also ingests metrics from the service mesh and delivers them to backends like Prometheus.
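As a rough idea of what a Mixer rate limit looks like, here is a sketch modeled on the quota mechanism found in later Istio releases — the kind, API version, and field names are assumptions and may not match the 0.1 schema:

```yaml
# Hypothetical Mixer rate-limit configuration, modeled on the in-memory
# quota adapter of later Istio releases; kinds and fields are assumptions.
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: request-count
spec:
  quotas:
  - name: requestcount.quota
    maxAmount: 500       # allow at most 500 requests...
    validDuration: 1s    # ...per second across the mesh
```

Because the policy layer is pluggable, the same quota intent could be backed by a different adapter (Redis, for instance) without touching application code.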
Pilot manages traffic across services, providing routing rules, policy, and service discovery information to the service mesh. Istio-Auth handles end-to-end encryption and user authentication, giving users control over communication between services; additionally, it can enforce authentication and authorization between any pair of services.
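A typical use of Pilot’s routing rules is a canary rollout. The sketch below follows the style of Istio’s early route-rule format — the service name and exact field names are illustrative and may differ by release:

```yaml
# Hypothetical Pilot route rule: send 90% of traffic to version v1
# and 10% to v2 of a "reviews" service. Field names follow Istio's
# early route-rule style and are assumptions, not a verbatim schema.
type: route-rule
name: reviews-canary
spec:
  destination: reviews.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: v1
    weight: 90
  - tags:
      version: v2
    weight: 10
```

Shifting the weights gradually from 90/10 to 0/100 completes the rollout — all without redeploying either version of the service.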
Freedom from Mesh Management
Service meshes like Istio (and Linkerd) empower users with control over the network, while at the same time decoupling them from the hassles of having to run it. This frees them to focus on important things like feature development and release processes and provides centralized management, regardless of the scale of applications.
Google has been using service meshes for well over a decade, providing high-performance services to entities like YouTube, Gmail, Cloud PubSub and Cloud BigTable. Eric Brewer, Vice President of Google Cloud, said, “Google’s experience is that having a uniform substrate for developing and operating microservices is critical to our ability to scale while maintaining both feature velocity and reliability.” By uniform substrate, he means a common microservice fabric, which is exactly what Istio delivers.
The initial (0.1) release was just announced at the GlueCon 2017 conference, and it’s definitely a significant step toward making containers more network friendly. The biggest problem we see with transitioning to containers is developers bogged down by engineering and change-management bottlenecks around a monolith, and Istio looks to fix that.
The fact that it’s open source is just icing on the cake, especially when the entire industry has spent years innovating through copyrights and patents. Open source has turned things upside down, and the best tools and applications today are all open source. Learning from one another, copying one another and doing everything possible to not duplicate efforts is now the name of the game.
Phil Calçado, who just joined Buoyant from DigitalOcean, said, “We need this code that each company writes over and over again to be as commonplace as the TCP/IP stack present in every operating system.” A great example would be the relationship between Linkerd and Istio, two different applications that would be bitter rivals in the old world—but here today, they’re both part of the CNCF, work great together, and both teams are doing everything they can to make sure both applications continue to complement each other.