I never anticipated this fangirl moment:
There were many opportunities to meet Kubernetes project maintainers and industry stalwarts at KubeCon + CloudNativeCon in San Diego. In addition to their presentations, Joe Beda, as well as John Belamaric & Cricket Liu, gave away signed copies of their books while patiently answering questions at their respective booths. But for me, the highlight was the open-ended, freestyle Q&A with Brendan Burns; seating was limited, but it was open to everyone, and it ran well over an hour and a half. I am grateful to the Microsoft KubeCon event organizers for giving us this rare opportunity.
Here is an example of our conversation about data and AI services:
At one point during the Q&A, Brendan theorized that stateful workloads like relational databases are better left in their non-container, pre-cloud environments, where they have performed exceptionally well over the years. If you do have the chance to move them to the cloud, he said, it would be better to try out the cloud provider's database-as-a-service offerings rather than containerize them indiscriminately.
I asked a follow-up question: wouldn't this limit us to traditional choices and prevent the adoption of new technologies built especially for the cloud (such as AWS Aurora or DynamoDB)? Brendan clarified that this is where we would have to make an informed decision: lock in to a cloud provider's technology, or wait for open technologies to catch up on such innovation.
Brendan added that databases in their non-containerized forms (whether managed or self-supported) can be further managed with Azure Arc (announced on Nov. 5, 2019), a technology intended to extend the Azure control plane to on-premises, hybrid-cloud, multi-cloud, and edge environments. Then he went to the whiteboard to illustrate.
Since Azure Arc can manage VMs, stateful services can either migrate to the cloud or stay where they are without needing to be containerized. With this, I think Microsoft has an advantage over Google's equivalent, Anthos (announced in April 2019), which can only manage containerized workloads.
Brendan explained that his team had looked into automation tools for containerizing legacy applications, but many of the use cases ran into tricky situations, so they didn't pursue that path further. I think Google wins out here, since it provides Migrate for Anthos (previously called Velostrata) to do just that.
Then I asked him how to approach AI training workloads on Kubernetes, and he suggested that I try out KEDA (for event-driven autoscaling) and Kubeflow (for managing ML pipelines and training jobs).
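To make that suggestion concrete, here is a minimal sketch of a Kubeflow TFJob manifest for a distributed TensorFlow training run. The job name, image, and worker count are all hypothetical placeholders, not anything Brendan showed:

```yaml
# Hypothetical example: a Kubeflow TFJob that runs two training workers.
# Assumes the Kubeflow training operator is installed in the cluster.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: example-trainer        # placeholder name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2              # scale out training across two pods
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              image: example.io/ml/trainer:latest   # placeholder image
              command: ["python", "train.py"]
```

KEDA would complement this by scaling workloads up and down based on event sources (queue length, metrics, and so on) rather than CPU alone, which suits bursty training and inference pipelines.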
Of course, he discussed many other technologies during this extensive Q&A, such as Virtual Kubelet (AKS & ACI), OPA, SMI, Dapr, CNAB, Duffle, Draft, Helm and so on…but that's another story for another day. Stay tuned!