Maybe you have already heard the terms “Programmable Infrastructure” and “Infrastructure as Code.” But even if you have not, if you are interested in DevOps they are terms you need to know. Most new lingo causes me to roll my eyes, but in this case it’s not just new lingo; it’s an entirely new concept. Besides making the nerdy feel cool when they say them, these terms describe a new idea of what an application consists of.
— As originally posted on DevOps.com —
Oh great, yet another term for something we have done for years. Often I decide to use a term just to stay relevant, so I start calling the “Web” the “Cloud” just so I can have a conversation with someone about it. Sometimes I surprise myself and fully latch onto a term after I spend time getting to know it and what it really means. But rarer still do I find terminology that really makes me rethink how things are done. Programmable Infrastructure and Infrastructure as Code are such terms.
Infrastructure as Code visualized
OK, maybe it’s not entirely new
The idea of automating things is not new. Hell, I’ve been using BAT files, and more recently PowerShell scripts, to do all sorts of things on my infrastructure for many years now. But something changes when this automation meets virtualization and languages designed especially to describe the orchestration of infrastructure. We are not just talking about re-packaging technologies and calling them something new. We are talking about a whole new concept that will change who does it, how they do it, and how effective they can be.

So what is Programmable Infrastructure? Programmable Infrastructure and Infrastructure as Code are synonymous. I learned of “Programmable Infrastructure” from an analyst at Gartner, which I suspect is the origin, and “Infrastructure as Code” from the land of blogs. The latter uses an “as ______,” which seems to be the magic recipe for creating a new term these days. What they both mean is that, instead of manually configuring infrastructure, you can write scripts to do it. But not just scripts: you can actually fully incorporate the configuration into your application’s code. This has been possible for a long time, but it has been limited; you almost always hit a wall on the types of things you can do. New tools such as Vagrant, Ansible, Docker, Chef, and Puppet, either independently or combined, make it possible to do just about anything you could do manually, automatically, at both the infrastructure layer and the operating system layer. Even once you have created the infrastructure, you can use these tools to run yet more scripts, even your trusty old BAT and ps1 files, on what you just built. It’s a kind of reflection for infrastructure orchestration.
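To make the core idea concrete, here is a minimal sketch of it in Python. Everything in it is hypothetical, not any particular tool’s format: the infrastructure is described as plain data, and a small engine walks the description. A real tool like Vagrant, Ansible, Chef, or Puppet would talk to a hypervisor or cloud API where this sketch just records what it would do.

```python
# A minimal, hypothetical sketch of "infrastructure as code":
# the desired infrastructure is plain data, and a small engine
# turns that description into (simulated) provisioning actions.

INFRASTRUCTURE = {
    "network": {"name": "app-net", "cidr": "10.0.0.0/24"},
    "vms": [
        {"name": "web-1", "cpus": 2, "ram_gb": 4},
        {"name": "db-1", "cpus": 4, "ram_gb": 16},
    ],
}

def provision(spec):
    """Walk the description and emit one action per resource.

    A real orchestration tool would call a hypervisor or cloud
    API here; this sketch just records what it would do, which
    makes every run repeatable and inspectable.
    """
    actions = []
    net = spec["network"]
    actions.append(f"create network {net['name']} ({net['cidr']})")
    for vm in spec["vms"]:
        actions.append(
            f"create vm {vm['name']}: {vm['cpus']} cpus, {vm['ram_gb']} GB ram"
        )
    return actions

if __name__ == "__main__":
    for action in provision(INFRASTRUCTURE):
        print(action)
```

The point of the sketch is the shape, not the code: because the description is data, it can live in source control next to the application, be diffed, reviewed, and re-run, which is exactly what the real tools give you.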
Full control for the Developer
One could argue that this is just configuration management, and in a sense it is, just fully automated. But there is another big difference: it also means that the developer can do it all. Whereas in the past all this orchestration and automation belonged to the sysadmin, now the developer at the very least has to advise, and in many cases fully takes over the process. This makes the relationship between application code and the infrastructure it runs on even tighter, almost indistinguishable. Now developers can not only write the application’s code, they can write the infrastructure that the code runs on. It’s not uncommon in the web development world to see developers creating things like a “deploy.php” which lays out the infrastructure for the entire application to run on. The script starts with the creation of VMs and ends with a pull from a source repository. My current opinion is that the organization should form a small team responsible for these scripts, as the developer’s time should be spent on bug fixes and enhanced functionality. But in any case, there cannot be such a divide between IT and development if this is to work.
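A “deploy.php” like the one described above can be sketched in a few lines. This is schematic, not a real deploy script: every function name here is hypothetical, and the provider calls are stubs that append to a log where a real script would call a cloud SDK or REST API. The flow matches the text, starting with VM creation and ending with a pull from a source repository.

```python
# A schematic deploy script in the spirit of the "deploy.php"
# described above. The provider calls are stubs (names are
# hypothetical); a real script would call a cloud SDK or REST
# API instead of appending to a log.

def create_vm(log, name):
    log.append(f"vm {name} created")
    return name

def pull_source(log, vm, repo):
    # Last step: fetch the application code onto the new server.
    log.append(f"{vm}: git pull {repo}")

def deploy(repo):
    """Lay out the infrastructure, then pull the code onto it."""
    log = []
    web = create_vm(log, "web-1")
    create_vm(log, "db-1")
    pull_source(log, web, repo)
    return log
```

Whether a developer or a dedicated scripting team owns a file like this, the key property is the same: the whole deployment, infrastructure included, is one reviewable piece of code.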
Show me the value
Beyond all the efficiency and repeatability benefits you get with configuration management, Infrastructure as Code offers other things.

First, the scripts essentially also serve as documentation for the application’s infrastructure. This is a great side benefit: because you have put in the effort to create the scripts, it’s nice that they also read like a network diagram for your infrastructure. Perhaps someone can take this to the next step and actually generate layman-friendly content, diagrams and documents, from the scripts.

Next, because everything is initiated from the scripts, you have a consistent place to start all deployments from, so you know that each deployment comes with the same flavor of infrastructure. That is the idea, anyway; in practice I’ve seen many organizations pollute their scripts by running them on existing infrastructure, or by making manual modifications to the configuration after the automation has run, and not being consistent about doing so.

It also builds in infrastructure independence. Done correctly, the scripts can be run on any cloud: public, private, hybrid, who cares. Once the world of software-defined networks (SDNs) dominates the physical world of routers, there won’t just be 90% independence, there will be 100%. This independence is powerful for organizations that want to avoid cloud lock-in and/or increase redundancy by running on multiple clouds. Not only that, different use cases can be run on specific clouds. For example, there could be an integration-testing cloud that is separate from the production cloud, which is separate from, say, a beta-release cloud.

And finally, because it’s repeatable, deployments can be more than just copying assets and running code. Deployments can include the infrastructure as well, so with every deploy there is a new set of infrastructure that the code is deployed on. This avoids contamination and helps QA avoid the “it worked on my machine” problem. It also allows the full suite of tests, unit to integration to functional, to be run on brand-new instances of infrastructure, which has the added benefit of being able to snapshot complete deployments, code and infrastructure, where issues occur, instead of sending cryptic screenshots and text bug descriptions. To make it even more interesting, most of the public clouds themselves now have limited or full-blown REST-based APIs that allow you to code against their cloud. So I can make a REST call to my production cloud to spin up a front-end and a database server, right before I do a pull from GitHub to those servers.
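The REST-call pattern looks roughly like this. Everything here is an assumption for illustration: the endpoint URL and the payload shape are hypothetical, since each provider (AWS, Azure, and so on) defines its own API. The sketch builds the requests a deploy script would send to spin up the front-end and database servers, without actually sending them.

```python
import json

# Hedged sketch of driving a cloud's REST API from a deploy
# script. The endpoint and payload shape are hypothetical; each
# provider has its own API, but the pattern is the same:
# describe the server in JSON, then POST it.

API = "https://cloud.example.com/v1"

def spin_up_request(role, size):
    """Build the (hypothetical) request that creates one server."""
    return {
        "url": f"{API}/servers",
        "body": json.dumps({"role": role, "size": size}),
    }

def deploy_plan():
    # One front-end and one database server, as in the text,
    # created right before the code is pulled onto them.
    return [
        spin_up_request("front-end", "small"),
        spin_up_request("database", "large"),
    ]
```

In a real script, each request would then be POSTed with the provider’s authentication, and the git pull would run once the API reports the servers are up.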
A new set of challenges
There are a few gaps preventing Infrastructure as Code / Programmable Infrastructure from being the norm. They are:
- Still limited on the Microsoft stack. The full world of flexibility comes with Linux infrastructure. While there are a lot of possibilities with the Microsoft stack, it’s still only, say, 40% there. This is due to the closed nature of Windows, but also to the lack of interest in creating tools specific to it. When you talk about Azure PaaS, however, it’s a different story. It’s safe to say that if you are interested in Programmable Infrastructure for the Microsoft stack, you are going to pay more, and you will face a new sort of lock-in, using VIX with VMware or System Center Operations Manager (SCOM) for orchestration.
- Multi-tier applications make the scripting exponentially more complex, and adding networking and security as components can make it a nightmare. When you start dealing with networked VMs and the setup and security of VLANs, things get complicated quickly when it comes to scripting out their deployments. A VLAN can easily be treated like a VM itself, but the orchestration languages do not have robust enough support for them today to make it a full solution. The trend I see is finding ways to remove this variable from the application’s system architecture.
- Infrastructure for the infrastructure tools has to be thought of first. It’s a weird problem to have: you have to figure out where and how your various orchestration tools are going to run, which itself requires infrastructure, or infrastructure planning. Chicken or egg, and so on. And sometimes it is really annoying to plan out the hypervisor, install agents on VMs, get the security and access to create VMs, etc. Solving this can be as simple as a small organizational change, or picking the right VM templates on Azure or AWS, while other times development teams just lack the means to use the tools they want to.
- The various tools still require specialization. Today all the tools that are available require a lot of effort to learn: how they work, the syntax of their language, etc. This is often too large a barrier for many organizations to spend time and money on.
- When applications are all PaaS, or a combination of PaaS and infrastructure, either in the cloud or on-prem, it becomes harder to write all-encompassing scripts, and you have to maintain several scripts instead. This brings the added problem of combining their execution.
These are the areas where I expect dramatic improvement in the market over the next two years, and I’m confident that it will all become easier. Eventually you will be surprised to hear that anyone is manually configuring any infrastructure. When I first started getting into virtualization, I was fond of saying “now whole machines are just documents.” And really, this is all an application is as well: a collection of files, and the files have instructions that are executed in some order. So eventually the idea that an application’s code and the infrastructure it runs on are separate things goes out the window; everything is all-inclusive. I’m not sure whether the term that sticks will be “Programmable Infrastructure” or “Infrastructure as Code.” But I do know that the idea that applications can now contain the infrastructure they run on furthers the DevOps movement and builds in a lot more flexibility than anyone ever thought possible.