Applications that are built to run and scale in the cloud need to be managed in a completely new way. DevOps is the new way of building and shipping applications in the cloud—but this is easier said than done.
It’s easy to make mistakes when doing DevOps. Given all the complexity of modern applications, their distributed infrastructure, large volume of traffic, and new tools and technologies, DevOps adoption can go wrong in a million ways.
That’s why automation is essential. By building automation into every stage of your SDLC, you can ensure your transition to DevOps is successful.
DevOps calls for a shift-left mentality
One of the biggest advantages of moving to a DevOps approach to software delivery is speed. Faster releases enable faster innovation. However, DevOps is not about speed alone; it is equally about quality. Building in quality from the start results in more stable, reliable, and highly available applications. The reason DevOps brings both speed and quality gains over traditional models of development is its shift-left approach.
What does shift-left mean? In the traditional Waterfall model, an idea typically starts upstream with product and business teams, then moves to development, where it takes the form of code. Then it is passed on to QA for testing. There are numerous back-and-forths between these three steps until the feature reaches an acceptable level of quality. Then it's thrown over the wall to IT, who deploy it, or send it back for more changes if it's not ready. Releases go wrong because of last-minute compatibility issues, bugs, security vulnerabilities, and more. Often, to everyone's surprise, the app is more unstable and error-prone in production than it was in testing. But this is an expected result when quality is not baked in from the start. Shift-left breaks the rigidity of Waterfall's linear left-to-right movement and brings testing to the very first step. It forces not just QA but other teams like Product and Development to think about the reliability of the features they build.
What this means, practically, is that whenever an idea or new feature is suggested, all teams (Product, Dev, QA, and Ops) come together to discuss the feasibility of the feature, what considerations should be kept in mind, and the minimum expectations for it. Two concepts that enforce this are Behavior-Driven Development (BDD) and Test-Driven Development (TDD). BDD uses user behavior as the standard for defining what success means for a feature. TDD defines a set of tests which the feature must pass before it's considered acceptable. In both approaches, the teams define what success means for every feature right at the start, before it is built.
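A minimal TDD sketch makes the workflow concrete. The feature, function, and test names below are hypothetical, chosen purely for illustration: the tests are written first, defining success before any feature code exists, and the implementation is then written and iterated on until both tests pass.

```python
# Tests come first: they encode the acceptance criteria agreed on
# by Product, Dev, and QA before the feature is built.

def test_discount_applied():
    # A 10% discount on 100.0 must yield 90.0.
    assert apply_discount(price=100.0, percent=10) == 90.0

def test_discount_never_negative():
    # Over-discounting must floor the price at zero, not go negative.
    assert apply_discount(price=5.0, percent=200) == 0.0

# The implementation is written (and refined) until every test passes.
def apply_discount(price: float, percent: float) -> float:
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)
```

In practice these tests would live in a test suite run automatically by the CI server on every commit, so a regression in the discount logic would fail the build immediately.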
Walmart uses BDD to shift left and adopt continuous integration (CI) and continuous delivery (CD). In their approach, the automated tests are defined before development begins. They then write basic code that initially fails most of the required tests, and with every iteration they reduce the number of failing tests until all of them pass. Once this happens, the feature is nearly ready for deployment.
There are many benefits to the shift-left methodology. It makes quality a priority for everyone, not just for QA and Ops. It saves cost and effort, and prevents a bad user experience by ensuring bugs are caught early on in the cycle. It forces Product, Dev, QA, and Ops teams to work together at the start to define what success means for the product or individual features. This helps break silos between them, as they use their unique expertise to solve common problems.
Shift-left requires automation at every stage
For shift-left to become a reality, you need automation at every stage of the pipeline. At the start, you need a CI tool like Jenkins to automate builds and trigger automated unit tests. The feedback loop at this stage should be quick, giving developers near-instant feedback on the quality of their code. Once the basic unit tests pass, QA performs more advanced functional testing, load testing, and more. At this step, mobile brings in new challenges: QA needs to ensure the code passes functional tests across multiple platforms and devices. This is where a test automation tool like Sauce Labs, which also offers a real-device testing cloud, is essential.
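As a rough sketch of what that fast feedback gate can look like, the snippet below runs a unit-test suite and surfaces its exit code so the CI tool can fail the build on the first failure. The directory layout and the use of pytest are assumptions for illustration, not a prescription.

```python
# Hypothetical CI gate: run the unit suite and return its exit code.
# The CI server (e.g. Jenkins) fails the build on any non-zero result.
import subprocess
import sys

def run_unit_tests(test_dir: str = "tests/unit") -> int:
    """Run the unit-test suite; non-zero means the build should fail."""
    # '-x' stops at the first failure, keeping the feedback loop short.
    completed = subprocess.run(
        [sys.executable, "-m", "pytest", "-x", "-q", test_dir]
    )
    return completed.returncode
```

A CI job would typically invoke this (or the equivalent shell command) as its first stage, before any slower functional or device testing is attempted.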
With the increased number of tests, most of them automated, you'll also need to automate another key aspect: infrastructure creation. Where VMs were once the default choice for running these tests, today containers are the norm. They are lightweight, easy to create, and can be scaled much more efficiently than VMs. Just as containers are new, so are the tools to automate their creation and management. Kubernetes has emerged as the most powerful container orchestration tool, and enjoys support from every corner of the container ecosystem. It enables you to separate the infrastructure layer from the application layer, so you can easily scale the creation of testing and production environments based on demand. The advantages are numerous: faster testing, because environments are ready much sooner, and more predictable test results, since everything is automated and not prone to human error. Finally, you'll experience faster deployments, which is the natural result of taking quality and automation upstream.
Automate security in the cloud
Containers are vulnerable both to attacks from outside and to a lack of oversight from within. As you automate testing and deployments, it becomes even more important to ensure you don't fall short on security. Containers operate at a much larger scale than VMs, and securing them is naturally more complex. Unlike VMs, which can be secured manually by configuring each instance with the same security parameters, containers require policy-based security that treats each container according to what it contains and how it interacts with other containers in the system.
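To illustrate what "policy-based" means in practice, here is a simple sketch (not any vendor's actual engine, and the specific rules are invented for illustration) that evaluates a policy against each container's own metadata instead of hand-configuring every instance:

```python
# Hypothetical policy evaluator: each container is checked against
# rules derived from its own metadata. The rules below are examples.

def evaluate_policy(container: dict) -> list:
    """Return a list of policy violations for one container."""
    violations = []
    if container.get("runs_as_root"):
        violations.append("must not run as root")
    if container.get("image", "").endswith(":latest"):
        violations.append("image tag must be pinned, not ':latest'")
    for port in container.get("exposed_ports", []):
        if port < 1024:
            violations.append(f"privileged port {port} not allowed")
    return violations
```

The same policy can then be applied automatically to millions of containers, with each one judged on its own contents and behavior, which is exactly what manual per-instance configuration cannot scale to.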
To meet the new security needs of containers, Twistlock has added two new features to its container security platform. First, there's the Cloud Native Application Firewall (CNAF), which handles automatic routing of traffic to containers. It integrates with the CI process and creates security policies to ensure that only scanned, safe traffic reaches containers. To scan large volumes of traffic in real time, it leverages machine learning algorithms. As a result, it automatically blocks dangerous traffic such as requests originating from Tor, DDoS attacks, and botnet activity. It reports vital details on vulnerabilities, including suspicious IPs and malware signatures. When running containers, security is critical, and CNAF provides much-needed threat detection during runtime.
Despite all your attempts to secure containers, vulnerabilities are bound to creep through at some point. What if one out of the millions of containers you run is compromised? That’s when you need to do damage control and restrict the perimeter of the breach to as few containers as you can. Twistlock has introduced the Cloud Native Network Firewall (CNNF) to do just this.
It is a container-to-container firewall that scans and controls access between containers. When it identifies a compromised container, it restricts that container’s access to the rest of the system. CNNF also uses machine learning to identify normal traffic patterns between containers, and can spot deviations, no matter how large the volume of traffic gets.
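The core idea behind baselining traffic and spotting deviations can be sketched with a toy statistical check. To be clear, this is an illustration of the general technique, not Twistlock's actual algorithm, and the traffic figures are invented:

```python
# Illustrative anomaly detection on container-to-container traffic:
# learn a baseline of request rates, then flag rates that deviate
# far from it. Real systems use far richer models than mean/stdev.
from statistics import mean, stdev

def fit_baseline(samples: list) -> tuple:
    """Learn the mean and standard deviation of historical rates."""
    return mean(samples), stdev(samples)

def is_anomalous(rate: float, baseline: tuple, threshold: float = 3.0) -> bool:
    """Flag rates more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(rate - mu) > threshold * sigma

# Hypothetical requests-per-minute samples between two services.
history = [100, 95, 110, 105, 98, 102]
baseline = fit_baseline(history)
```

With a learned baseline like this, a sudden spike from a compromised container stands out immediately, and the firewall can cut that container's access before the breach spreads.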
Features like CNAF and CNNF are what container security needs. As with the rest of your development pipeline, manual effort only goes so far. To truly operate at massive scale with containers, you need to automate security.
As you prioritize quality and speed, you’ll see quality moving to the left of your pipeline. This shift-left approach is built on strong automation from the very start. Automated builds and automated testing should trickle down to automated infrastructure management. And finally, you can’t ignore security measures for containers. Just as with the rest of the pipeline, your security should be automated with tools like Twistlock’s CNAF and CNNF.