What The Cloud-Native Developer Needs to Know About Quality


How do you manage software quality in today’s age of dynamic, complex, cloud-native applications?

Part of the answer lies in adopting QA tools and processes that can meet the unique challenges of cloud-native architectures, of course. But the burden doesn’t lie with QA engineers alone. Developers must also do their part to help ensure that the cloud-native applications they design and build are feasible for QA teams to work with.

Toward that end, here’s a primer on what cloud-native developers can do to help improve application quality.

Software quality in a cloud-native world

If you’ve ever helped build, test, or deploy a cloud-native application—meaning one that is deployed on distributed infrastructure and usually designed using a microservices architecture—you’re already familiar with the special challenges that it poses in the realm of software quality.

Those challenges stem from the fact that while each individual service in a cloud-native app is simpler than a monolith, the infrastructure requirements and the interactions between services are much more complex than in monolithic, on-premises applications. Microservices mean that there are more moving parts to manage. Each part must be tested individually, but the parts must also be tested as a whole to ensure that they interact as required.

In addition, distributed hosting environments (which involve multiple host servers as well as software-defined layers of networking, storage, and other infrastructure) introduce more variables that must be tested to guarantee software quality. Instead of worrying only about how physical hardware impacts an application, QA teams in a cloud-native world must also account for how a scale-out storage system, or a container runtime and orchestrator, affect application quality and reliability.

Developers’ role in cloud-native QA

The cloud-native paradigm has been prevalent for several years, and many QA teams have adapted to it. They use parallel testing and automated testing to handle the added complexity of cloud-native apps. They rely in some cases on headless testing tools that perform tests quickly, especially in environments without a graphical interface. They run tests on cloud-based test grids, so that tests can execute at the speed and scale of the cloud-native apps they support.

These strategies are all well and good. At the end of the day, however, there is only so much that QA engineers can do to help guarantee software quality for cloud-native applications. Developers must also do their part to ensure that the cloud-native applications they produce can be properly tested by QA before they are delivered, and that problems detected during testing can be addressed efficiently.

Keep reading for some best practices that developers can follow to make this happen.

Branch intelligently

In a complex, cloud-native application delivery pipeline, branching is critical to helping maintain sanity. Branching refers to the separation of a codebase into different branches, or segments, depending on the intended purpose of each segment. For example, it is common to have one branch of your application code for production use, and another “development” branch where developers are testing new features that are not yet ready for production use.

By branching complex codebases intelligently, developers make it easier for QA teams to see through the sprawl of a microservices-based codebase and determine which items deserve the greatest priority. For example, a production branch would typically require more thorough testing than a development branch, since only the former is intended to reach end users in its current state.
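
To make this concrete, here is a minimal sketch of a branch-aware test runner in Python. It assumes the CI system exposes the current branch through an environment variable (GIT_BRANCH is a stand-in for whatever your CI tool provides) and that test suites live in conventional directories; both are illustrative assumptions, not prescriptions:

    import os
    import subprocess

    # GIT_BRANCH is a stand-in for whatever variable your CI system provides.
    branch = os.environ.get("GIT_BRANCH", "development")

    if branch in ("main", "production"):
        # Production branches get the full suite: unit, integration, end-to-end.
        suites = ["tests/unit", "tests/integration", "tests/e2e"]
    else:
        # Development branches get a faster, unit-level pass.
        suites = ["tests/unit"]

    # Assumes pytest is installed and the suite directories exist.
    subprocess.run(["pytest", *suites], check=True)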

Decide what to support, and what not to support

In theory, part of the point of cloud-native is to make applications infrastructure-agnostic. If your application can run in the cloud, then it should be able to run anywhere, without requiring specific types of host operating systems or servers.

Being able to “build once, run anywhere” is a laudable goal. In reality, though, most businesses are going to deploy their applications only within specific environments, and they know what those environments are ahead of time. Maybe your team is going to host its cloud-native apps on AWS using EC2 instances built from AMIs. Or maybe you are going to deploy containers managed by Kubernetes via AKS on Azure.

Identifying specific types of deployment environments that an application needs to support is critical for keeping QA testing feasible. If developers expect QA to test their applications on every possible type of infrastructure and operating system out there, they are likely to be disappointed, unless they are working within a large enterprise that can support massive QA operations. For everyone else, limiting application deployment strategies to specific types of environments will help ensure that the QA team can focus on testing for those environments.
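
One lightweight way to capture that decision is to declare the supported environments directly in test code, so that QA’s coverage matrix is explicit and version-controlled. The sketch below uses pytest’s parametrization; the environment names and fields are hypothetical:

    import pytest

    # Hypothetical list of the deployment targets this app officially supports.
    # QA tests run only against these, not every conceivable platform.
    SUPPORTED_ENVIRONMENTS = [
        {"name": "aws-ec2", "os": "amazon-linux-2", "arch": "x86_64"},
        {"name": "azure-aks", "os": "container", "arch": "x86_64"},
    ]

    @pytest.mark.parametrize("env", SUPPORTED_ENVIRONMENTS,
                             ids=lambda e: e["name"])
    def test_app_boots(env):
        # Placeholder check; a real test would provision or target env["name"].
        assert env["arch"] == "x86_64"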

Developers may periodically decide to change their deployment environment, of course. That’s fine. QA engineers can adapt to this change, as long as they are kept in the loop. But QA engineers can’t thrive in an environment where developers expect them to be ready to support every type of deployment environment out there.

Enable API mocking

Most cloud-native, microservices-based applications rely heavily on APIs to allow the various components of an application to interact with each other. For this reason, testing how different parts of an application interact via an API is critical for ensuring that the application performs as expected.

Unfortunately for software testers, it’s not always possible or practical to issue actual calls against a “live” API during testing. Instead, QA engineers need to rely on API “mocking,” which allows them to simulate an API.

There are various API mocking and testing tools available to help QA teams with this task. However, developers must ensure that the apps they build support those tools. Depending on the API protocol an app uses, the languages or frameworks it is written in, and the way it is architected, not every mocking tool will work with it.

The point here is that cloud-native developers should communicate with QA engineers to determine their API mocking needs and make sure that, to the extent feasible, applications are designed and written in a way that lets the QA team perform whatever level of API mocking it requires during testing.
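
As a simple illustration of what mock-friendly code looks like, the sketch below isolates an HTTP call behind a small function so it can be simulated with Python’s standard unittest.mock. It assumes the third-party requests library, and the service URL and payload are invented for the example:

    from unittest.mock import Mock, patch

    import requests

    def get_user_name(user_id):
        # Calls a (hypothetical) internal user service over its REST API.
        resp = requests.get(f"https://users.internal/api/v1/users/{user_id}")
        resp.raise_for_status()
        return resp.json()["name"]

    def test_get_user_name_with_mocked_api():
        # Simulate the user service instead of issuing a live API call.
        fake = Mock(status_code=200)
        fake.json.return_value = {"name": "Ada"}
        fake.raise_for_status.return_value = None
        with patch("requests.get", return_value=fake):
            assert get_user_name(42) == "Ada"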

Enable fast feedback between QA and developers

When QA tests reveal a problem within cloud-native code, the QA team must be able to explain the problem to developers efficiently, so that it can be addressed without holding up the delivery pipeline.

Specific means of doing this include strategies such as GitOps, which lets developers, QA teams, and other application stakeholders rely on Git as a central communication tool for discussing and tracking application changes. Chatbots, which can automate announcements about problems like a failed test, can also help enable efficient QA-developer communication.
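
For instance, a test pipeline might push failure notices into the team’s chat tool through an incoming webhook. The sketch below assumes a generic JSON webhook; the URL and message format are placeholders rather than any particular chat product’s API:

    import json
    import urllib.request

    # Hypothetical incoming-webhook URL for the team's chat tool.
    WEBHOOK_URL = "https://chat.example.com/hooks/qa-alerts"

    def announce_failed_test(test_name, commit_sha, log_url):
        # Push a short, actionable message so developers see failures quickly.
        message = {
            "text": f"Test failed: {test_name} at {commit_sha}. Logs: {log_url}"
        }
        req = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps(message).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    # Example: announce_failed_test("test_checkout", "a1b2c3d", "https://ci.example.com/runs/123")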

Manage everything as code

Last but not least, developers can simplify the work of the QA team by adopting an “everything-as-code” strategy. This means relying on code not just to write the actual application, but also to configure environments and deploy applications (as opposed to configuring and deploying applications manually).

When everything within the application delivery pipeline is defined as code, it is easier to test, because QA engineers can look at configuration files and see exactly what they need to test for. In addition, managing all operations using code can make it easier to write automated tests that use those same configurations to determine which conditions to test for.
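
As a rough sketch of the idea, the deployment configuration below (its keys and values invented for illustration) doubles as the source of truth for what automated tests should verify:

    import json

    # Hypothetical deployment configuration, checked into the repo with the app.
    DEPLOY_CONFIG = json.loads("""
    {
        "replicas": 3,
        "memory_limit_mb": 512,
        "endpoints": ["/healthz", "/api/v1/orders"]
    }
    """)

    def test_conditions_from_config(config):
        # Derive what to verify directly from the same config used to deploy,
        # so tests and deployments cannot drift apart.
        checks = [f"expect {config['replicas']} healthy replicas"]
        checks += [f"expect HTTP 200 from {path}" for path in config["endpoints"]]
        return checks

    for check in test_conditions_from_config(DEPLOY_CONFIG):
        print(check)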

Conclusion

QA engineering in a cloud-native world is hard work. Cloud-native developers can make it a little easier by designing application architectures and pipelines that lend themselves to efficient, predictable QA testing.


Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO.

