Best Practices for Shifting Performance Testing Left

When following the Agile model of development, work flows from left to right. On the left sit requirements gathering, design, and development; on the right sit testing and production. The further right a bug travels through the development cycle, the more it costs to fix: roughly $1 in the requirements gathering phase, almost $100 in development, and upwards of $10,000 if you catch it in production. That $10,000 could easily wipe out a day's, or even a week's, worth of profit!

One way to keep this cost snowball from forming is to automate the testing process. By testing regularly throughout development, you greatly reduce the number of bugs that creep in. Pushing unit and functional tests to the left is a no-brainer: low effort, high returns. But what about performance testing? Surely there's no need for it at such an early stage.

Wrong! Users today judge your product not only on how unique and beautiful it looks, but also on how efficiently (read: fast) it performs. By shifting performance testing to the left, teams can catch the root causes of bottlenecks and congestion at a very early stage, when functions are far less complex and less time-consuming to rewrite or fix.

5 TIPS TO HELP YOU START SHIFTING LEFT

1. GATHER RELEVANT DATA

Performance testing works by capturing snapshots of network requests and analyzing the resulting metrics. To recreate a realistic scenario for every performance test case, you need live data. Once captured, that data must be cleaned of any identifying or sensitive information: email addresses, phone numbers, social media accounts, and keys must all be removed or randomized before the data is handed to performance testing. You also need to make sure the data is captured at the right times. The optimal windows are periods of median and peak usage that are free from network threats, DDoS attacks, and 500 errors!
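As a concrete illustration, here is a minimal sketch of that scrubbing step. The field names, regular expressions, and record shape below are assumptions for illustration only; adapt them to whatever format your capture tooling produces.

```python
import json
import re
import uuid

# Illustrative patterns; real capture data will need patterns tuned to it.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(record: dict) -> dict:
    """Replace identifying values with random placeholders before the
    record is reused in performance tests."""
    text = json.dumps(record)
    text = EMAIL_RE.sub(lambda _: f"user-{uuid.uuid4().hex[:8]}@example.com", text)
    text = PHONE_RE.sub("000-000-0000", text)
    scrubbed = json.loads(text)
    # Drop secrets outright rather than randomizing them.
    scrubbed.pop("api_key", None)
    return scrubbed

if __name__ == "__main__":
    captured = {"user": "jane@corp.com", "phone": "+1 (555) 123-4567",
                "api_key": "sk-abc123", "path": "/checkout"}
    print(scrub(captured))
```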

2. SELECT THE RIGHT TOOL

Front-end performance testing typically requires teams to add yet another tool to their testing stack, forcing them to maintain multiple tools and test suites, which increases complexity and drags down developer productivity. The Sauce Performance tool lets developers use their existing Selenium test scripts to capture both functional and performance metrics, point-in-time as well as averaged over time. With it, developers do not need to learn a new framework or adopt another tool to monitor performance. Sauce Performance also displays results in an easy-to-consume visual format, allowing teams to compare results over a time period, watch how the web page gets constructed in real time, and break requests down by type.
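To see why reusing functional scripts is so cheap, consider that even a plain Selenium session can pull point-in-time metrics from the browser's standard Navigation Timing API. The sketch below is not the Sauce Performance API itself; it assumes a local ChromeDriver, a placeholder URL, and an arbitrary 3-second budget.

```python
from selenium import webdriver

# Minimal sketch: reuse a functional Selenium session to read performance
# numbers from the W3C Navigation Timing API.
driver = webdriver.Chrome()
try:
    driver.get("https://www.example.com")  # placeholder URL

    # PerformanceNavigationTiming is a web standard, so this works in any
    # modern Chromium-based browser without extra tooling.
    nav = driver.execute_script(
        "return performance.getEntriesByType('navigation')[0].toJSON();"
    )
    load_ms = nav["loadEventEnd"] - nav["startTime"]
    print(f"page load: {load_ms:.0f} ms")

    assert load_ms < 3000, "page load exceeded the 3 s budget"  # assumed budget
finally:
    driver.quit()
```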

3. MOVE TESTING TO THE CLOUD

Performance testing on local machines is the surest way to corrupt test results: background processes, hardware differences, and local network noise all skew the numbers. Beyond automating individual test cases, entire test suites must be moved to the cloud, and the easiest way to get there is to build a solid CI/CD process. Testing in the cloud means you can keep unlimited logs and videos of failed tests, pay for the service only while testing, customize network conditions, and, most importantly, replicate production scenarios. It also gives you the chance to build statistical models from test results: unlimited storage and Big Data tooling allow patterns and root causes to surface without human intervention, as sketched below.
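One simple way to make such a model concrete is to keep a baseline of a metric across cloud runs and flag regressions automatically. This sketch uses a z-score against historical load times; the history values and threshold are illustrative assumptions, not measured data.

```python
import statistics

def is_regression(history: list[float], latest: float,
                  z_threshold: float = 3.0) -> bool:
    """Flag `latest` when it sits more than `z_threshold` standard
    deviations above the historical mean. Threshold is illustrative."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest > mean
    return (latest - mean) / stdev > z_threshold

# Example: load times (ms) from previous cloud runs vs. the latest run.
baseline = [812.0, 798.0, 845.0, 830.0, 805.0]
print(is_regression(baseline, 1450.0))  # True: likely regression
print(is_regression(baseline, 840.0))   # False: within normal variance
```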

4. CREATE USER PERSONAS

Tools built today are meant for global consumption. That means your product needs to be tested against users with flaky network conditions, low CPU speeds, and outdated software. Performance testing lets you step outside your happy-path product scenarios and consider the environmental factors that cause strange results for real users. The Sauce Performance tool can capture end-user metrics such as Time to First Meaningful Paint, Time to First Interactive, Page Weight, Speed Index, and much more. Comparing these metrics across varying environmental factors shows you how your product performs for a wide variety of users.
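One way to approximate such a persona with plain Selenium and a Chromium browser is to throttle the network and CPU through Chrome DevTools. This is a sketch, not the Sauce Performance workflow; the persona numbers and URL are assumptions, and both helpers are Chromium-only.

```python
from selenium import webdriver

# Sketch: emulate a "slow 3G, budget phone" persona in Chrome.
driver = webdriver.Chrome()
try:
    # Network throttling via Selenium's Chromium-specific helper.
    driver.set_network_conditions(
        offline=False,
        latency=400,                     # ms of added round-trip delay
        download_throughput=400 * 1024,  # ~400 KB/s down
        upload_throughput=200 * 1024,    # ~200 KB/s up
    )
    # CPU throttling via the Chrome DevTools Protocol (4x slowdown).
    driver.execute_cdp_cmd("Emulation.setCPUThrottlingRate", {"rate": 4})

    driver.get("https://www.example.com")  # placeholder URL
    nav = driver.execute_script(
        "return performance.getEntriesByType('navigation')[0].toJSON();"
    )
    print("load under persona:", nav["loadEventEnd"] - nav["startTime"], "ms")
finally:
    driver.quit()
```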

5. MAKE IT A PART OF THE DEVELOPMENT PROCESS

You can set goals and targets, and you can have organizational buy-in, but if you don't make performance testing part of the development mindset, there's only so far you can go. First, define SLAs at the component level in addition to the application level. This helps developers understand the impact of code changes on the individual components they are building, allowing them to take responsibility for the entire development stack.

Teams must integrate performance testing into the build process so that basic tests run in CI/CD on every code check-in, with fuller performance suites running bi-weekly or monthly. Finally, developers must OWN the performance of their modules; it cannot be offloaded to the testing team. Developers must be responsible for creating the API endpoint and database tests that performance testing can immediately leverage, as in the sketch below.
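A component-level SLA can be encoded directly as a test the CI pipeline runs on every check-in. This sketch assumes hypothetical `/api/search` and `/api/cart` endpoints, a staging host, and a 300 ms budget; all of these are illustrative placeholders.

```python
import time

import pytest
import requests

BASE_URL = "https://staging.example.com"  # assumed staging host
SLA_MS = 300                              # assumed component-level budget

@pytest.mark.parametrize("endpoint", ["/api/search", "/api/cart"])
def test_endpoint_meets_sla(endpoint):
    """Fail the build when a component blows its latency budget."""
    start = time.perf_counter()
    response = requests.get(BASE_URL + endpoint, timeout=5)
    elapsed_ms = (time.perf_counter() - start) * 1000

    assert response.status_code == 200
    assert elapsed_ms < SLA_MS, (
        f"{endpoint} took {elapsed_ms:.0f} ms (SLA {SLA_MS} ms)"
    )
```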

SUMMARY

End users don't want updates whose release notes repeatedly say "performance improvements." They crave features that scale from day zero. By shifting performance testing left, you ensure users get the best experience from the beginning, because the people building the application are continuously thinking about performance.


Swaathi Kakarla is the co-founder and CTO at Skcript. She enjoys talking and writing about code efficiency, performance, and startups. In her free time she finds solace in yoga, bicycling, and contributing to open source. Swaathi is a regular contributor at Fixate IO.

