As a codebase changes, we often want to verify that we have not regressed performance, or that a performance improvement actually worked. In this blog post we discuss a few different strategies for doing this.
Load tests tend to be resource intensive (e.g. network bandwidth, CPU, memory, storage) since by definition we are attempting to stress our system. At the same time we want to be strategic and make sure the tests we run are carefully thought out and meet our goals.
Define the Scenario(s)
Before deciding how often and how large a load test to run, we need to come up with the scenario that we want to execute at scale. Maybe that is purchasing an item on a website, running a search, hitting an API, etc.
Think about goals
Coming up with a scenario and then creating a load test that tries to break our system running that scenario under heavy load is a common strategy. While this kind of test is useful, a more refined strategy can meet a team’s goals more efficiently (i.e. more cheaply) and provide a tighter feedback loop.
For example, to ensure that code changes do not increase an API’s response latencies, we might run a load test with a small number (even 1) of concurrent clients before we run a full-scale load test. If latency spikes under this smaller test we can confidently say there is an issue without spending lots of resources on a big load test. It is also the type of test that can be run often, potentially even on each commit.
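As a minimal sketch of that idea, here is what a single-client latency smoke check might look like in Python. The function name, the iteration count, and the p95 budget are all assumptions for illustration; in CI you would wire `send_request` to your freshly deployed endpoint and fail the build when the check fails.

```python
import time


def latency_smoke_check(send_request, iterations=50, p95_budget_ms=250.0):
    """Run one sequential client and check a p95 latency budget.

    send_request: a callable that performs a single request against the API.
    Returns (passed, p95_ms). Hypothetical helper, not a Testable API.
    """
    samples_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        send_request()
        samples_ms.append((time.perf_counter() - start) * 1000.0)
    samples_ms.sort()
    # 95th percentile via the nearest-rank method
    p95 = samples_ms[max(0, int(0.95 * len(samples_ms)) - 1)]
    return p95 <= p95_budget_ms, p95
```

Because a single client generates almost no load, this check is cheap enough to run on every commit, and a spike here is a strong early signal that something regressed.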
Some suggested strategies
A recommended strategy for ensuring your product performs well is to configure the following tests as part of your continuous integration workflow:
1. Small test on every commit. Simulate 1-5 users looking for spikes in latencies, unusual bandwidth usage, and throughput bottlenecks. Because this test should be small, schedule it via your CI tool to run on every commit after deploying the build to a small isolated environment.
2. Medium test once a day. Take advantage of periods of low usage if you have them; for example, target your QA environment with a medium-sized load test each night. This can spot issues that simply weren’t noticeable under a smaller amount of load.
3. Large test before a release. Run a large load test that simulates your peak production volume (ideally plus a buffer). Doing this often can get expensive, but once per release cycle is a great practice and might catch issues that did not occur under a smaller amount of load. Again, target this test at a non-production environment, maybe QA or staging.
4. Production stability test once a day. This kind of test is useful for products that tend to get usage spikes at predictable times of day and very little usage the rest of the day. Take advantage of that quiet time to run a load test that validates that you can handle the expected max load. If this test passes you can feel confident you are ready for the next day’s peak workload.
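One way to keep these four tiers consistent across pipelines is to express them as data that your CI scripts look up by trigger. The tier names, user counts, and environment names below are illustrative assumptions, not values from Testable; tune them to your own traffic.

```python
# Hypothetical tier table: CI trigger -> load test size and target environment.
# The numbers here are placeholders; size them against your real peak traffic.
TIERS = {
    "commit":      {"virtual_users": 5,    "target_env": "isolated"},   # every commit
    "nightly":     {"virtual_users": 100,  "target_env": "qa"},         # once a day
    "release":     {"virtual_users": 1000, "target_env": "staging"},    # before release
    "prod-stability": {"virtual_users": 1000, "target_env": "production"},  # quiet hours
}


def pick_tier(trigger):
    """Return the load test configuration for a given CI trigger."""
    if trigger not in TIERS:
        raise ValueError("unknown trigger: " + trigger)
    return TIERS[trigger]
```

A CI job would call `pick_tier("commit")` on push, `pick_tier("nightly")` from a scheduled job, and so on, so the sizing policy lives in one place rather than being duplicated across pipeline definitions.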
Testable and Continuous Integration
Testable provides trigger URLs to allow for easy integration with any CI tool out there. Read this for more details.
And that’s it! Please reach out if you have any questions or comments.