Getting Started - Locust
Start by signing up and creating a new test case using the New Test Case button on the dashboard or Test Case -> New... in the left side panel.
Enter the test case name (e.g. Locust Demo) and press Next.
Select Locust file as the scenario type.
Let's use the following settings:
- Source: Upload Files + Create/Edit. See the detailed documentation for all the options for getting your code onto Testable.
- Host: http://sample.testable.io (the `--host` argument to Locust)
- Concurrent Clients: 5 (the `--clients` argument to Locust)
- Hatch Rate: 1 (the `--hatch-rate` argument to Locust)
- # of Requests: 25 (the `--num-request` argument to Locust)
We will use the example `locustfile.py` that gets populated by default, but add one more API call. This is found in the Locust File section of the scenario definition.
```python
from locust import HttpLocust, TaskSet, task, events

class MyTaskSet(TaskSet):
    @task(1)
    def ibm(self):
        self.client.get("/stocks/IBM")

    @task(1)
    def msft(self):
        self.client.get("/stocks/MSFT")

class MyLocust(HttpLocust):
    task_set = MyTaskSet
    min_wait = 1000
    max_wait = 2000
```
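To make the timing and task selection in the class above concrete, here is a small standalone sketch (plain Python, not Locust itself; the constants mirror the class above, and the uniform-random wait is an approximation of Locust's behavior):

```python
import random

# Approximate sketch of what MyLocust's settings mean (not Locust code):
# min_wait/max_wait (in ms) give each simulated user a random pause
# between tasks; equal @task(1) weights make each endpoint equally likely.

MIN_WAIT_MS, MAX_WAIT_MS = 1000, 2000
TASKS = ["/stocks/IBM", "/stocks/MSFT"]  # one entry per @task(1) method

def next_wait_ms():
    # A random wait between min_wait and max_wait milliseconds
    return random.uniform(MIN_WAIT_MS, MAX_WAIT_MS)

def next_task():
    # With equal weights, each task is picked with equal probability
    return random.choice(TASKS)
```

So with these settings, each of the 5 concurrent users pauses 1-2 seconds between requests and alternates randomly between the two stock endpoints.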
And that's it, we've now defined our scenario! To try it out before configuring a load test click the Smoke Test button in the upper right and watch Testable execute the scenario with 1 concurrent user. You should see all results including logging, network traces, etc appear in real time as the smoke test runs on one of our shared test runners.
Next, click on the Configuration tab or press the Next button at the bottom to move to the next step.
Now that we have the scenario for our test case we need to define a few parameters before we can execute our test:
- Load Profile: Select Flat to ramp up to a constant number of Locust instances for the test.
- Locust Instances Per Region: Number of Locust instances to start per region. Each instance will generate load with the parameters defined in our scenario (5 concurrent users, 1 user/sec hatch rate, 25 total requests). So for example 2 Locust instances in this case would simulate 10 concurrent users.
- Test Runners: Choose the test runners that will run this test (e.g. on our public shared grid). Each test runner region will simulate the number of Locust instances defined above.
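The instance math described above can be sketched in a few lines of Python (the values are the ones assumed in this example):

```python
# Illustrative arithmetic for this example's configuration.
clients_per_instance = 5     # Concurrent Clients in the scenario
requests_per_instance = 25   # "# of Requests" in the scenario
instances_per_region = 2     # Locust Instances Per Region

# Each instance runs the full scenario, so totals scale linearly.
total_concurrent_users = instances_per_region * clients_per_instance
total_requests = instances_per_region * requests_per_instance

print(total_concurrent_users)  # 10 concurrent users per region
print(total_requests)          # 50 total requests per region
```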
For the sake of this example, let's use a Flat load profile with 2 Locust instances per region. And that's it! Press Start Test and watch the results start to flow in. See the configuration guide for full details of all configuration options.
Once the test starts executing, Testable will distribute the work out to the selected test runners (e.g. the Public Shared Grid in AWS N. Virginia).
In each region, the test runners execute 2 separate Locust instances concurrently. The results will include traces, performance metrics, logging, breakdown by URL, analysis, comparison against previous test runs, and more.
Check out the Locust guide for more details on running your Locust tests on the Testable platform.
We also offer integrations (Settings -> Integration) with third-party tools like New Relic. If you enable an integration, you can do more in-depth analysis of your results there as well.
That's it! Go ahead and try these same steps with your own scripts and feel free to contact us with any questions.