Getting Started - Locust

Follow this guide for a basic example of running your Locust tests on the Testable platform. Our example project will test our sample REST API.

Start by signing up and creating a new test case using the New Test Case button on the dashboard or Test Case -> New... in the left side panel.

Enter the test case name (e.g. Locust Demo) and press Next.

Scenario

Select Locust file as the scenario type.

Locust Scenario

Let's use the following settings:

  1. Source: Upload Files + Create/Edit. See the detailed documentation for all the options for getting your code onto Testable.
  2. Host: http://sample.testable.io (the --host argument to Locust)
  3. Concurrent Clients: 5 (the --clients argument to Locust)
  4. Hatch Rate: 1 (the --hatch-rate argument to Locust)
  5. # of Requests: 25 (the --num-request argument to Locust)
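These settings map onto the standalone Locust command-line flags named above. A rough equivalent invocation (assuming a Locust 0.x release, which is the version the `HttpLocust` API used in this guide belongs to; the exact command Testable runs on its runners may differ) would be:

```shell
# Roughly what one Locust instance runs with the settings above
# (Locust 0.x flags; illustration only, not Testable's exact invocation).
locust -f locustfile.py \
  --host=http://sample.testable.io \
  --no-web \
  --clients=5 \
  --hatch-rate=1 \
  --num-request=25
```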

Locust Settings

We will use the example locustfile.py that is populated by default, but add one more API call. This is found in the Locust File section of the scenario definition.

from locust import HttpLocust, TaskSet, task

class MyTaskSet(TaskSet):
    @task(1)
    def ibm(self):
        self.client.get("/stocks/IBM")

    @task(1)
    def msft(self):
        self.client.get("/stocks/MSFT")

class MyLocust(HttpLocust):
    task_set = MyTaskSet
    min_wait = 1000  # wait between 1 and 2 seconds between tasks
    max_wait = 2000
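To build intuition for what this locustfile does: each simulated user repeatedly picks one of the @task methods at random, in proportion to its weight (here both are 1, so each endpoint gets roughly half the traffic), then sleeps a uniformly random interval between min_wait and max_wait milliseconds. The sketch below simulates that scheduling in plain Python (it is an illustration of the semantics, not Locust itself):

```python
import random

# Illustration only (not Locust code): how a TaskSet schedules work.
# Each @task(weight) method is picked at random in proportion to its
# weight; between tasks the user sleeps a uniform random interval
# between min_wait and max_wait (milliseconds).
tasks = {"ibm": 1, "msft": 1}    # task name -> @task weight
min_wait, max_wait = 1000, 2000  # ms, as in the locustfile above

def pick_task(rng):
    names = list(tasks)
    weights = [tasks[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

def think_time(rng):
    """Seconds to sleep between tasks."""
    return rng.uniform(min_wait, max_wait) / 1000.0

rng = random.Random(42)
picks = [pick_task(rng) for _ in range(1000)]
# With equal weights, each task should get roughly half the picks.
print(picks.count("ibm"), picks.count("msft"))
```

With unequal weights, e.g. @task(3) versus @task(1), the first task would be chosen about three times as often.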

And that's it, we've now defined our scenario! To try it out before configuring a load test, click the Smoke Test button in the upper right and watch Testable execute the scenario with 1 concurrent user. You should see all results, including logging, network traces, etc., appear in real time as the smoke test runs on one of our shared test runners.

Next, click on the Configuration tab or press the Next button at the bottom to move to the next step.

Configuration

Now that we have the scenario for our test case we need to define a few parameters before we can execute our test:

  1. Load Profile: Select Flat to ramp up to a constant number of Locust instances for the test.
  2. Locust Instances Per Region: Number of Locust instances to start per region. Each instance will generate load with the parameters defined in our scenario (5 concurrent users, 1 user/sec hatch rate, 25 total requests). So for example 2 Locust instances in this case would simulate 10 concurrent users.
  3. Test Runners: Choose the test runners that will run this test (e.g. on our public shared grid). Each test runner region will simulate the number of Locust instances defined above.
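The concurrency arithmetic in step 2 can be sketched as follows (assuming a single test runner region, as in this example):

```python
# Back-of-the-envelope concurrency math for this configuration.
clients_per_instance = 5   # Concurrent Clients from the scenario
instances_per_region = 2   # Locust Instances Per Region
regions = 1                # e.g. just Public Shared Grid, AWS N. Virginia

total_concurrent_users = clients_per_instance * instances_per_region * regions
print(total_concurrent_users)  # 10, matching the example in step 2
```

Adding more regions or more instances per region scales the simulated user count multiplicatively.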

For the sake of this example, let's use the following parameters:

Test Configuration

And that's it! Press Start Test and watch the results start to flow in. See the new configuration guide for full details of all configuration options.

View Results

Once the test starts executing, Testable will distribute the work out to the selected test runners (e.g. Public Shared Grid in AWS N. Virginia).

Test Results

In each region, the test runners execute 2 separate Locust instances concurrently. The results will include traces, performance metrics, logging, breakdown by URL, analysis, comparison against previous test runs, and more.

Check out the Locust guide for more details on running your Locust tests on the Testable platform.

We also offer integration (Settings -> Integration) with third-party tools like New Relic. If you enable an integration, you can do more in-depth analysis of your results there as well.

That's it! Go ahead and try these same steps with your own scripts and feel free to contact us with any questions.