Testable Update: AWS spot instances and better dashboards

This month we introduce the ability to generate load using AWS spot instances, plus improved dashboard sharing that lets you customize and share different views for different audiences.

AWS Spot Instances

Last month we introduced AWS On Demand test runners. This month we are adding support for AWS spot instances, saving you money on non-critical test workloads. Simply check the checkbox and specify your desired maximum bid price. The max bid defaults to the on demand price for the chosen instance type and region.

Look out for more test runners in the near future to support on demand instances on other platforms like the Microsoft Cloud. Click here for more detailed documentation on our full set of test runner options.

Better Dashboards

Dashboards are now shareable across all test cases in your account. Previously, each test case had its own set of dashboards.

When sharing results publicly, the selected dashboard is now associated with the share URL. This allows you to share the same results using several dashboards, with a unique URL for each. For example, you might want to share one high-level view with company executives and a different, more detailed view with your QA team.

Please check out all the new features and let us know if you have any thoughts or concerns. Happy testing!

Your AWS account or mine?

This month we introduce the ability to generate load on isolated, on demand AWS infrastructure, in either your AWS account or ours, with more cloud providers coming soon.

AWS On Demand Load Generation

Until now all load was generated on shared infrastructure: either shared across all accounts via our public grid, or across all tests within your account using our on premises solution.

We now have a third option, and while we were at it we also did a major internal overhaul to support new test runners in the future.

From today you can generate load from isolated, on demand AWS instances that are spun up for each test separately. Use your own AWS account or ours.

When running a test you can select multiple test runners. This means you can run your scenario from multiple AWS regions, our shared public grid, and within your data center (or VPC) all at the same time.

Look out for more test runners in the near future to support on demand instances on other platforms like the Microsoft Cloud. Click here for more detailed documentation on the new features.

Customize Percentiles

When creating a new test configuration you can now specify exactly which percentiles you want to calculate for all timing metrics. This includes our own metrics (e.g. latency) as well as any custom ones you define in your test.

While we were at it, we also improved the speed at which we process and aggregate results by 10x using a new approach for calculating percentiles.

Please check out all the new features and let us know if you have any thoughts or concerns. Happy testing!

Testable Update: Trends, Memory/CPU tracking, and more

Plenty of new features, fixes, and enhancements this month including the launch of Trends, agent memory and CPU tracking, and more.

Trends

View sparklines and metric history in your test results to get a sense for how metrics like latency and throughput have changed across recent test executions. Drill down by clicking on the sparkline to see the exact history. This feature is available for all metrics including user defined custom ones.

Agent Memory/CPU tracking

See how much memory and CPU was required to execute your test. Both metrics are available on the default dashboard layout as a chart and in the balloon text on the agent location map. See this information across your entire test or per region.

Usability improvements

Numerous changes have been made to the dashboard, test case, and test results pages to make things simpler and clearer.

Streaming smoke testing

When writing a script or uploading a JMeter test plan, it is useful to run a quick smoke test to make sure it works. This feature existed previously but did not work well for scripts where one iteration took longer than a minute. This has now been fixed, and smoke test results stream into the browser in real time, including full trace details.

Capacity increase, performance and reliability improvements

Numerous changes were made to further improve performance and reliability. We also increased our capacity to ensure we can scale to meet our clients' load testing demands.

JMeter improvements

JMeter test plans with long iteration times (> 1 minute) did not work well previously. This has now been remedied, and these tests should run smoothly.

Documentation updates

More documentation updates have been made during this release cycle to improve depth and coverage.

Please check out all the new features and let us know if you have any thoughts or concerns. Happy testing!

Monitor latency spikes with our API

We recently announced the launch of our new developer API. Check out the documentation for more details on all the functionality and how to get started.

In this post we focus on using the API in the following scenario:

  1. Start a test
  2. Wait for it to finish. If the median latency (technically, the time until the first byte is received) rises above 1 second while the test is running (once we have at least 5 results), stop the test immediately.
  3. Save all results to a CSV.

Since this kind of thing is often scripted and run from a CI tool, we will demonstrate how to do this using a bash script.

Step 1: Generate an API Key

If you haven’t done so already, log in to the Testable website and generate a new API Key under Settings -> API Keys.

Step 2: Create a Test Case

Let’s use the website to create a test case that we want to run. Note that this could also be done via the configuration and scenario APIs.

On the website press the New Test Case button. Let’s use the following details for each step:

  1. Target URL
    1. Name: API Demo
    2. URL: http://sample.testable.io/stocks/IBM
  2. Scenario: HTTP GET http://sample.testable.io/stocks/IBM
  3. Configuration
    1. Name: Monitor Latency Demo
    2. Concurrent Clients Per Region: 5
    3. Regions: AWS N. Virginia
    4. Duration: 2 minutes

Run the test once via the website to make sure it works as expected. On the results page click the Configuration link (the text should read ‘Monitor Latency Demo’) and take a look at the page URL, for example: https://a.testable.io/test/193. Here 193 is the configuration ID we will need in our bash script.

Step 3: Write the Monitoring Script

Make sure you have jq (a handy Unix utility for parsing JSON on the command line) installed or the script will not work. If you don’t have it already, it is available from most package managers, for example:
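
# Installing jq -- pick the line that matches your platform (assumes apt-get or Homebrew is available)
sudo apt-get install jq    # Debian/Ubuntu
brew install jq            # macOS with Homebrew

Our script looks as follows: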

#!/bin/bash

echo "[$(date)] Start a new execution for existing configuration"
execution_id=$(curl -H "X-Testable-Key:$API_KEY" -X POST --silent https://api.testable.io/test-confs/$CONFIGURATION_ID/executions | jq -r ".id")

# This next part keeps checking the median first received latency (once we have at least 5 results) until the test is done.
# If it goes above 1 second it stops the execution.

echo "[$(date)] Waiting for execution to complete (view online at     https://a.testable.io/results/$execution_id)"
echo "[$(date)] Will stop execution if the median latency (first byte received) goes above 1 second"
while : ; do
  echo -n "."
  sleep 5
  details=$(curl -H "X-Testable-Key:$API_KEY" --silent https://api.testable.io/executions/$execution_id)
  running=$(echo "$details" | jq -r ".running")
  if [[ $running = "true" ]]; then
    count=$(echo "$details" | jq -r ".summary.count")
    latency=$(echo "$details" | jq -r '.summary.metrics | .[] | select(.metricDef=="firstReceivedMs") | .metricValueMap.p50')
    # Use numeric comparisons here; the latency value may be fractional, so round it to an integer first
    if [[ $count -ge 5 && $(printf "%.0f" "$latency" 2>/dev/null) -gt 1000 ]]; then
      echo "[$(date)] Median latency up to $latency ms, stopping execution"
      curl -H "X-Testable-Key:$API_KEY" -X PATCH --silent https://api.testable.io/executions/$execution_id/stop &>/dev/null
      echo "[$(date)] Stopped execution"
    fi
  fi
  [[ $(echo "$details" | jq -r ".completed") = "false" ]] || break
done

epoch=$(date +"%s")
echo "[$(date)] Storing CSV results at results-$epoch.csv"
curl -H "X-Testable-Key:$API_KEY" --silent https://api.testable.io/executions/$execution_id/results.csv > results-$epoch.csv

It relies on API_KEY and CONFIGURATION_ID environment variables being set prior to running it. The script has comments documenting each step so it should be pretty clear.
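
For example, assuming the script above has been saved as monitor-latency.sh and marked executable (the filename and key value here are just placeholders):

export API_KEY=xxxxxxxxxxxxxxxx      # the API key generated in Step 1
export CONFIGURATION_ID=193          # the configuration ID found in Step 2
./monitor-latency.sh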

Hopefully this gives you an idea of the type of thing you can easily accomplish with our API. There are several more examples in our documentation under Documentation -> API -> Bash Examples.