Monitor latency spikes with our API

March 23, 2016

We recently announced the launch of our new developer API. Check out the documentation for more details on all the functionality and how to get started.

In this post we focus on using the API in the following scenario:

  1. Start a test
  2. Wait for it to finish. If the median latency (technically the time until the first byte is received) rises above 1 second while the test is running (once we have at least 5 results), stop the test immediately.
  3. Save all results to a CSV.

Since this kind of thing is often scripted and run from a CI tool, we will demonstrate how to do this using a bash script.

Step 1: Generate an API Key

If you haven’t done so already, log in to the Testable website and generate a new API Key under Settings -> API Keys.
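The script later in this post reads the key from an API_KEY environment variable (and the configuration ID, which we will find in the next step, from CONFIGURATION_ID). One way to set them up, with placeholder values:

```shell
# Placeholder values - substitute your real API key and configuration ID
export API_KEY="your-api-key-here"
export CONFIGURATION_ID="193"
```

In a CI tool you would typically inject these as secret/build variables rather than hard-coding them in the script.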

Step 2: Create a Test Case

Let’s use the website to create a test case that we want to run. Note that this could also be done via the configuration and scenario APIs.

On the website press the New Test Case button. Let’s use the following details for each step:

  1. Target URL
    1. Name: API Demo
    2. URL: the URL you want to load test
  2. Scenario: HTTP GET
  3. Configuration
    1. Name: Monitor Latency Demo
    2. Concurrent Clients Per Region: 5
    3. Regions: AWS N. Virginia
    4. Duration: 2 minutes

Run the test once via the website to make sure it works as expected. On the results page click the Configuration link (the text should read ‘Monitor Latency Demo’) and take a look at the page URL: it ends with the configuration ID (193 in our example), which we will need in our bash script.
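If you would rather grab that trailing ID in a script than by eye, a shell parameter expansion will strip it from the end of the page URL. The URL below is a hypothetical stand-in, since the exact address depends on your account:

```shell
# Hypothetical configuration page URL - the real one comes from your browser
url="https://example.com/configurations/193"
# ${url##*/} removes everything up to and including the last "/"
config_id="${url##*/}"
echo "$config_id"   # prints: 193
```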

Step 3: Write the Monitoring Script

Make sure you have jq (a handy Unix JSON parsing utility) installed or the script will not work. Our script looks as follows:
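If jq is new to you, here is a quick sanity check using a toy JSON document (not a real Testable API response) that mirrors how the script pulls out individual fields:

```shell
# -r prints raw values without JSON quoting
echo '{"id": 42, "running": true}' | jq -r ".id"        # prints: 42
echo '{"id": 42, "running": true}' | jq -r ".running"   # prints: true
```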


# NOTE: the API endpoint URLs were stripped from the original post; fill these
# in from the Testable API documentation before running.
CONFIGURATIONS_URL="…"
EXECUTIONS_URL="…"

echo "[$(date)] Start a new execution for existing configuration"
execution_id=$(curl -H "X-Testable-Key:$API_KEY" -X POST --silent "$CONFIGURATIONS_URL/$CONFIGURATION_ID/executions" | jq -r ".id")

# This next part keeps checking the median first received latency, once we have
# at least 5 results, until the test is done. If it goes above 1 second it
# stops the execution.

echo "[$(date)] Waiting for execution to complete (execution ID: $execution_id)"
echo "[$(date)] Will stop execution if the median latency (first byte received) goes above 1 second"
while : ; do
  echo -n "."
  sleep 5
  details=$(curl -H "X-Testable-Key:$API_KEY" --silent "$EXECUTIONS_URL/$execution_id")
  running=$(echo "$details" | jq -r ".running")
  if [[ $running = "true" ]]; then
    count=$(echo "$details" | jq -r ".summary.count")
    latency=$(echo "$details" | jq -r '.summary.metrics | .[] | select(.metricDef=="firstReceivedMs") | .metricValueMap.p50')
    # Numeric comparisons need -ge/-gt; inside [[ ]] the > operator compares strings.
    # ${latency%.*} drops any decimal part so the integer comparison works.
    if [[ ${count:-0} -ge 5 && ${latency%.*} -gt 1000 ]]; then
      echo "[$(date)] Median latency up to $latency ms, stopping execution"
      curl -H "X-Testable-Key:$API_KEY" -X PATCH --silent "$EXECUTIONS_URL/$execution_id/stop" &>/dev/null
      echo "[$(date)] Stopped execution"
    fi
  fi
  [[ $(echo "$details" | jq -r ".completed") = "false" ]] || break
done

epoch=$(date +"%s")
echo "[$(date)] Storing CSV results at results-$epoch.csv"
curl -H "X-Testable-Key:$API_KEY" --silent "$EXECUTIONS_URL/$execution_id/results.csv" > results-$epoch.csv

It relies on API_KEY and CONFIGURATION_ID environment variables being set prior to running it. The script has comments documenting each step so it should be pretty clear.
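One bash subtlety worth flagging when comparing the result count or latency values: inside [[ ]] the > operator compares strings lexicographically, not numerically, so numeric thresholds should use -gt/-ge. A quick illustration:

```shell
#!/bin/bash
# Lexicographically, "9" sorts after "10" (because '9' > '1'), so this branch runs
if [[ "9" > "10" ]]; then echo "string compare says 9 > 10"; fi
# The numeric operator -gt gives the arithmetic answer
if [[ 9 -gt 10 ]]; then echo "unexpected"; else echo "9 is not greater than 10"; fi
```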

Hopefully this gives you an idea of the type of thing you can easily accomplish with our API. There are several more examples in our documentation under Documentation -> API -> Bash Examples.