Breaking Points

Introduction

Each Breaking Point defines a threshold for acceptable performance. An example breaking point: Median Response Time >= 1000ms. If any breaking point is hit during test execution, the test is automatically stopped.

Breaking points are configured per test configuration. Each configuration can have multiple breaking points.

Any metric can be used in building a breaking point, including custom, user-defined metrics.
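As a rough illustration only (this is not Testable's actual configuration format; breaking points are configured in the UI), a breaking point can be thought of as a metric, an aggregation, a comparison operator, and a threshold:

```typescript
// Hypothetical types for illustration; the field names are assumptions.
type Operator = '>=' | '>' | '<=' | '<';

interface BreakingPoint {
  metric: string;      // e.g. 'Response Time' or a custom metric name
  aggregation: string; // e.g. 'median', 'p95', 'mean'
  operator: Operator;
  threshold: number;   // e.g. 1000 (ms)
}

// The example from above: Median Response Time >= 1000ms
const example: BreakingPoint = {
  metric: 'Response Time',
  aggregation: 'median',
  operator: '>=',
  threshold: 1000,
};

// A breaking point is hit when the observed value satisfies the comparison,
// at which point the test is stopped.
function isHit(bp: BreakingPoint, observedValue: number): boolean {
  switch (bp.operator) {
    case '>=': return observedValue >= bp.threshold;
    case '>':  return observedValue > bp.threshold;
    case '<=': return observedValue <= bp.threshold;
    case '<':  return observedValue < bp.threshold;
  }
}
```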

[Screenshot: Breaking Point]

Getting Started

While creating or editing a test configuration there is a Breaking Point section.

Add a breaking point by selecting a common template from the dropdown, or select Custom... to create your own.

[Screenshot: Breaking Point Dropdown]

General Notes

  1. Breaking point monitoring only starts after the ramp-up period of your test. The default ramp-up period is 1 minute, which can be changed per test configuration (see the sketch after this list).
  2. If your scenario is a script that creates custom metrics, the custom metrics will only be available in the dropdown while creating a breaking point AFTER the first execution of your test. You can still manually enter the metric name even before the first execution.
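To make the ramp-up rule from note 1 concrete, here is a minimal sketch (not Testable's internal logic) of excluding samples from the ramp-up window before checking breaking points:

```typescript
// Hypothetical per-minute metric samples for a test execution.
interface MinuteSample {
  minute: number; // minutes elapsed since the test started, 1-based
  value: number;
}

// Only samples after the ramp-up period (default 1 minute) are monitored.
function samplesToMonitor(samples: MinuteSample[], rampUpMinutes = 1): MinuteSample[] {
  return samples.filter((s) => s.minute > rampUpMinutes);
}
```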

Breaking Point Types

There are 4 types of breaking points:

  1. Active value: Monitor the value of the specified metric during the most recent minute of your test execution.
  2. Value during this test: Monitor the value of the specified metric across the entire test execution.
  3. Change minute over minute: Monitor the percentage change in the specified metric from the previous minute to the most recent minute of your test execution.
  4. Change from previous test runs: Monitor the percentage change in the specified metric from a weighted average of up to 10 recent test executions to this current test execution. If there are fewer than 10 test executions for your configuration, all executions are included. If this is the first execution, no breaking point of this type will be triggered.
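The following sketch (illustrative only, not Testable's implementation) shows roughly how the four monitored values could be derived from per-minute samples and previous runs; the mean aggregation and the linearly decaying weights are assumptions:

```typescript
// Per-minute values of a metric during the current test, oldest first (ms).
const perMinute = [220, 260, 310, 405];
// Final values of the same metric from up to 10 recent runs, newest first (ms).
const previousRuns = [300, 280, 320];

// 1. Active value: the most recent minute.
const activeValue = perMinute[perMinute.length - 1];

// 2. Value during this test: aggregated over the whole run (mean shown here).
const valueDuringTest = perMinute.reduce((sum, v) => sum + v, 0) / perMinute.length;

// 3. Change minute over minute: % change from the previous minute to the latest.
const previousMinute = perMinute[perMinute.length - 2];
const changeMinuteOverMinute = ((activeValue - previousMinute) / previousMinute) * 100;

// 4. Change from previous test runs: % change versus a weighted average of
//    up to 10 recent runs (the weighting scheme here is an assumption).
const weights = previousRuns.map((_, i) => previousRuns.length - i);
const weightedAvg =
  previousRuns.reduce((sum, v, i) => sum + v * weights[i], 0) /
  weights.reduce((sum, w) => sum + w, 0);
const changeFromPreviousRuns = ((valueDuringTest - weightedAvg) / weightedAvg) * 100;
```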

Metrics

Testable captures a range of standard metrics related to response time, concurrency, throughput, and success rate. All metrics are available for use in breaking points. See the metrics glossary for a more precise definition of each metric. Any custom metric is also available for monitoring.

Counters: For counter metrics (e.g. Success), you can monitor the metric itself or the metric as a percent of total requests made during the test or time interval.
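For example, a Success counter could be monitored either way (hypothetical numbers):

```typescript
const successCount = 950;   // successful requests
const totalRequests = 1000; // all requests in the test or time interval

const successValue = successCount;                           // 950
const successPercent = (successCount / totalRequests) * 100; // 95%
```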

Timings: For timing metrics (e.g. Response Time), several aggregations are available, including percentiles, count, standard deviation, max, min, mean, and median. All of these aggregations can be monitored in a breaking point.
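As a quick sketch of what these aggregations mean (the sample values and the nearest-rank percentile method are assumptions):

```typescript
// Hypothetical response time samples (ms).
const responseTimes = [120, 250, 310, 95, 480, 220, 1010, 180];

const sorted = [...responseTimes].sort((a, b) => a - b);
const count = sorted.length;
const min = sorted[0];
const max = sorted[count - 1];
const mean = sorted.reduce((s, v) => s + v, 0) / count;
const median =
  count % 2 === 0
    ? (sorted[count / 2 - 1] + sorted[count / 2]) / 2
    : sorted[(count - 1) / 2];
// Nearest-rank 95th percentile; Testable's exact percentile method may differ.
const p95 = sorted[Math.min(count - 1, Math.ceil(0.95 * count) - 1)];
const stdDev = Math.sqrt(sorted.reduce((s, v) => s + (v - mean) ** 2, 0) / count);
```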

Histograms: For histograms (e.g. Http Response Code), a metric is available for each bucket, as well as the bucket as a percent of the total sum across all histogram buckets. For example, you can monitor the number of 200 HTTP response codes or the percent of all HTTP responses that had a 200 status.
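Continuing that example with hypothetical numbers, each bucket can be monitored as a count or as a percent of all responses:

```typescript
// Hypothetical Http Response Code histogram.
const httpResponseCodes: Record<string, number> = {
  '200': 940,
  '404': 35,
  '500': 25,
};

const totalResponses = Object.values(httpResponseCodes).reduce((s, v) => s + v, 0);

const count200 = httpResponseCodes['200'];            // 940 responses with status 200
const percent200 = (count200 / totalResponses) * 100; // 94% of all responses
```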