What’s New? Result analysis tool and more!

November 21, 2017

This month we are adding a new Result Analysis tool that helps your team quickly pinpoint and analyze performance issues. Other new features include metric namespaces, charting improvements, metric detail improvements, and a new screen with a credit calculation breakdown. Read on for all the details.

Result Analysis

The new result analysis tool is available within the Results widget on the test results view. It gives users an easy way to analyze and pinpoint performance issues that occurred during a load test.

Several types of analysis are supported: slow network requests, failed network requests, and slowest iterations of the test scenario. These are often a good starting point for analyzing what went wrong during a test.

Select a resource in the “By Resource” tab to filter the results to that resource only. The results can also be filtered by test runner region. The tool is available for existing test results as well as all future load tests.

Metric Namespace

All metrics now have a namespace. Standard metrics are available in the Testable namespace while user metrics default to the User namespace. This creates a nice separation between user and system metrics when building results charts and widgets.

User-captured metrics can use any namespace other than Testable. For example:



result().timing({ namespace: 'My Namespace', name: 'appInitMs', val: 100, units: 'ms' });


User-captured metrics appear in the Summary widget by default for easy access without needing to customize the result view.

Charting Improvements

Charting metered metrics like memory, CPU, and network bandwidth now comes with several customization options.

The new options include:

Showing peak usage on a single test runner instead of the default sum across all test runners. This makes it easier to see when the test runners are getting overloaded during a test run.
An option to hide the breakdown by test runner.

Metric Detail Improvements

Click the magnifying glass next to any timing or histogram metric to see a full breakdown of that metric.

For timing metrics this includes the count, all percentiles, mean, median, min, max, and standard deviation. For each aggregation, the change from previous test runs is also shown if the test has been run more than once.

For histogram metrics this includes the count per bucket as well as each bucket's percentage of the total. Each bucket count and percentage also shows the change from previous test runs if the test has been run more than once.
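As a sketch of how a histogram metric might be captured from a test script so that it shows up in this detail view: the `result().histogram(...)` call and its fields below are assumptions modeled on the shape of the `result().timing(...)` example above, not confirmed API.

```javascript
// Hypothetical sketch: capture a histogram metric from a Testable script.
// The histogram() call and its parameters mirror the timing() example
// earlier in this post and are assumptions, not documented API.
result().histogram({
  namespace: 'My Namespace', // any namespace other than Testable
  name: 'responseCodes',     // metric name shown in the results
  key: '200',                // bucket to increment
  val: 1                     // amount added to that bucket's count
});
```

Inside a Testable script, `result()` is provided by the sandbox, so no imports are needed.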

Credit Calculation Breakdown

For billing purposes, tests run on the Testable platform incur a cost, measured in credits, that depends on the size of the test. A breakdown of how the credits for each test were calculated is now available on the Account => Credits/Billing page. Double-click any row in the Details table to view the breakdown.

In addition to the new features there were numerous infrastructure and library updates, bug fixes, performance improvements, and capacity increases during the last month.

Try out the new features and let us know if you have any feedback or concerns. Happy testing!
