Testable March Feature Roundup

March 13, 2017

This month brings several new features and improvements. Read on for more details.

Create/Edit/View/Download All Files

Until now, the only way to create, edit, or view a file that is part of a Testable scenario was to upload or download it. This can be pretty inconvenient, especially when multiple team members are collaborating on a test.

Click the name of a file to view or edit it within the browser, or click the Create File button to add a new file to the test. Note that only text files can be edited in the browser.

A Download All button was also added; it zips up all files and downloads them as a single archive. It is available both on a test scenario and on any test result that captures screenshots or other files.

Improved Test Runner Info

Allocating the right number of test runner instances to successfully execute your test case can be a tricky process that involves some trial and error.

Until now we only reported the total aggregated memory and CPU used by all test runners. This made it difficult to figure out if any specific test runner was overloaded.

With this release we introduce several new features:

1. Test Runner network usage

Tests run using on-demand infrastructure now report network interface utilization (in bytes/sec). Load tests are often constrained by network saturation rather than memory or CPU, so this metric can be useful in determining whether a test runner is overloaded. Note that this metric is not available for Shared grid tests.

2. Per Test Runner reporting

In addition to the aggregated test runner information, we now provide a table with one row per test runner instance. Click on any row in the table to drill down into that test runner; the peak totals and chart will update to show that test runner only.

If you are using the default dashboard for viewing test results, these new details will automatically appear in the section that previously displayed the Memory and CPU chart.

If you have a custom dashboard, first run your test again so that these new details are captured. Then click the configure icon in the upper right of the widget and select the new “Metered” metrics (memory, CPU, test runner bandwidth) to display the new details. Save your dashboard to persist the change.

Look out for more useful changes on this front in the coming months. Our goal is to eventually size the infrastructure correctly for you, without any manual intervention; this feature is a necessary step on that roadmap.

Test Progress

The test progress widget has received several improvements. On larger tests it previously appeared to get “stuck” at several points; we now display better progress information for each step of the test.

Currently Running Overall Load

On the home page you could previously see a table with all currently running tests and their progress. We have now added overall load metrics to that view, giving you an idea of the total load being generated across all currently running tests.

Response Code Histogram

Currently the “httpResponseCode” histogram provides a breakdown of responses by HTTP response code. This metric has a couple of shortcomings:

1. It does not include connection errors where no HTTP response code is received.

2. It is not available for non-HTTP tests like websockets, socketio, etc.

We have now added a new “responseCode” histogram which is available for all test types and includes connection errors. You can find it in the histogram list when configuring a chart. Go ahead and add it to your dashboard!

Export Metrics to CSV or Clipboard

In addition to exporting the raw results, you can now export the aggregated metrics by resource. Find the export buttons in the upper right of the table.

Create New Test Configuration Without Starting It

Previously, when creating a new test configuration there was no way to save it without also starting the first test execution. You will now find two buttons: one to save the configuration and one to save it and start a run. This is useful when you want to set up test configurations ahead of the first time you actually run them.

Override Resource Name

Testable aggregates metrics by resource (a label for a network-accessed resource). Until now, the system-generated metrics were associated with resource names that could not be changed unless you were using JMeter. By default the resource name includes the base URL plus up to two path parts.

The resource names can now be overridden or used in a Node.js script as follows:

var results = require('testable-utils').results;

// returns 'GET https://www.google.com'
results.toResourceName('https://www.google.com', 'GET');

// use full URLs as resource labels
results.toResourceName = function(url, method) {
  return method + ' ' + url;
};
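
With the override in place, each distinct URL becomes its own resource label instead of being truncated to the base URL plus up to two path parts, and all metrics aggregate accordingly.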

Attach Custom Metrics to Network Calls

Our custom metrics API allows you to capture your own custom metrics with optional resource name and URL to be used for aggregation.
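
For reference, capturing a standalone custom metric looks something like the minimal sketch below. It assumes that results can be invoked with the optional resource name and URL mentioned above; the resource label, URL, and metric name are purely illustrative:

var results = require('testable-utils').results;

// Record a custom counter against an illustrative resource name and URL;
// both arguments are optional and only affect how the metric is aggregated.
results('IBM Stock Quote', 'http://sample.testable.io/stocks/IBM').counter('slowRequests', 1);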

However it is sometimes useful to instead attach or modify metrics associated with a network call so that it gets aggregated on the exact same dimensions as all other metrics Testable captures for that network call.

Let's use a concrete example to make this clearer. In this example we want to mark as failures any HTTP requests that return a success status code (e.g. 200) but whose body is “bad”.

var results = require('testable-utils').results;
var request = require('request');

request.get('http://sample.testable.io/stocks/IBM', function (error, response, body) {
  if (body === 'bad') {
    results.current.counter('success', -1);
    results.current.setTraceStatus('Custom Error');
  }
});

Within a network library event handler (e.g. request, http, ws, socketio), the “results.current” variable gives you access to the current Testable result. The above example simply cancels out the automatic success increment and changes the trace status to “Custom Error”. Note that “results.current” will be undefined outside of a network library event handler. See our documentation for more details.
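
If shared helper code can run both inside and outside of a network library event handler, it is worth guarding against that undefined case. A minimal sketch, reusing the calls from the example above:

var results = require('testable-utils').results;

// results.current is only defined inside a network library event handler,
// so check it before recording anything against the current result.
function markAsCustomError() {
  if (results.current) {
    results.current.counter('success', -1);
    results.current.setTraceStatus('Custom Error');
  }
}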

Smoke Test Only Logging

A new “trace” logging level has been introduced that only gets captured when smoke testing your script. This allows you to leave development-time logging in place when executing your test at scale, without generating an overwhelming amount of log output, which can otherwise saturate the network bandwidth of the test runner.

logger.trace('something I only want when smoke testing');
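
For example, assuming the script-scoped logger also exposes the usual info level, verbose detail can sit alongside messages you always want captured:

// Captured in every run (assumes the logger exposes an info level):
logger.info('starting the user checkout flow');

// Captured only when smoke testing, so it cannot flood a full-scale run:
logger.trace('verbose per-iteration detail');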

There is Always More…

In addition to all the above features and enhancements here is a list of some of the smaller changes:

  • SlimerJS 0.10.3 upgrade

  • PhantomJS 2.5.0-beta added

  • Doubled the capacity of the Testable central infrastructure

  • Performance and stability improvements

  • Edit the URL on a Recording to change the targeted URL after the Recording has already started

Try out the new features and let us know if you have any feedback or concerns. Happy testing!
