Introducing assertions, automatic test runner sizing, and more

February 12, 2018

This month we introduce assertions to improve the depth of our integration with tools like JMeter, Gatling, and Webdriver.io. Other new features include automatic test runner sizing, JMeter and Gatling improvements, new chart customization options, a new peak requests/second metric, and support for AWS tags. Read on for all the details.

Assertions

For scenario types that support assertions or a test framework, those results are now captured as part of your test results. The functionality works with the following scenario types:

  • JMeter

  • Gatling

  • Selenium (including Webdriver.io)

A new Assertions widget is displayed by default for Selenium, JMeter, and Gatling test results. If you have a custom view, you can add it via the action menu => Show Assertions.

To get the raw assertion results, press the download button in the upper right corner of the widget. This exports a CSV that includes each assertion and any errors that occurred.

Assertion results are also displayed while smoke testing a scenario and in the Results Analysis tab of the results when analyzing slow test iterations.

Automatic Test Runner Sizing

A question we often get is “how many test runner instances do I need to run my test?” Our default recommendations are now built into the tool.

When adding an On Demand test runner to a test configuration, we now set the number of instances (i.e. servers) automatically for you. The recommendation takes into account the number of concurrent users, the instance type, and the scenario type. You are free to override it as required.

Another important change is that instead of specifying the number of instances, you now specify the number of concurrent users (or Gatling/JMeter instances) per EC2 instance. That way the instance count automatically adjusts up or down as you change the number of concurrent users or JMeter/Gatling instances.
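
To make the relationship concrete, here is a rough sketch of this kind of calculation (not the exact recommendation logic used by Testable; the numbers are placeholders):

```scala
// Rough sketch: derive the number of EC2 instances from the total concurrent
// users and the per-instance setting described above.
object RunnerSizing {
  def instancesNeeded(totalConcurrentUsers: Int, usersPerInstance: Int): Int =
    math.ceil(totalConcurrentUsers.toDouble / usersPerInstance).toInt
}

// e.g. 5,000 concurrent users at 500 users per EC2 instance => 10 instances
// RunnerSizing.instancesNeeded(5000, 500) == 10
```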

JMeter and Gatling Improvements

Simpler Test Configuration
JMeter and Gatling are load generating tools in their own right and provide their own way to specify the duration and ramp up of virtual users. Testable also supported these settings, which caused confusion in the context of JMeter and Gatling. The ramp up and duration settings are therefore no longer available in the test configuration for JMeter and Gatling tests. Testable now focuses on distributing the JMeter/Gatling instances across the test runners and leaves the ramp up and duration to the respective tools.
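
For example, a Gatling 2.x simulation can define its own ramp up and overall duration in its injection profile, along these lines (an illustrative sketch only; the URL and numbers are placeholders):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Minimal Gatling 2.x sketch: ramp up and duration live in the simulation
// itself rather than in the Testable test configuration.
class BrowseSimulation extends Simulation {
  val httpConf = http.baseURL("https://example.com") // placeholder URL

  val scn = scenario("Browse").exec(http("home").get("/"))

  setUp(
    scn.inject(rampUsers(200) over (2 minutes)) // ramp 200 users over 2 minutes
  ).protocols(httpConf)
   .maxDuration(10 minutes)                     // cap the overall test duration
}
```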

Virtual Users vs Concurrent Users
The virtual user metric counts how many users were simulated during your test, whether or not they executed concurrently. The concurrent user metric counts the number of virtual users that ran at the same time. For JMeter and Gatling we now display both metrics so you can clearly see how many users you simulated and how many of them were running concurrently.
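
As a rough worked example of the difference (illustrative arithmetic only, not how the metrics are measured):

```scala
// Illustrative estimate: if users arrive at a steady rate and each iteration
// takes roughly the same time, concurrency is arrival rate times iteration
// time. All numbers below are made up.
object ConcurrencyEstimate {
  def approxConcurrentUsers(virtualUsers: Int,
                            injectionSeconds: Double,
                            iterationSeconds: Double): Double =
    (virtualUsers / injectionSeconds) * iterationSeconds
}

// 600 virtual users injected over 10 minutes, each taking ~60s to complete:
// roughly 60 users are active at any moment even though 600 were simulated.
// ConcurrencyEstimate.approxConcurrentUsers(600, 600.0, 60.0) == 60.0
```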

JMeter Concurrent Users
Test results now report the true concurrent user count aggregated across the active JMeter instances instead of assuming all threads were started immediately. This is available in the default JMeter test result view as “Concurrent Users” and also when creating a new chart as “JMeter Active Threads”. If your test plan uses a ramp up period this is especially useful, since the previous metric assumed all threads were active from the start.
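
To see why ramp up matters for this metric, here is a small illustrative calculation (not how the metric is computed internally; the numbers are placeholders):

```scala
// During a JMeter ramp up, threads start gradually, so the number of active
// (concurrent) threads at a given moment is well below the thread group size.
object ActiveThreads {
  def activeAt(elapsedSeconds: Double, totalThreads: Int, rampUpSeconds: Double): Int =
    math.min(totalThreads, math.floor(elapsedSeconds / rampUpSeconds * totalThreads).toInt)
}

// A 100 thread group with a 300 second ramp up has only ~50 threads active
// halfway through the ramp up: ActiveThreads.activeAt(150, 100, 300) == 50
```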

Gatling Upgrade
Gatling 2.3.0 is now available for use and is the default version chosen for new scenarios. Gatling 2.2.4 continues to be available as well.

AWS Updates

Tags
For test runner instances running within your AWS account, you can now add custom tags. A default set of custom tags can be specified under Account => Test Runners => [My AWS Account] => Configure. Tags can also be specified per test configuration as required.

M5 Instances In New Regions
The m5 instance family is now available for use on Testable in all regions where it is supported by AWS, including N. Virginia, Oregon, N. California, Ohio, Ireland, and Sydney.

Chart Customization Options

The graph type (line vs column) and color are now customizable when configuring a chart in your test results.

And More…

A few other enhancements include:


  • A new metric that reports the peak requests/second during your test. Previously only the average was available as a metric.

  • Recent activity per scenario and test configuration. See an audit trail of changes for one specific scenario or test configuration. Previously this was only available across your entire organization or per test case.

  • Traces and files in the results now include the region name where they were captured.



In addition to the new features, various infrastructure and library updates, bug fixes, and performance improvements were applied as usual. Try out the new features and let us know if you have any feedback or concerns. Happy testing!
