Testable screenshot diffs, AWS updates, and more

January 17, 2018

This month we introduce automatic screenshot diffs that enable easy anomaly detection. Other new features include updates to keep up with new AWS features, simplified test runner utilization metrics, new virtual user metrics, and better result sorting. Read on for all the details.

Screenshot Diffs

For scenario types that support screenshot capture (Webdriver.io, PhantomJS, SlimerJS), we now automatically compare any screenshots against the previous test run or against a baseline image uploaded to the scenario.
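
For example, a Webdriver.io scenario only needs to save a screenshot for it to be picked up. A minimal sketch, assuming Webdriver.io v4 in sync mode (the URL and file name are placeholders):

    // Minimal Webdriver.io (v4, sync mode) scenario that captures a screenshot.
    describe('homepage', function () {
      it('captures a screenshot for diffing', function () {
        browser.url('https://example.com');
        // Keep the file name stable across runs so the diff engine can
        // match it against the screenshot from the previous test run.
        browser.saveScreenshot('homepage.png');
      });
    });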

By default, each screenshot is compared against the screenshot with the most similar name in the previous test run, with any differences highlighted in red.

Click the image name or change percentage to view the details of the comparison. This can be done in the Images widget, or in the Results => Files tab by double-clicking any row that is an image.

The percent change in the image (pixel-by-pixel) is also captured as a metric called “Image Diff” that can be added to the results view or used to detect anomalies by setting success criteria and breaking points.
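
To give a feel for what this metric measures, here is a rough sketch of a pixel-by-pixel percent change using the open-source pixelmatch library. This is an illustration only, not our actual implementation, and the file names are placeholders:

    // Sketch of a pixel-by-pixel percent-change computation using pixelmatch.
    // Both images must have the same dimensions.
    const fs = require('fs');
    const { PNG } = require('pngjs');
    const pixelmatch = require('pixelmatch');

    const before = PNG.sync.read(fs.readFileSync('previous-run/homepage.png'));
    const after = PNG.sync.read(fs.readFileSync('current-run/homepage.png'));
    const diff = new PNG({ width: before.width, height: before.height });

    // Returns the number of differing pixels and fills "diff" with an
    // image in which the changed pixels are highlighted.
    const changed = pixelmatch(before.data, after.data, diff.data,
                               before.width, before.height, { threshold: 0.1 });

    const imageDiff = (100 * changed) / (before.width * before.height);
    console.log('Image Diff: ' + imageDiff.toFixed(2) + '%');
    fs.writeFileSync('homepage-diff.png', PNG.sync.write(diff));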

Screenshots can also be compared against one or more baseline images uploaded to the scenario, or the comparison can be turned off completely. To change the mode, a setting is available in your test configuration under Advanced Options.

AWS Related Updates

To keep up with new AWS features, we have made several related changes.

T2 Unlimited

A T2 Unlimited instance can sustain high CPU performance for as long as required. All T2 family test runner instances that run within Testable’s account will automatically have this feature turned on.

For test runner instances within your own AWS account, this option can be toggled on or off as desired.
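
For reference, the same setting can be controlled directly through the EC2 API. A sketch using the AWS SDK for JavaScript (the region and instance ID are placeholders):

    // Sketch of toggling T2 Unlimited on an existing instance via the EC2 API.
    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2({ region: 'us-east-1' });

    ec2.modifyInstanceCreditSpecification({
      InstanceCreditSpecifications: [
        // 'unlimited' turns T2 Unlimited on; 'standard' turns it back off.
        { InstanceId: 'i-0123456789abcdef0', CpuCredits: 'unlimited' }
      ]
    }, function (err, data) {
      if (err) console.error(err);
      else console.log(data.SuccessfulInstanceCreditSpecifications);
    });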

M5 Instances

The M5 instance family is now available for use on Testable in all regions where it is supported by AWS, including N. Virginia, Oregon, and Ireland.

From the announcement: “Based on Custom Intel® Xeon® Platinum 8175M series processors running at 2.5 GHz, the M5 instances are designed for highly demanding workloads and will deliver 14% better price/performance than the M4 instances on a per-core basis.”

Spot Fleets

If you choose to spin up spot test runner instances, they are now run as a spot fleet instead of the older-style spot instance request. This allows us to get the best pricing across all subnets within the chosen VPC.
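
For illustration, a spot fleet request that spans multiple subnets looks roughly like this (all IDs and the IAM role are placeholders, not values we actually use):

    // Sketch of a spot fleet request spanning multiple subnets, letting
    // the fleet fill capacity wherever spot pricing is lowest.
    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2({ region: 'us-east-1' });

    ec2.requestSpotFleet({
      SpotFleetRequestConfig: {
        IamFleetRole: 'arn:aws:iam::123456789012:role/my-spot-fleet-role',
        TargetCapacity: 2,
        AllocationStrategy: 'lowestPrice',
        LaunchSpecifications: [{
          ImageId: 'ami-0123456789abcdef0',
          InstanceType: 'm5.large',
          // A comma-separated list lets the fleet choose among subnets.
          SubnetId: 'subnet-aaaa1111,subnet-bbbb2222'
        }]
      }
    }, function (err, data) {
      if (err) console.error(err);
      else console.log('Fleet: ' + data.SpotFleetRequestId);
    });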

Simplified Subnet Option

For test runner instances within your own AWS account, you no longer have to select a single subnet. Choose the “Any subnet” option and Testable will randomly pick one of the subnets within the chosen VPC. In the case of spot instances, the fleet will be configured to use the subnet with the lowest price.

Simplified Test Runner Utilization

In the past we reported only the absolute memory usage and CPU load average per test runner. This meant you had to remember how much memory/CPU your instance type had in order to determine whether a test runner was overloaded.

We now also provide and display a percent utilization metric for both CPU and memory. If any test runner instance exceeds 80% CPU or memory utilization, you will also receive a warning message as part of the test summary.
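
As a rough illustration of what these percentages represent (not our exact computation), the normalization can be sketched in a few lines of Node.js:

    // Rough sketch of deriving percent utilization from raw numbers
    // on a test runner.
    const os = require('os');

    // 1-minute load average normalized by core count approximates CPU %.
    const cpuPercent = (100 * os.loadavg()[0]) / os.cpus().length;
    const memPercent = (100 * (os.totalmem() - os.freemem())) / os.totalmem();

    console.log('CPU %: ' + cpuPercent.toFixed(1) +
                ', Memory %: ' + memPercent.toFixed(1));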

These metrics are available as “CPU %” and “Memory %” and can be added to any result view, success criteria, or breaking point.

New Virtual User Metrics

Two new metrics are now available: Virtual Users Started and Virtual Users Finished. These report how many virtual users started and finished as part of your test, regardless of whether they ran concurrently. This can be useful for detecting errors when starting virtual users.

Better Result Sorting

Previously, when viewing test results the rows in the “By Resources” breakdown were sorted alphabetically by resource name. You can now also see the results sorted in the order they occurred in your test scenario. For example, with JMeter you can see the results in the order the steps in your test plan were executed, similar to how JMeter itself displays results.

In addition to the new features, various infrastructure and library updates, bug fixes, and performance improvements were applied during the last month. This includes a significant data layer redesign that enables us to scale up our test result processing capabilities.

Try out the new features and let us know if you have any feedback or concerns. Happy testing!