View Test Results
If you have previously gone through the steps to Execute a Test, it is easy to pull up the results again. Log in first, then click the test case name on the left. Clicking the 'Last Executed' time on any test pulls up the most recent result.
To see historic results, click the test configuration name and then select the run of interest from the page that loads.
Each section of the results page is broken down here.
The overview on the results page gives you the following information:
- View: Which view is being displayed; the Default View is used unless you select another. Read the customize view guide for more details.
- Status: The status of this execution: one of Pending, Running, Processing Results, Completed, Cancelled by User, or Limit Breached.
- Load Parameters: Which test configuration was used and the associated load parameters. Click on the configuration name for more information like the execution history.
- Scenario: Which scenario was executed on each iteration of the test. Click through to view the scenario details.
- Action Menu: Actions related to this test, such as Repeat, Stop, Edit Configuration, Export to CSV, Share, and view customization options.
While the test is executing, this section shows "active" metrics, meaning metrics captured during the most recent interval.
See a quick high-level summary of how the results of this test execution compare with other recent runs of the same test configuration.
The logging widget shows all logging captured during execution, both system and scenario generated.
Logging can be viewed in tail mode to see the most recent log entries at the top.
Filters can be applied to narrow down the logging displayed by clicking the icon in the upper right of the widget.
Results can be viewed aggregated across all regions where the test executed or for one specific region.
This section shows metrics and a chart that give you an overview of performance during the test. Each metric can be toggled on/off on the chart. See how the results of this test compare with up to 10 recent test executions, as both a percentage change and a sparkline, to spot trends. The set of metrics shown is customizable as well.
The images widget shows you the most recent screenshots captured while running your test. Tools like Webdriver.io, PhantomJS, and SlimerJS support screenshot capture, and these images will appear here in a gallery layout. Click on any image to see it full size.
Testable also calculates the difference between each image and either the similarly named image from the previous run of this test or a baseline image uploaded to the scenario. See the configuration documentation for more details.
For Webdriver.io scenarios that utilize a test framework like Mocha this widget displays a summary of the test suite results including a listing of all suites, tests, average duration, errors, etc. For JMeter this will show assertion results.
The results are broken down by resource. A resource usually corresponds to a network resource like GET https://www.google.com, a JMeter Label, or a Gatling scenario name + request label, but can also be anything captured with a custom metric. The columns in this grid are customizable.
If you select a resource in the grid, all charts and traces become specific to that resource only.
Analyze the results of individual network requests to get a deep dive into any issues. See the slowest results, failed results, or all results that were a part of a particular iteration of your scenario. You can also simply export all results to CSV to analyze further offline.
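As a sketch of the kind of offline analysis a CSV export enables, the snippet below computes a per-resource count, average duration, and error rate. The column names used here (resource, durationMs, success) are hypothetical, chosen for illustration; check the header row of your actual export before adapting this.

```javascript
// Sketch: summarize an exported results CSV offline (Node.js).
// The column names below are assumptions, not the actual export format.
const sampleCsv = [
  'resource,durationMs,success',
  'GET https://www.google.com,120,true',
  'GET https://www.google.com,180,true',
  'GET https://www.google.com,450,false',
].join('\n');

function summarize(csv) {
  const lines = csv.trim().split('\n');
  const header = lines[0].split(',');
  const iResource = header.indexOf('resource');
  const iDuration = header.indexOf('durationMs');
  const iSuccess = header.indexOf('success');
  const byResource = {};
  for (const line of lines.slice(1)) {
    // Naive comma split: fine here because no field contains a comma.
    const fields = line.split(',');
    const key = fields[iResource];
    const stat =
      byResource[key] || (byResource[key] = { count: 0, totalMs: 0, errors: 0 });
    stat.count += 1;
    stat.totalMs += Number(fields[iDuration]);
    if (fields[iSuccess] !== 'true') stat.errors += 1;
  }
  // Derive averages and error rates per resource.
  for (const stat of Object.values(byResource)) {
    stat.avgMs = stat.totalMs / stat.count;
    stat.errorRate = stat.errors / stat.count;
  }
  return byResource;
}

console.log(summarize(sampleCsv));
```

To run this against a real export, replace sampleCsv with the file contents (e.g. read via fs.readFileSync) and adjust the column names to match the export's header row.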
Traces allow you to view all the details of a particular connection made during test execution including metrics, data sent, and data received.
If your scenario captures output files (e.g. screenshots), a sampling of these will be available on the Files tab. To capture all output files from your test, update the setting by selecting the Edit Configuration action => Advanced Options => Capture All Output. The method for capturing output files is specific to the scenario type. See the documentation for your scenario type for more details (e.g. Selenium, PhantomJS, etc.).
Any metric can be charted. Data points are captured every 10 seconds. Like everything else, charts are customizable.
View details of memory, CPU, and bandwidth utilization across the test runners as well as broken down by test runner. Select a test runner in the grid to make the metrics and chart drill down to that test runner only.
The locations map shows where the test runners used to execute your test are located around the globe. Hover over any region to see the peak memory and CPU utilization across all the test runners within that region.