Introduction to Performance Testing

September 21, 2018

This post is focused on how we can use Testable to performance test an echo service, capture the echo latency, and analyze the results.

More details on the echo service can be found here.

Step 1: Create a Test Case

Make sure you sign up for a Testable account first. After logging in, click the New Test Case button, give it a name, and specify the URL (in our example).

Step 2: Write Test Script

Testable scripts are simply JavaScript that executes in a sandboxed Node.js environment. Once you finish Step 1, click Next and select Script as the scenario type. You can get started by using the template dropdown and selecting WebSocket -> connection, but in this case let’s use the following code instead:

var connectedAt = 0;
const socket = socketio(''); // echo service URL goes here
socket.on('connect', function() {
  connectedAt = Date.now();
  socket.emit('event', 'This is a test');
});
socket.on('event', function(data) {
  results().timing({ namespace: 'User', name: 'echoLatencyMs', val: Date.now() - connectedAt, units: 'ms' });
  socket.close();
});

This code uses the socket.io library to:

  1. Connect to the server
  2. Send a message
  3. Time how long it takes to get the echo response
  4. Close the connection

Test out your script by pressing the Smoke Test button in the upper right, which executes it once on a shared Testable agent. Any captured metrics and logging will appear in the Smoke Test Output tab. Logging at the TRACE level allows us to capture logging during smoke testing only and ignore it when we execute our load test at scale.

Example Smoke Test Output

Step 3: Configure a Load Test

Click Next to move onto the Configuration step. We now define exactly how to execute the scenario we defined in Step 2.

  • 10 virtual users in each region.
  • 1 minute to ramp up the virtual users.
  • 1 minute duration.
  • Two regions (AWS N Virginia and AWS Oregon).
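These settings imply a peak of 20 concurrent virtual users, which is worth sanity-checking with quick arithmetic before launching (the variable names below are ours, not Testable's):

```javascript
// Illustrative arithmetic for the configuration above.
const usersPerRegion = 10;
const regions = 2;          // AWS N. Virginia and AWS Oregon
const rampUpSeconds = 60;   // 1 minute ramp

const peakConcurrentUsers = usersPerRegion * regions;     // 10 x 2 = 20
const secondsPerNewUser = rampUpSeconds / usersPerRegion; // per region

console.log('peak concurrent users:', peakConcurrentUsers);
console.log('one new user per region every', secondsPerNewUser, 'seconds');
```

So each region starts roughly one new virtual user every 6 seconds until all 20 are running.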

Click the Start Test button and your load test is off and running!

Step 4: View the Results

By now you should see results flowing in as the test executes. The default dashboard will show a summary, results grid, and graphs of the system-captured metrics.

After the test finishes you get a nice summary of the results across the top as well.

The echo latency metric can also be added to the overview, results grid, or to a new chart. Read this for more information on customizing the results view.

And that’s it! We’ve set up a test case, captured an example custom metric, run it at scale, and analyzed the results.