Introduction to Socket.io Performance Testing

This post is focused on how we can use Testable to performance test a Socket.io echo service, capture the echo latency, and analyze the results.

More details on the Socket.io echo service can be found here.

Step 1: Create a Test Case

Make sure you sign up for a Testable account first. After logging in, click the New Test Case button, give it a name, and specify the URL (http://sample.testable.io:5811 in our example).

Step 2: Write Test Script

Testable scripts are simply JavaScript that executes in a sandboxed Node.js environment. Once you finish Step 1, click Next and select Script as the scenario type. You can get started by using the template dropdown and selecting WebSocket -> Socket.io connection, but in this case let’s use the following code instead:

var connectedAt = 0;
const socket = socketio('http://sample.testable.io:5811');
socket.on('connect', function() {
  log.trace('connected');
  connectedAt = Date.now();
  socket.emit('event', 'This is a test');
});
socket.on('event', function(data) {
  log.trace(data);
  results().timing({ namespace: 'User', name: 'echoLatencyMs', val: Date.now() - connectedAt, units: 'ms' });
  socket.close();
});

This code uses the socket.io-client library to:

  1. Connect to the server
  2. Send a message
  3. Time how long it takes to get the echo response (a variant that times repeated echoes is sketched after this list)
  4. Close the connection
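
The script above times a single echo round trip per connection. If you want more echoLatencyMs samples from each virtual user, a variant along the following lines should work. This is only a sketch: it assumes the same sandbox globals (socketio, results) and that the service echoes every 'event' message it receives; the message count of 5 is arbitrary.

const TOTAL_MESSAGES = 5;
var sentAt = 0;
var echoesReceived = 0;

const socket = socketio('http://sample.testable.io:5811');

function sendNext() {
  sentAt = Date.now();
  socket.emit('event', 'This is test message ' + (echoesReceived + 1));
}

socket.on('connect', function() {
  sendNext();
});

socket.on('event', function(data) {
  // Record one echoLatencyMs sample per round trip
  results().timing({ namespace: 'User', name: 'echoLatencyMs', val: Date.now() - sentAt, units: 'ms' });
  echoesReceived++;
  if (echoesReceived < TOTAL_MESSAGES) {
    sendNext();
  } else {
    socket.close();
  }
});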

Test out your script by pressing the Smoke Test button in the upper right, which executes it one time on a shared Testable agent. Any captured metrics and logging will appear in the Smoke Test Output tab. Logging at the TRACE level lets us capture output during smoke testing while ignoring it when we execute our load test at scale.
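
If a smoke test shows no metrics or logging at all, the connection itself may have failed. socket.io-client also emits a connect_error event, so an optional addition to the script above (again using the sandbox log object) can surface that in the smoke test output:

// Optional: log connection failures so they show up while smoke testing
socket.on('connect_error', function(err) {
  log.trace('connect_error: ' + err);
});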

Example Smoke Test Output


Step 3: Configure a Load Test

Click Next to move on to the Configuration step, where we define exactly how to execute the scenario from Step 2. For this example:

  • 10 virtual users in each region.
  • 1 minute to ramp up the virtual users.
  • 1 minute duration.
  • Two regions (AWS N Virginia and AWS Oregon).

Click the Start Test button and your Socket.io load test is off and running!

Step 4: View the Results

By now you should see results flowing in as the test executes. The default dashboard will show a summary, results grid, and graphs of the system-captured metrics.

After the test finishes you get a nice summary of the results across the top as well.

The echo latency metric can also be added to the overview, results grid, or to a new chart. Read this for more information on customizing the results view.

And that’s it! We’ve set up a Socket.io test case, captured an example custom metric, run it at scale, and analyzed the results.

WebSocket Performance Testing in 4 Simple Steps

This post is focused on how we can use Testable to performance test a WebSocket server, capture some useful custom metrics, and analyze the results.

We will use an example websocket server that supports subscribing to a stock symbol and receiving streaming dummy price updates once a second. More details on this service can be found here.

Step 1: Create a Test Case

Make sure you sign up for a Testable account first. After logging in, click the New Test Case button and give it a name (Websocket Demo).

Step 2: Write Test Script

Choose Node.js Script as the scenario type. Use the template dropdown and select Custom Metrics -> Measure subscription to first tick latency on a websocket.

You should see the following JavaScript code inserted into the Code tab:

var subscribeSentAt = 0;
const ws = new WebSocket("ws://sample.testable.io/streaming/websocket");

ws.on('open', function open() {
    subscribeSentAt = moment().valueOf();
    ws.send('{ "subscribe": "IBM" }');
});

ws.on('message', function(data, flags) {
    results('IBM').timing({ namespace: 'User', name: 'sub2tick', val: moment().valueOf() - subscribeSentAt, units: 'ms' });
    ws.close();
});

If you are familiar with Node.js and the ws module, this code should already look pretty clear. If not, let’s go through each block.

const ws = new WebSocket("ws://sample.testable.io/streaming/websocket");

This line opens a websocket connection to the sample service.

ws.on('open', function open() {
    subscribeSentAt = moment().valueOf();
    ws.send('{ "subscribe": "IBM" }');
});

Listen for the websocket open event. Once received, capture the current timestamp (ms) and send a subscription request for IBM.

ws.on('message', function(data, flags) {
    results('IBM').timing({ namespace: 'User', name: 'sub2tick', val: moment().valueOf() - subscribeSentAt, units: 'ms' });
    ws.close(); 
}); 

There are a few things happening here:

  1. Subscribe to the message event on the websocket which gets fired every time the server sends a message.
  2. On receiving a message (the price quote), measure the latency since the subscription request was sent. Capture this latency as a custom metric called sub2tick, grouped by symbol (IBM), which can be aggregated and analyzed when our test runs. If grouping by symbol is not useful, simply replace results('IBM') with results().
  3. Close the websocket. Otherwise price ticks will continue indefinitely until the test times out.

To see what the price update message looks like, add a log.trace(data) line to the above code for testing. The TRACE logging level will only be available when you smoke test your script and not when running it as a load test.
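
For example, the message handler with trace logging added might look like this:

ws.on('message', function(data, flags) {
    log.trace(data); // only captured during a smoke test, not a full load test
    results('IBM').timing({ namespace: 'User', name: 'sub2tick', val: moment().valueOf() - subscribeSentAt, units: 'ms' });
    ws.close();
});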

This code now defines the Scenario to execute at scale.

Test out your script by pressing the Smoke Test button in the upper right. This executes it one time on a shared Testable test runner. Any captured metrics and logging will appear in the Smoke Test Output tab.

Example Smoke Test Output

Notice that Testable captures a bunch of metrics automatically in addition to the custom metric we added in our script.

Step 3: Configure a Load Test

Click Next to move on to the Configuration step, where we define exactly how to execute the scenario from Step 2. For this example:

  • 10 virtual users in each region.
  • 1 minute to ramp up the virtual users.
  • 1 minute duration.
  • Two regions (AWS N Virginia and AWS Oregon).

Click the Start Test button and your test is off and running! Congratulations, you have officially created and run a load test. Now let’s look at analyzing the results.

Step 4: View the Results

By now you should see results flowing in as the test executes. The default dashboard will show logging, a summary, results grid, and graphs of the system-captured metrics.

After the test finishes you get a nice summary of the results across the top as well.

The “sub2tick” metric can also be added to the overview, results grid, or to a new chart. Read this for more information on customizing the results view.

And that’s it! We’ve set up a test case, captured custom metrics, run it at scale, and analyzed the results.

If you don’t want to write the script yourself, you can also record a websocket scenario.

Testable Update: AWS spot instances and better dashboards

This month we introduce the ability to generate load using AWS spot instances and improved dashboard sharing that enables you to customize and share different views for different audiences.

AWS Spot Instances

Last month we introduced AWS On Demand test runners. This month we are adding support for AWS spot instances, saving you money on non-critical test workloads. Simply check the checkbox and specify your desired maximum bid price. The max bid defaults to the on demand price for the chosen instance type and region.

Look out for more test runners in the near future to support on demand instances on other platforms like the Microsoft Cloud. Click here for more detailed documentation on our full set of test runner options.

Better Dashboards

Dashboards are now shareable across all test cases in your account. Previously each test case had its own set of dashboards.

When sharing results publicly, the selected dashboard is now associated with the share URL. This allows you to share the same results using several dashboards, with a unique URL for each. For example, you might want to share one high-level view with company executives and a different, more detailed view with your QA team.

Please check out all the new features and let us know if you have any thoughts or concerns. Happy testing!

Your AWS account or mine?

This month we introduce the ability to generate load from on demand, isolated AWS infrastructure using either your AWS account or ours, with more cloud providers coming soon.

AWS On Demand Load Generation

Until now, all load was generated on shared infrastructure: either shared across all accounts via our public grid, or shared across all tests within your account using our on-premises solution.

We now have a third option, and while we were at it we also did a major internal overhaul to support new test runners in the future.

From today you can generate load from isolated, on demand AWS instances that are spun up for each test separately. Use your own AWS account or ours.

When running a test you can select multiple test runners. This means you can run your scenario from multiple AWS regions, our shared public grid, and within your data center (or VPC) all at the same time.

Look out for more test runners in the near future to support on demand instances on other platforms like the Microsoft Cloud. Click here for more detailed documentation on the new features.

Customize Percentiles

When creating a new test configuration you can now specify exactly which percentiles you want to calculate for all timing metrics. This includes our own metrics (e.g. latency) as well as any custom ones you define in your test.

While we were at it, we also improved the speed at which we process and aggregate results by 10x using a new approach for calculating percentiles.

Please check out all the new features and let us know if you have any thoughts or concerns. Happy testing!