OpenFin application load testing

Typically, if you want to load test an OpenFin application, you need to reproduce the user journeys through your application using a tool like JMeter, Gatling, or Locust. This can require significant test engineering time and ongoing maintenance as your application evolves.

Testable now allows you to take your Selenium-driven OpenFin functional tests and run them as a load test with multiple concurrent virtual users across many instances and locations. An example of such a functional-test-turned-load-test can be found at openfin-wdio-testable-example.

In this post we will walk you, step by step, through how this test works and how to get it running on Testable.

The basic approach

The basic idea relies on the fact that an OpenFin application is essentially a wrapper around a Chromium application. This means we can use Selenium plus Chromedriver to launch our OpenFin application and drive its behavior using any Selenium bindings. For our example we chose Webdriver.io as our bindings.

The Webdriver.io project

Our Webdriver.io project follows a familiar pattern with a couple of small adjustments to get it to launch our OpenFin application. Let’s review each file in the project, its purpose, and any special settings related to OpenFin testing.

package.json

Webdriver.io runs on the Node.js runtime so we need to provide a package.json file with all dependencies required to run our test. No special dependencies are required to run an OpenFin application.

{
  "name": "openfin-wdio-testable-example",
  "version": "1.0.0",
  "private": true,
  "dependencies": {
    "chai": "4.2.0",
    "lodash": "4.17.14",
    "webdriverio": "5.10.0",
    "@wdio/cli": "5.10.0",
    "@wdio/dot-reporter": "5.9.3",
    "@wdio/concise-reporter": "^5.9.3",
    "@wdio/local-runner": "5.10.0",
    "@wdio/sync": "5.10.0",
    "@wdio/mocha-framework": "5.9.4"
  }
}

wdio.conf.js

In our Webdriver.io test runner configuration we need to set things up so that Chromedriver runs OpenFin instead of the Chrome browser. Luckily, chromeOptions -> binary serves this exact purpose. On Testable, the OpenFin application config URL is provided via the process.env.CONFIG_URL environment variable. When running locally we need to make sure to point it at the right config URL depending on the operating system:

const isWinOS = process.platform === 'win32';
const launchTarget = isWinOS ? 'RunOpenFin.bat' : `${process.cwd()}/RunOpenFin.sh`;
const CONFIG_URL = process.env.CONFIG_URL || (isWinOS ? `${process.cwd()}\\app_sample.json` : `${process.cwd()}/app_sample.json`);

exports.config = {
  specs : [
    'test.js'
  ],
  capabilities : [
    {
      browserName   : 'chrome',
      chromeOptions : {
        extensions : [],
        binary     : launchTarget,
        args       : [
          `--config=${CONFIG_URL}`
        ]
      }
    }
  ],
  host           : 'localhost',
  port           : 9515,
  reporters      : ['dot', 'concise'],
  path           : '/',
  logLevel       : 'error',
  coloredLogs    : true,
  framework      : 'mocha',
  waitforTimeout : 20000,
  mochaOpts      : {
    ui        : 'bdd',
    timeout   : 500000
  }
};
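To try this locally, you start a Chromedriver on the port referenced above (9515) and then launch the Webdriver.io test runner against this config. A sketch, assuming a Chromedriver compatible with the Chromium version bundled in your installed OpenFin runtime is on your PATH:

# start Chromedriver on the port the config points at (in a separate terminal)
chromedriver --port=9515

# run the Webdriver.io suite against wdio.conf.js
npx wdio wdio.conf.js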

Notice that when Chromedriver launches the “browser” it also passes the URL of the OpenFin application configuration via the --config argument.

RunOpenFin.bat

The Chrome launch script for Windows is in charge of launching our OpenFin runtime. It extracts the config URL and remote debugging port from the command line arguments passed by Chromedriver:

:loop  
 IF "%1"=="" GOTO cont  
 SET opt=%1
 IF "%opt%" == "--remote-debugging-port" (
    SET debuggingPort=%2
 )
 IF "%opt%" == "--config" (
    SET startupURL=%2
 )
 SHIFT & GOTO loop

It also sets the path to the OpenFin runtime binary on the Testable test runner:

SET openfinLocation=C:\Users\Administrator\AppData\Local\OpenFin

Testable also launches a MITM proxy (browsermob-proxy) to capture metrics on all network requests made by your OpenFin application. The proxy address is available in the testable_proxy environment variable. To get this to work correctly, the batch script needs to pass the proxy as a runtime argument when launching OpenFin. It should also pass “--ignore-certificate-errors” since the MITM proxy uses a certificate that OpenFin will not recognize:

%openfinLocation%\OpenFinRVM.exe --config=%startupURL% --runtime-arguments="--proxy-server=%testable_proxy% --remote-debugging-port=%debuggingPort% --ignore-certificate-errors"

app.json

A standard OpenFin application config file is required. The only special requirement is to pass the --ignore-certificate-errors runtime argument when the proxy is used to capture metrics on network requests.

"runtime": {
    "arguments": "--enable-chromium-window-alert --allow-http-screen-capture --ignore-certificate-errors",
    "version": "stable"
}

test.js

The test specification uses Webdriver.io commands to drive the OpenFin application. See the test.js code for full details. There are a couple of OpenFin-specific steps: switching into the OpenFin application window and waiting for the OpenFin runtime to initialize.
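As a rough sketch of what those steps can look like with the synchronous Webdriver.io v5 API (the window title ‘Hello OpenFin’ is an illustrative assumption, not necessarily what the repo’s test.js checks for):

describe('OpenFin application', () => {
  it('switches into the application window once the runtime is up', () => {
    // Wait for the OpenFin runtime to initialize by polling until Chromedriver
    // reports at least one window handle.
    browser.waitUntil(() => browser.getWindowHandles().length > 0, 20000,
      'OpenFin runtime did not start in time');

    // Switch to the window whose title matches our application window.
    // 'Hello OpenFin' is an assumption used here for illustration only.
    for (const handle of browser.getWindowHandles()) {
      browser.switchToWindow(handle);
      if (browser.getTitle() === 'Hello OpenFin') break;
    }

    // From here on, regular Webdriver.io commands drive the application.
  });
});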

Turning it into a Testable load test

Start by signing up and creating a new test case using the New Test Case button on the dashboard.

Enter the test case name (e.g. OpenFin Demo) and press Next.

Scenario

Select Selenium as the scenario type.

Selenium Scenario

Let’s use the following settings:

1. Bindings: Webdriver.io.
2. Source: Version Control.
3. Repository: Add New… use Name = openfin-wdio-testable-example, Url = https://github.com/testable/openfin-wdio-testable-example.git.
4. Branch: master for Webdriver.io v5, wdio-v4 for Webdriver.io v4.
5. Webdriver.io Conf File: wdio.conf.js
6. Runtime Requirements: Windows. OpenFin is primarily built to run on Windows, and Testable’s test runner supports Windows. OpenFin can also run on Linux, and our demo repository supports that as well.
7. Browser Binary: OpenFin.
8. OpenFin Config URL: app_sample.json.

General Settings

Next, click on the Configuration tab or press the Next button at the bottom to move to the next step of configuring your test.

Configuration

Now that we have the scenario for our test, we need to configure our load parameters including:

1. Concurrent Users Per Location: Number of users that will execute in parallel in each region selected. Each user will execute the scenario (i.e. launch OpenFin and perform the test steps).
2. Test Length: Select Iterations to have each client execute the scenario a set number of times regardless of how long it takes. Choose Duration if you want each client to continue executing the scenario for a set amount of time (in minutes).
3. Location(s): Choose the location in which to run your test and the test runner source (see /guides/test-runners.html) that indicates which test runners to use in that location to run the load test. For OpenFin tests, you will need to select either AWS On Demand – Testable Account or your own AWS account.
4. Test Runners: For OpenFin Windows tests, only 1 virtual user will run on each EC2 instance. This is due to a limitation of the OpenFin runtime that prevents simulating multiple virtual users on the same instance.

For the sake of this example, let’s use the following parameters:

Test Configuration

And that’s it! Press Start Test and watch the results start to flow in.

View Results

Once the test starts executing, Testable will distribute the work across the EC2 instances that it spins up.

Test Results

In each region, Testable runs the OpenFin test with 5 virtual users across 5 EC2 instances, each executing the scenario 3 times with a 10 second pause between iterations.

The results will include screenshots, assertions, traces, performance metrics, logging, breakdown by URL, analysis, comparison against previous test runs, and more.

That’s it! Go ahead and try these same steps with your own scripts and feel free to contact us with any questions.

Load testing Kafka with Node.js and Testable

Apache Kafka is a distributed streaming platform that allows applications to publish and subscribe to streams of records in a fault-tolerant and durable way. In this blog we will look at how we can use Node.js along with Testable to load test a Kafka cluster and produce actionable results that help us understand how well our cluster scales and how many nodes it will need to handle the expected traffic.

A bit of important background first (feel free to read more on the Apache Kafka site). Kafka records are grouped into topics, with each record consisting of a key, a value, and a timestamp. Each topic is split into one or more immutable, ordered partitions, and the partition a record is written to is chosen based on its key. Each partition is replicated for fault tolerance.

Before we load test our Kafka cluster we need to decide what traffic to simulate and how to go about measuring success, performance degradation, etc.

All of the code and instructions to run the load test discussed in this blog can be found on GitHub (kafka-example-loadtest). The project relies heavily on the kafka-node module for communication with the Kafka cluster.

Let’s get load testing!

What traffic to simulate

We need to first decide on a few things in terms of what each virtual user in our test is going to do. A virtual user in this case represents an application that connects to the Kafka cluster.

  • Topics: The number of topics to publish to across all the virtual users and what topic names to use. If the topics are being created on the fly you need to also decide on the number of partitions and replication factor.
  • Producers per topic: How many producers will publish records to each topic.
  • Consumers per topic: How many consumers will subscribe to each topic.
  • Publish frequency: How often to publish records to each topic.
  • Message size: How many bytes of data should be in the value for each record, and whether the value should be randomly generated, hard-coded, or chosen from a predefined list of valid messages.

Metrics to determine success

While running the load test you should certainly be monitoring the health of the servers in the Kafka cluster for CPU, memory, network bandwidth, etc.

From our load test we will also monitor a variety of standard metrics like throughput and success rate, but measuring latency for Kafka is more difficult than for a typical API or website test. This is because the Kafka client library opens a long-lived TCP connection for communication with the Kafka cluster, so “time to first byte received” (the typical definition of latency) and connection close time are both fairly meaningless for measuring Kafka performance.

Instead we will capture several custom metrics in our test that give us a more useful view of the performance of the Kafka cluster:

  • End-to-end (E2E) Latency: We want to measure the time from when a message is published to a Kafka topic until it is consumed by a consumer. As we scale up the number of virtual users this number should hold steady until the Kafka cluster gets overwhelmed.
  • Messages Produced: The number of messages produced overall and per topic. This number should scale up linearly as we scale up the number of virtual users. Once that no longer holds we know that we have reached a throughput issue with the Kafka cluster.
  • Messages Consumed: The number of messages consumed overall and per topic across all consumers. This number should also scale up linearly as we scale up the number of virtual users.

How does the script work?

Let’s break down the test.js file that each virtual user will execute with Node.js during our load test. We will skip over the initialization and setup part of the script, which is straightforward. Each virtual user will produce to and consume from a certain number of topics depending on the load test configuration (discussed in the next section).
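For reference, the setup looks roughly like the sketch below. It is a simplification, not the repo’s exact code: in the real script these values come from the Testable test parameters passed in run-testable.sh (params[...]), and the KAFKA_HOST environment variable fallback is purely an assumption for local experimentation.

// Rough sketch of the script setup (illustration only, not the repo's exact code).
const kafka = require('kafka-node');           // Kafka client library
const randomstring = require('randomstring');  // generates random message values

// results, log, info and execution are provided by the Testable runtime
// (with local fallbacks via testable-utils) and are used in the snippets below.

// Hard-coded here for illustration; the real script reads these from the
// Testable test parameters.
const kafkaHost = process.env.KAFKA_HOST || 'localhost:9092';
const topics = 10;                    // total number of topics
const producersPerUser = 5;           // producers this virtual user simulates (see Step 2)
const msgFrequencyMsPerTopic = 500;   // publish interval (ms)
const minMsgSize = 100;               // message value size range (bytes)
const maxMsgSize = 500;
const myDuration = 15 * 60 * 1000;    // how long this user keeps producing (ms)
const startedAt = Date.now();

// Topics follow the topic.<n> naming convention described in Step 1. For
// simplicity this sketch has every user produce to and consume from all topics.
const topicsToCreate = [];
for (let i = 0; i < topics; i++) topicsToCreate.push('topic.' + i);
const producerTopics = topicsToCreate;
const consumerTopics = topicsToCreate;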

Step 1: Create the topics

Our script auto-creates the topics if they do not exist, using the naming convention topic.0, topic.1, etc. We hard-code both the number of partitions and the replicationFactor to 1. Feel free to change these as required.

const client = new kafka.KafkaClient({
    kafkaHost: kafkaHost
});
const admin = new kafka.Admin(client);
admin.createTopics(topicsToCreate.map(t => ({ topic: t, partitions: 1, replicationFactor: 1 })), (err, res) => {
    // ... test continues here
});

Step 2: Set up the producer connection and start publishing messages with the current timestamp as the key

We wait for the ready event from the producer before we start publishing. The key for each message is the current timestamp, which allows us to easily measure end-to-end latency in the consumer. We randomly generate each message value to be between minMsgSize and maxMsgSize bytes. We keep producing messages until myDuration milliseconds have passed, at which point we wait 5 seconds to let the consumers receive every record and then close and clean up all connections.

We use the testable-utils custom metrics API to increment a counter on each message published, grouped by topic.

const producer = new kafka.Producer(new kafka.KafkaClient({
  kafkaHost: kafkaHost
}));

producer.on('ready', function(err) {
  if (err)
    console.error('Error waiting for producer to be ready', err)
  else {
    log.info('Producer ready');
    produce();
  }
});

function produce() {
  for (var j = 0; j < producersPerUser; j++) {
    const topic = producerTopics[j % producerTopics.length];
    const msgLength = Math.ceil((maxMsgSize - minMsgSize) * Math.random()) + minMsgSize;
    results(topic).counter({ namespace: 'User', name: 'Msgs Produced', val: 1, units: 'msgs' });
    producer.send([ {
      topic: topic,
      messages: [ randomstring.generate(msgLength) ],
      key: '' + Date.now()
    } ], function(err) {
      if (err) {
        log.error(`Error occurred`, err);
      }
    });
  }
  if (Date.now() - startedAt < myDuration)
    setTimeout(produce, msgFrequencyMsPerTopic);
  else
    setTimeout(complete, 5000);
}

Step 3: Consume messages and capture the E2E latency

Each virtual user forms its own consumer group for consuming messages. To get a unique group name we use the test run ID (i.e. execution) and the unique global client index assigned to each virtual user at test runtime. Both default to 0 when run locally.

We use the message key to figure out the end-to-end latency from producing the message until it is consumed and record that as a custom timing metric. And just like we recorded a count of messages produced, we also capture a counter for messages consumed, grouped by topic.

const consumer = new kafka.ConsumerGroup({
  groupId: execution + '.user.' + info.globalClientIndex,
  kafkaHost: kafkaHost,
  autoCommit: true,
  autoCommitIntervalMs: 1000,
}, consumerTopics);

consumer.on('message', function(message) {
  results(message.topic).counter({ namespace: 'User', name: 'Msgs Consumed', val: 1, units: 'msgs' });
  results(message.topic).timing({ namespace: 'User', name: 'E2E Latency', val: Date.now() - Number(message.key), units: 'ms' });
});

Step 4: Close all connections

Once the desired duration has passed we need to close all connections to end the test:

producer.close(function() {});
consumer.close(function() {});
client.close(function() {});

Running the test locally

The kafka-example-loadtest project comes with a script to easily run the test locally. You simply need Node.js 8.x+ installed.

./run-local.sh [kafka-url]

This will run one virtual user and print out any custom metrics to the command line.
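For example, against a broker running locally on Kafka’s default port (assuming the script takes a host:port bootstrap address as its argument):

./run-local.sh localhost:9092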

Running the load test on Testable

Now let’s run the test at scale on the Testable platform. You will need to sign up for a free account and create an API key. The test parameters in the start script were chosen to fit within the limits of the free account so that anyone can run this test.

The test will run for 15 minutes, starting at 5 virtual users and stepping up to 50 virtual users by the end. It will publish to 10 topics with 5 producers and 5 consumers per topic. Each producer will publish a randomly generated message of 100-500 bytes every 500ms. The load will be generated from 1 t2.large instance in AWS N. Virginia (us-east-1). The test results will use a custom view by default that features our custom metrics (E2E Latency, Msgs Produced, Msgs Consumed).

To run the test:

export TESTABLE_KEY=xxx
./run-testable.sh [kafka-url]

The script will output a link where you can view the results in real time, including the custom metric widgets described above.

The run-testable.sh script can be customized with different parameters as required. All of the available API parameters are documented in Testable’s API documentation.

curl -s -F "code=@test.js" \
  -F "start_concurrent_users_per_region=5" \
  -F "step_per_region=5" \
  -F "concurrent_users_per_region=50" \
  -F "duration_mins=15" \
  -F "params[kafkaHost]=$1" \
  -F "params[topics]=10" \
  -F "params[producersPerTopic]=5" \
  -F "params[consumersPerTopic]=5" \
  -F "params[msgFrequencyMsPerTopic]=500" \
  -F "params[minMsgSize]=100" \
  -F "params[maxMsgSize]=500" \
  -F "conf_testrunners[0].regions[0].name=us-east-1" \
  -F "conf_testrunners[0].regions[0].instance_type=t2.large" \
  -F "conf_testrunners[0].regions[0].instances=1" \
  -F "testcase_name=Kafka Load Test" \
  -F "conf_name=5-50 Concurrents 1 Instance" \
  -F "scenario_name=Node.js Script" \
  -F "view=@kafka-view.json" \
  https://api.testable.io/start?key=$TESTABLE_KEY

And that’s it! Play around with all the parameters to adjust the load profile.