Performance Test Strategy


For a long time, Quality Assurance was executed after development, and performance testing was an activity carried out only when the software was ready for production.

If a performance issue was found, most companies would:

  • Fix the issue, which means a complete new cycle, including QA tests and performance tests, is required,
  • Or put the software live and decide to fix it as part of ongoing development,
  • Or borrow from the future. That's technical debt.

Let’s be fair: this approach isn't optimal.

The root cause is that performance testing is not considered viable until the software has reached a certain level of maturity.

The DevOps approach is exactly the opposite: performance test early.

This is known as shift-left performance testing, and it works. It just requires Quality Assurance teams to think a little differently about performance testing, and to have a performance test strategy in place to support it.

Let’s explore how to integrate load testing as part of your software development strategy.


Methodology

Choosing the Right Tool

This is extremely important: use performance testing tools that integrate easily into Jenkins pipelines, as this is a key part of our shift-left performance testing strategy.

JMeter and Gatling are good options, and both are open source: they have lots of community support, are free to use, and offer many opportunities to give back to the open source community. There are many sources of information on how these tools can be integrated into Jenkins pipelines, so the mechanics of integrating them will not be discussed here.

Using Maven to support your Continuous Integration strategy, by building and organising your development patterns, is also important, and this is something OctoPerf can support with its Maven Plugin.

Self Contained Tests

Build a performance test for each service or piece of functionality. Make tests self-contained so they are portable across environments:

  • Create the data you need,
  • Use it in your test,
  • And remove it afterwards.
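The create/use/remove lifecycle above can be sketched in shell, with a temporary file standing in for real test data (in practice you would seed and delete data through the application's API):

```shell
# Create the data you need (a temp file stands in for seeded test data)
DATA_FILE=$(mktemp)
echo "user1,secret1" > "$DATA_FILE"

# Use it in your test (here we just count the rows we seeded)
ROWS=$(wc -l < "$DATA_FILE")
echo "rows used by test: $ROWS"

# And remove it afterwards, leaving the environment clean
rm -f "$DATA_FILE"
```

Because the test owns its data from creation to deletion, it can run in any environment without pre-existing state.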

As soon as a service is promoted from development, run its performance test in all your test environments, at production levels of load and concurrency if your test environments can support this.

If they cannot, then scale your test down to volumes and concurrency that can be supported.

Eventually, you will have a number of individual performance test scripts regularly running against a code base that is constantly evolving. This guarantees your test scripts also evolve and remain stable.

Use the Simplified Property Function throughout your tests to abstract users, load, duration, environment, etc. away from hard-coded values in your test scripts; see this article on Flexible and Configurable Test Plans for information on how to do this.
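In JMeter this abstraction is typically done with the `__P` property function. A sketch of Thread Group fields referencing properties, with hypothetical property names and defaults:

```
Number of Threads : ${__P(users,10)}
Duration (seconds): ${__P(duration,300)}
Server Name       : ${__P(host,localhost)}
```

Each `__P(name,default)` call reads the property `name` if supplied (for example via a properties file or `-J` on the command line) and falls back to the default otherwise.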

Let’s assume you now have multiple performance test scripts, each testing an individual service, user journey, database call, or any other component that is part of your application under test.

If these were run in parallel, then we would have a performance test. It might not cover full functionality, as some of it may still be in development, and it might not run in an environment where full production volumes can be achieved.

You have run a test against an evolving application, but a performance test has been run nonetheless. Analysis of the results can:

  • Help developers address performance issues early,
  • Give them confidence that their design patterns work,
  • And ensure that the connection pooling design is correct and performs well.

This type of information early in the delivery cycle is invaluable.

Test Early - Fail Fast

This is a common DevOps mantra and fits in perfectly with this performance testing strategy.

Running performance tests early in the software delivery process highlights issues that can influence common development practices and coding techniques, meaning that application quality is hugely improved.

Discovering performance issues early in the delivery process is always cheaper in terms of resources and time to fix.

Test Script Flexibility

We have built tests, we are running them regularly, and we are therefore maintaining them regularly. As the code changes, you immediately know whether your tests work against the latest version, and if not, you fix them and re-run.

Performing script maintenance on a daily basis resolves another big problem with large-scale performance tests that are run infrequently.

It can take a while for scripts written against an earlier version of the code to work with the latest version, and sometimes tests have to be completely re-written because the script maintenance work would take longer.

The scripts now serve multiple purposes, they can be:

  • Run in isolation to test components and services,
  • Run in parallel to support early performance and load testing.

And, if correctly parameterised, they are flexible enough to run in multiple environments under multiple load profiles. Let's see how!

Execution Patterns

Run your tests in your Jenkins pipelines: maybe not triggered on each code check-in, but certainly at least once a day if a code change has been made.

You can utilise the OctoPerf Jenkins Plugin to run load tests at scale on the OctoPerf Platform.

You will start to see response time trends and find it easy to spot anomalies in system performance.

Run your tests in parallel as often as possible, perhaps at the end of each sprint or when a significant piece of functionality has been developed and is available to test.

Work with the functional Quality Assurance team to share knowledge and results.

Encourage all members of the programme team to take accountability for performance quality by sharing knowledge of how to build and execute scripts.

Formal Performance Testing

Whilst your performance testing strategy should be aimed at de-risking performance from the very early stages of the programme, you will at some point need to run a more formal set of tests.

You will want to run a business peak hour test, soak test the application, and push the system to its limits with a scalability test.

To demonstrate how simple this can be with well-maintained, regularly run, parameterised performance test scripts, we will work through an example of these scripts serving multiple performance testing purposes.

Concrete Example

Let’s assume you have 4 tests that cover:

  • 1x application user journey,
  • 2x internal rest service requests,
  • 1x web service request.

You maintain and execute these daily, maybe in the Jenkins pipeline, maybe locally or on a remote server.

They cover the full business functionality of your application.

You are unlikely to have this low a number of tests, but this is just for our example.

Each test has 4 Simplified Property Functions, managed with a properties file that controls:

  • the service or application URL,
  • the number of users,
  • the duration,
  • the throughput per minute.

An example of how this can be achieved can be found in our article on Flexible and Configurable Test Plans. Also, you will find a property file example attached here.

Let’s create an example file structure to hold our performance tests. Remember, we have abstracted the values from the tests.

Top Level Folder Structure
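The original screenshot is not reproduced here; based on the folders discussed in this post, a sketch of the top-level tree might look like:

```
jmeter_repo/
├── tests/                  # the JMeter .jmx test scripts
├── data/                   # supporting data files
├── service-test/           # properties files for the individual service tests
├── peak-hour-load-test/    # properties files for the peak-hour scenario
└── soak-test/              # properties files for the soak scenario
```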

For our example, we will use JMeter tests and these will be saved under the tests folder.

Tests Folder Structure

Our data folder contains any supporting data files. Now our performance strategy is to be able to run these tests to support multiple test scenarios.

This is achieved through our data abstraction.

Let’s say that our service-test folder contains the properties files that support the individual service tests: the ones we mentioned earlier in this post, whose objective is to de-risk the performance of each service or user journey during development.

In here we would have 4 files that contain the values for each service test.

Properties Files Folder Structure

We could run these using a shell script on Linux, a batch file on Windows, or a tool such as the Taurus command line interface; equally, we could run them from a Jenkins pipeline script.

Firstly, let’s look at an example of how the properties files may look.

Properties File
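A minimal sketch of such a file, with hypothetical names and values for the four abstracted settings:

```
# service-test/journey-test.properties (hypothetical values)
host=uat.mycompany.example
users=10
duration=600
throughput=60
```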

Let’s assume that we have 3 more similar files, and that executing them sequentially would test each of our 4 user journeys and services.

Before we show you how we can use this technique to build many different testing scenarios, let’s look at ways of executing these service tests.

As mentioned above, there are many ways to execute these tests, but for our example we will use a Groovy script that can be run as a Jenkins pipeline.

Jenkins Pipeline Groovy
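A sketch of the full stage, assembled from the fragments broken down below (the file-listing line is an assumption about the original script):

```groovy
stage('JMeter Performance Testing') {
    dir("${WORKSPACE}/jmeter_repo/service-test") {
        // List the files in the properties folder (assumed helper step)
        def files = new File("${WORKSPACE}/jmeter_repo/service-test").listFiles()
        files.each { fileinfo ->
            if (fileinfo.name.endsWith(".properties")) {
                // Strip the ".properties" suffix (11 chars) and run the matching
                // .jmx in non-GUI mode with this properties file
                sh '/usr/bin/jmeter -n -t ../tests/' + fileinfo.name[0..-12] + '.jmx -p ' + fileinfo.name
            }
        }
    }
}
```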

Let’s break down the script.

It’s a stage in a Jenkins pipeline.

stage('JMeter Performance Testing') {

Our file structure discussed above is located under jmeter_repo and we are working in the service-test folder where our properties files discussed above are located.

dir("${WORKSPACE}/jmeter_repo/service-test")

We iterate over the files in this folder.

files.each { fileinfo ->

If the name ends with .properties.

if (fileinfo.name.endsWith(".properties")) {

Run the JMeter script in non-GUI mode (the -n flag, required for command-line runs) with the properties file as a command line option.

sh '/usr/bin/jmeter -n -t ../tests/' + fileinfo.name[0..-12] + '.jmx -p ' + fileinfo.name

Let's break this command line down:

-t ../tests/' + fileinfo.name[0..-12] + '.jmx

This is the relative path to the tests folder. Because the test name is the same as the properties file name, all we need to do is strip the 11-character .properties suffix from the file name (that is what the [0..-12] slice does) and replace it with .jmx.
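The same suffix swap can be sketched in shell to make the slice arithmetic concrete (the file name is hypothetical):

```shell
# Swap the ".properties" suffix for ".jmx", as the Groovy slice does
name="journey-test.properties"
jmx="${name%.properties}.jmx"   # drop the 11-character suffix, append .jmx
echo "$jmx"                     # journey-test.jmx
```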

-p ' + fileinfo.name

The final part is the properties file that we are using.

We have just run a job that executes each service test in sequence against our application.

Now let’s imagine that under our peak-hour-load-test folder we have the same 4 files, only with different values. Below is an example of what one may look like.

Revised Properties File
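A sketch of one revised file, with hypothetical peak-hour values (same property names, larger volumes):

```
# peak-hour-load-test/journey-test.properties (hypothetical values)
host=uat.mycompany.example
users=200
duration=3600
throughput=3000
```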

And if we make a subtle change to our groovy script file.

Revised Groovy Script

We have changed the directory we are now using:

dir("${WORKSPACE}/jmeter_repo/peak-hour-load-test") {

and added an ‘&’ to the end of the line that performs the execution, which, under Linux, means execute in the background. We have also added a wait:

sh 'wait' 

to wait for all tests to finish before exiting the script. Note that for wait to see the background jobs, the ‘&’ commands and the wait must run in the same shell invocation, so in practice the whole loop and the wait belong in a single sh step.
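The background-and-wait pattern can be sketched with stand-in commands (sleep replaces the jmeter runs):

```shell
# Launch two stand-in "tests" in the background
sleep 1 &
sleep 1 &
# wait blocks until every background job of this shell has finished
wait
echo "all tests finished"
```

Both sleeps run concurrently, so the whole script takes roughly one second rather than two; with real tests, this is what turns a sequential run into a parallel load scenario.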

So we have now demonstrated that, by using the same performance test scripts with different properties files in different folders, we can create and run performance scenarios from the same set of test assets.

More Flexibility

So far we have demonstrated flexibility against the same set of performance test scripts. Before wrapping up this post, let’s see how flexible we could really make these tests in a DevOps world.

Let’s assume that we are using Jenkins pipelines to trigger our performance testing; we then have the option to pass parameters into them.

We’ll add two string parameters to our Jenkins job: one for the test type and one for the environment.

Jenkins String Parameters
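In a declarative pipeline, the two parameters might be declared like this (the defaults and descriptions are assumptions; the names match the folders used below):

```groovy
parameters {
    // Hypothetical defaults; values are supplied when the job is triggered
    string(name: 'test-type', defaultValue: 'soak-test', description: 'Scenario folder to run')
    string(name: 'env-name', defaultValue: 'uat', description: 'Environment sub-folder')
}
```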

We’ll also add a further level of diversity to our file system: environment folders to hold our properties files. As an example, let’s assume that under soak-test you had 3 environment folders.

Revised Folder Structure
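Based on the environments mentioned in this post, the revised tree might look like this (uat and pre-prod are named in the text; prod as the third folder is an assumption, anticipating the deployment check discussed later):

```
jmeter_repo/
└── soak-test/
    ├── uat/        # properties files tuned for the UAT environment
    ├── pre-prod/   # properties files tuned for pre-production
    └── prod/       # single-user deployment-check properties files
```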

And that under each environment folder you had a set of properties files.

Environment Properties File

These properties files could contain an alternative url, or maybe alternative volumes and users if the environment had more capacity and resources.

We could for example have:

UAT Environment Properties File

In the uat folder and:

Pre-Prod Environment Properties File

In the pre-prod one. If we now change our groovy script file to use our string parameters:

Environment Variable Groovy File

Where we now use the job parameters instead of hard-coded values. Note that parameter names containing hyphens cannot be referenced as bare Groovy variables, so they are read from the params map:

dir("${WORKSPACE}/jmeter_repo/${params['test-type']}/${params['env-name']}") {

As long as you maintain your performance test scripts, you can run any test you want in any test environment, which is what agile performance testing is all about.

There is also the possibility of a production deployment check: you could have a prod entry in your folder structure with single-user, one-iteration tests to prove that your deployment has been successful.

Prod Environment Properties File
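A sketch of such a deployment-check file, with hypothetical values for a single-user, single-pass run:

```
# prod/journey-test.properties (hypothetical single-user check)
host=www.mycompany.example
users=1
duration=60
throughput=1
```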

Conclusion

Test early, fail fast makes better software. Build early, maintain regularly. When you need your tests, they are available and work against the latest code base.

Re-use your test scripts: they are your assets, you maintain them, and you can use them in different ways under different levels of load to support your whole performance strategy. Embrace DevOps and the opportunities it offers to performance testing.
