
Performance Testing for Large-Scale Programmes

In this post we are going to look at performance testing on large-scale programmes.

A few of the posts we write define techniques and approaches based on a single application under test, but sometimes you are faced with the prospect of performance testing:

  • A new solution that replaces several legacy applications,
  • A service migration from one cloud provider to another, or from one data centre to another,
  • An infrastructure update that covers multiple applications or services,
  • A new solution that complements and integrates with existing software.

Now, especially in the case of migrating services, performance is key: you cannot afford a degradation in performance, as the business users will already have become accustomed to the software and how it performs.

You can look to make it perform better, but it is unlikely they will tolerate poorer performance just because you have migrated from one platform to another.

Equally, new solutions that replace legacy applications will (rightly or not) be expected to perform better than their predecessors, which is a challenge because your new solution will undoubtedly have a different workflow and approach to delivering what the end-users want.

These types of large-scale programmes can, on the surface, seem complex from a Quality Assurance perspective, so we have put together this guide to help you understand some of the techniques you can use to keep the performance testing aspect manageable rather than overwhelming. The sections below set out things to consider when performance testing large-scale programmes of work.

Open-Source Load Testing Tools: A Comparative Study

When using a testing tool, it is only logical to trust its results. And the more well-known the tool is, the more trust we put in it. Furthermore, how could we know it is wrong? After all, who is in a position to judge the judge? This phenomenon is particularly true in the load testing community, since the field is still something of a niche within the wider testing world. Finding deep-dive studies about the actual technical aspects of load testing is difficult.

Those observations led to the creation of this study. In this article, I will compare the results obtained for the exact same load test using four different open-source load testing tools: JMeter, Locust, Gatling and K6. These tools were chosen because they are among the most used and/or discussed in the community, but in the future the goal is to add others, including tools that are not open source.
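
For illustration, here is a minimal sketch of what such a shared scenario might look like in Locust (one of the four tools under comparison); the host, endpoint and class name below are placeholders, not details from the study itself:

```python
# locustfile.py - a deliberately simple scenario, so the equivalent
# request can be scripted identically in JMeter, Gatling and K6.
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    host = "http://localhost:8080"  # placeholder target system
    wait_time = between(1, 2)       # think time between iterations (seconds)

    @task
    def load_home_page(self):
        # One GET request per iteration; each tool should replay
        # exactly this behaviour for the results to be comparable.
        self.client.get("/")
```

Running this with `locust -f locustfile.py`, using the same user count and ramp-up as the other tools, means any difference in the reported metrics comes from the tools themselves rather than from the scenario.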

The goal of this comparison is not to point fingers and decide which tool is right or wrong. The objective is to try to understand what we measure with each tool, and what it means for our performance tests.

Performance Testing in Application Design

There are many articles on the huge benefits of integrating performance testing into the development process and on the concept of shift-left performance testing.

We have also discussed the concept of Load Test Driven Development, which involves creating performance tests in parallel with code development and sits alongside Test-Driven Development.

In this post we are going to consider how involving performance testing resources in the application design process can be beneficial.

Load Test Driven Development

We are going to explore whether Load Test Driven Development is an idea that would be worth pursuing for your organisation.

We will recap what Test-Driven Development (TDD) is in the next section, but fundamentally:

Test-Driven Development is a philosophy and practice that involves building and executing tests before implementing the code or a component of a system.

Now, when you think about this, does it make sense to try to run a performance test before we have developed any code?

We think it does, and we are going to explain why. For clarity, we are not suggesting that Load Test Driven Development should replace TDD, but rather that it should complement it.
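
As a minimal sketch of the idea, the performance expectation can be captured as an executable test before the real implementation exists, mirroring TDD's red/green cycle. The function name and latency budget below are hypothetical:

```python
import time

LATENCY_BUDGET_SECONDS = 0.2  # hypothetical non-functional requirement

def search_orders(customer_id: int) -> list:
    """Placeholder implementation. In Load Test Driven Development the
    test below is written first and fails until the real code meets
    the budget."""
    time.sleep(0.05)  # simulate work done by the eventual implementation
    return []

def test_search_orders_meets_latency_budget():
    # The performance requirement is expressed as a test that exists
    # before (or alongside) the implementation it measures.
    start = time.perf_counter()
    search_orders(customer_id=42)
    elapsed = time.perf_counter() - start
    assert elapsed < LATENCY_BUDGET_SECONDS, (
        f"search_orders took {elapsed:.3f}s; budget is {LATENCY_BUDGET_SECONDS}s"
    )

if __name__ == "__main__":
    test_search_orders_meets_latency_budget()
    print("Latency budget met.")
```

Written this way, the non-functional requirement is versioned, executable and visible to developers from the very first commit, rather than being checked only at the end of a delivery cycle.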

Risk Assessment in Performance Testing

Performance testing coverage needs to be defined from the application under test's functional requirements, and this does not change regardless of whether you are following an Agile or a more plan-driven approach.

The Risk Assessment process defines what performance testing needs to be executed and the order in which it should be approached. We use the functional requirements to define coverage because they describe what the system does, whereas the non-functional requirements are what you measure your tests against when you execute them.

As discussed, the Risk Assessment is a way of identifying where your performance testing effort should be focussed and of prioritising the order in which your performance tests are written and therefore executed, so that you address the riskiest aspects of the system, from a performance perspective, first; we will expand on what we mean by riskiest later in this post.
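
One simple way to make this prioritisation concrete is a likelihood-times-impact score per functional area; the area names and ratings below are purely illustrative:

```python
# Hypothetical Risk Assessment scoring: rate each functional area for
# likelihood of a performance problem and business impact (1-5 each),
# then test the highest-scoring areas first.
requirements = [
    {"area": "Batch invoice generation", "likelihood": 4, "impact": 5},
    {"area": "User login",               "likelihood": 2, "impact": 4},
    {"area": "Report export",            "likelihood": 3, "impact": 2},
]

for req in requirements:
    req["risk_score"] = req["likelihood"] * req["impact"]

# Highest risk first: this ordering drives which performance tests
# are written and executed first.
for req in sorted(requirements, key=lambda r: r["risk_score"], reverse=True):
    print(f"{req['risk_score']:>2}  {req['area']}")
```

With these sample ratings, batch invoice generation (score 20) would be tested first; the value of the exercise is that the ordering becomes explicit and open to challenge by the wider team.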

A Risk Assessment should be done as early as possible in the project, or in the sprint if you are working in an Agile methodology, to maximise its benefits.

The first thing we will discuss is the difference between Agile and non-Agile Risk Assessment. This is an important distinction, but only as far as what is assessed and when; the principles of how and why we carry out a Risk Assessment will become clear as you read this post, and they remain the same regardless of your approach to development.