Uncommon Performance Testing

In this blog post we are going to look at some uncommon performance tests. By this we mean scenarios that are not commonly executed but are instead run periodically at best.

These uncommon scenarios should not necessarily take priority over the more common performance scenarios. They do add value by stressing parts of your application under test that may be missed by the more conventional tests.

We will discuss each scenario in turn and look at the benefits and some of the difficulties you may experience in designing these scenarios. We will also take time to give examples of when these scenarios would be useful.

Performance Response Times

When performance testing, you need a set of requirements to measure your response times against. When defining these, you should do so together with your end users or business teams.

It is relatively easy to predict the volumes, load and users that will use your application, as you will no doubt have some data from your current systems. It is a lot harder to agree on what the response times of your application should be. Without this critical measure we really cannot say for certain that an application or service performs acceptably, because we are not measuring the response time under load against a value that has been agreed upfront.

We need to be able to answer these questions:

  • At what point does a response time become unacceptable?

  • How can response times be defined in the requirements gathering stage?

  • How can we ensure there is a measure to test against?

In this post we are going to look at how we might categorise response times based on the end-user interaction model, and use these categories to build response-time requirements as part of your wider non-functional requirements.
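One way to make such requirements testable is to express them as data and check measured percentiles against them. The sketch below is a minimal illustration: the category names and threshold values are hypothetical examples of figures you would agree with the business, not a standard.

```python
# Minimal sketch: response-time requirements as data, checked against
# measured samples. Categories and targets are illustrative assumptions.
from statistics import quantiles

# Hypothetical 95th-percentile targets (seconds) per interaction type.
REQUIREMENTS = {
    "interactive": 1.0,   # e.g. a button click updating the page
    "navigation": 3.0,    # e.g. moving between screens
    "batch": 30.0,        # e.g. generating a report
}

def p95(samples):
    """95th percentile of a list of response times."""
    return quantiles(samples, n=100)[94]

def meets_requirement(category, samples):
    """True if the measured 95th percentile is within the agreed target."""
    return p95(samples) <= REQUIREMENTS[category]

# Example: measured response times (seconds) for an interactive action.
measured = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.5, 0.6, 0.7, 0.8]
print(meets_requirement("interactive", measured))
```

The point is that once a target exists as an agreed number, a pass/fail check like this can run automatically after every test execution.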

Performance Testing in Production

In this blog post we are going to discuss performance testing in production.

Now, before you think we have gone mad and lost our minds completely, this is not as crazy as it sounds.

Production is an environment that:

  • Is sized accurately,
  • Has a suitable diversity of data,
  • Has the correct data volumes,
  • Is on the correct infrastructure.

All the things you spend a significant amount of time getting right in your performance testing environment and that can be difficult to achieve.
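A common safeguard when generating load in production is to tag synthetic requests so they can be identified downstream and excluded from business reporting. The sketch below illustrates the idea; the header name is a hypothetical project convention, not a standard, and you would adapt it to your own monitoring setup.

```python
# Minimal sketch: marking synthetic performance-test traffic so it can
# be filtered out of production dashboards. Header name is illustrative.
SYNTHETIC_HEADER = "X-Synthetic-Test"  # hypothetical convention

def tag_request(headers, test_run_id):
    """Return a copy of the request headers marked as synthetic traffic."""
    tagged = dict(headers)
    tagged[SYNTHETIC_HEADER] = test_run_id
    return tagged

def is_synthetic(headers):
    """Downstream services and dashboards can exclude tagged traffic."""
    return SYNTHETIC_HEADER in headers

headers = tag_request({"Accept": "application/json"}, "perf-run-042")
print(is_synthetic(headers))
```

Tagging like this is what makes production testing tractable: real and synthetic traffic share the same infrastructure, but only the real traffic feeds business metrics.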

Performance Testing for Large-Scale Programmes

In this post we are going to look at performance testing on large scale programmes.

A few of the posts we write define techniques and approaches based on a single application under test, but sometimes you are faced with the prospect of performance testing:

  • A new solution that replaces several legacy applications,
  • A service migration from one cloud provider to another, or from one data centre to another,
  • An infrastructure update that covers multiple applications or services,
  • A new solution that complements and integrates with existing software.

Now, especially in the case of migration of services, performance is key, and you cannot afford to see a degradation in performance as the business users will have already become accustomed to the software and how it performs.

You can look to make it perform better but it is unlikely they will tolerate poorer performance just because you have migrated from one platform to another.

Equally, new solutions that replace legacy applications will (rightly or not) be expected to perform better than their predecessors, which is a challenge as your new solution will undoubtedly have a different workflow and approach to delivering what the end-users want.

These types of large-scale programmes can, on the surface, seem complex from a Quality Assurance perspective, so we have put together this guide to help you understand some of the techniques you can use to keep the performance testing aspect manageable rather than overwhelming. The sections below set out things to consider when performance testing large-scale programmes of work.
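For migrations in particular, the "no degradation" expectation can be turned into a concrete check by comparing samples from the legacy system against the new solution for the same business transaction. The sketch below is illustrative; the 10% tolerance is a hypothetical figure you would agree with stakeholders, and the sample values are made up.

```python
# Minimal sketch: comparing a legacy baseline against a migrated or
# replacement solution. Tolerance and sample data are illustrative.
from statistics import median

def degraded(legacy_samples, new_samples, tolerance=0.10):
    """True if the new solution's median response time is more than
    `tolerance` slower than the legacy baseline."""
    baseline = median(legacy_samples)
    current = median(new_samples)
    return current > baseline * (1 + tolerance)

legacy = [1.2, 1.3, 1.1, 1.4, 1.2]        # seconds, legacy application
replacement = [1.1, 1.2, 1.0, 1.3, 1.1]   # seconds, new solution
print(degraded(legacy, replacement))
```

Capturing the legacy baseline before decommissioning is the key step: once the old system is gone, there is nothing objective left to compare against.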

Open-Source Load Testing Tools: A Comparative Study

When using a testing tool, it is only logical to trust its results. And the more well-known the tool is, the more trust we put in it. Furthermore, how could we know it is wrong? After all, who is in a position to judge the judge? This phenomenon is particularly true in the load testing community, since the field is still something of a niche within the testing world. Finding deep-dive studies about the actual technical aspects of load testing is difficult.

Those observations led to the creation of this study. In this article, I will compare the results obtained for the exact same load test using four different open-source load testing tools: JMeter, Locust, Gatling and k6. These tools were chosen because they are among the most used and/or discussed in the community, but in the future the goal will be to add others, including tools that are not open source.

The goal of this comparison is not to point any fingers and decide which tool is right or wrong. The objective is to try to understand what we measure within each tool, and what it means for our performance tests.
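To see why tools can report different numbers for "the same" test, it helps to recall the closed-loop virtual-user model that JMeter, Locust, Gatling and k6 each implement in their own way: each simulated user repeatedly issues a request and times it, and where exactly the clock starts and stops is a per-tool design choice. The sketch below is a simplified stand-in for that loop, not any one tool's implementation; the user and iteration counts are illustrative, and a sleep stands in for a real HTTP call.

```python
# Minimal sketch of a closed-loop virtual-user model: each user runs a
# request loop and records elapsed time per request. Illustrative only.
import time
from concurrent.futures import ThreadPoolExecutor

def virtual_user(do_request, iterations):
    """One simulated user: call the request function and time each call."""
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        do_request()
        timings.append(time.perf_counter() - start)
    return timings

def run_load(do_request, users=5, iterations=10):
    """Run several virtual users concurrently and pool their timings."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = pool.map(lambda _: virtual_user(do_request, iterations),
                           range(users))
    return [t for user_timings in results for t in user_timings]

# Stand-in for a real HTTP call so the sketch is self-contained.
timings = run_load(lambda: time.sleep(0.001))
print(len(timings))
```

Every measurement decision hidden inside this loop (connection setup, think time, coordinated omission, when timing starts) is somewhere the tools can legitimately diverge, which is what this comparison sets out to examine.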