
Load Testing Blog

Uncommon Performance Testing

In this blog post we are going to look at some uncommon performance tests. By this we mean scenarios that, in our experience, are not commonly executed but are run periodically at best.

These uncommon scenarios should not necessarily take priority over the more common performance scenarios, but they do add value by stressing parts of your application under test that more conventional tests may miss.

We will discuss each scenario in turn, looking at the benefits and some of the difficulties you may experience in designing it, and we will give examples of when each scenario would be useful.

Continuous Delivery Test Reporting

We explained in a previous blog post on documentation why we believe that, if you are building performance tests to support Continuous Integration or Continuous Delivery, having to produce performance testing documentation before and after each test does not fit with this methodology.

This got us thinking: if we truly want to use an agile framework and continually push code to production, then we also need to consider how we monitor and report on the performance test results produced by our testing tools.

If you have an agile testing process, you will have functional tests built into your deployment pipeline that run in each environment. If that process is mature and robust, it deploys to production with no manual intervention once all the functional tests and deployment checks have passed.

In reality this process also needs a performance testing stage, because each release should go through a performance test, whether that is a full suite of peak volume, soak and scalability scenarios or a regression test of core functionality.

This is the part we are going to discuss in this blog post, because if your organisation wants to push code directly to production then you do not have the luxury of running your performance test, analysing the results and writing up a report to be signed off.
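One way to replace that manual sign-off is to have the pipeline check the results itself. Below is a minimal sketch of such a gate, assuming a JMeter results file written in the default CSV (JTL) format; the file name and the 800 ms threshold are illustrative assumptions, not figures from this post.

# Minimal sketch of an automated performance gate in a CI/CD pipeline.
# Assumes a JMeter CSV (JTL) results file with the default "elapsed" column;
# the path and the 800 ms threshold are illustrative assumptions.
import csv
import sys

RESULTS_FILE = "results.jtl"   # hypothetical file produced by the test stage
P95_THRESHOLD_MS = 800         # agreed response time requirement (assumed)

def percentile(values, pct):
    # Nearest-rank percentile of a list of numbers.
    ordered = sorted(values)
    index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[index]

elapsed_times = []
with open(RESULTS_FILE, newline="") as jtl:
    for row in csv.DictReader(jtl):
        elapsed_times.append(int(row["elapsed"]))

p95 = percentile(elapsed_times, 95)
print(f"95th percentile: {p95} ms (threshold {P95_THRESHOLD_MS} ms)")

# A non-zero exit code fails the pipeline stage, replacing a manual sign-off.
sys.exit(0 if p95 <= P95_THRESHOLD_MS else 1)

In practice the threshold values would come from the same agreed requirements we discuss later in this post, and the script would run as its own pipeline stage immediately after the load test completes.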

Updating JMeter Performance Tests with an XML parser

When building performance tests, we all understand the value of using properties or variables to store static values outside of our tests. This ensures that any change to these values need only be made in one place rather than in every test that uses them.

Sometimes, though, you may have inherited a suite of JMeter tests, or you were under pressure to develop them quickly and hardcoded values to do so. This means that if anything changes, whether an endpoint, the server name or even the payload of a sampler, then you need to update these static values in each of your tests.
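Because a JMeter test plan (.jmx file) is plain XML, one option is to script those updates with an XML parser rather than edit every test by hand. The sketch below assumes a hypothetical test plan and host names; "HTTPSampler.domain" is the property JMeter normally uses for an HTTP sampler's server name.

# Minimal sketch of updating a hardcoded server name across a JMeter test plan.
# The file name and host values are illustrative assumptions.
import xml.etree.ElementTree as ET

JMX_FILE = "test-plan.jmx"         # hypothetical test plan to update
OLD_HOST = "test-server.internal"  # hardcoded value we want to replace
NEW_HOST = "perf-server.internal"  # value it should now point at

tree = ET.parse(JMX_FILE)
for prop in tree.iter("stringProp"):
    # HTTP samplers store their server name in the "HTTPSampler.domain" property.
    if prop.get("name") == "HTTPSampler.domain" and prop.text == OLD_HOST:
        prop.text = NEW_HOST

tree.write(JMX_FILE, encoding="UTF-8", xml_declaration=True)

The same approach extends to other hardcoded values, such as ports, paths or sampler payloads, by matching the relevant property names in the same way.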

Performance Response Times

When performance testing you need a set of requirements to measure your response times against, and you should define them together with your end users or business teams.

It is relatively easy to predict the volumes, load and users your application will face, as you will no doubt have some data from your current systems. It is a lot harder to agree on what the response times of your application should be. Without this critical measure we cannot really say for certain that an application or service performs acceptably, because we are not measuring the response time under load against a value that was agreed upfront.

We need to be able to answer these questions:

  • At what point does a response time become unacceptable?

  • How can response times be defined in the requirements gathering stage?

  • How can we ensure there is a measure to test against?

In this post we are going to look at how we might categorise response times based on the interaction model with the end user, and how to use these categories to build response time requirements as part of your wider non-functional requirements.
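As a flavour of where this is heading, here is a minimal sketch of categorised response time targets. The category names and threshold values are illustrative assumptions only; the real figures should come out of the requirements gathering described above.

# Minimal sketch of response time requirements categorised by interaction model.
# Categories and thresholds below are assumptions for illustration, not agreed values.
RESPONSE_TIME_REQUIREMENTS_MS = {
    "user-interactive": 1000,   # e.g. an update the user is actively waiting on
    "page-navigation":  3000,   # e.g. moving between screens
    "background-batch": 30000,  # e.g. a report generated asynchronously
}

def within_requirement(category: str, measured_ms: int) -> bool:
    # True if a measured response time meets its category's agreed target.
    return measured_ms <= RESPONSE_TIME_REQUIREMENTS_MS[category]

# Example: a 2.4 second page navigation measured under load still passes.
print(within_requirement("page-navigation", 2400))  # True

Having the targets expressed this way gives the test a concrete value to assert against, which is exactly the measure we said was missing when response times have not been agreed upfront.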