Defining and Maintaining Performance Test Coverage

In this post we are going to look at performance test coverage. The amount of functionality you performance test can range from very little to most of the application under test, and both extremes are valid under the right circumstances. We have talked about what to performance test in other posts on the OctoPerf Blog, but as part of wider posts about performance testing rather than as the subject of the post itself. This is an important topic and deserves a post devoted to it.

We are going to discuss the performance coverage topic through a series of questions, each of which we will explore in detail in the sections below.


Is it a new application or is it a new release to an existing application?

This is a good question to start on.

If it’s a new application, something your organisation is building, then you will have to performance test the whole application. You do not need to performance test all of the functionality, as that is not the aim of performance testing, but you are going to have to risk assess the whole application to determine where the performance risks are.

There is a post about how you can approach risk assessment and one that will help you build a set of non-functional requirements. It is worth reading these if you are performance testing a new application, as they will help you understand what functionality you need to performance test and how to clearly document this to stakeholders. Once you understand what you need to performance test you can start to build your tests.
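As an illustration of the risk-assessment step, here is a minimal sketch of ranking candidate functionality by a simple likelihood × impact score. The functionality names and scores below are invented; your own risk model may weigh other factors.

```python
# Hypothetical example: ranking application functionality by performance risk.
# The transaction names, likelihood and impact values are invented.

def risk_score(likelihood: int, impact: int) -> int:
    """Simple risk model: likelihood of a problem (1-5) times business impact (1-5)."""
    return likelihood * impact

# Candidate functionality with estimated likelihood of a performance
# problem and the business impact if one occurs.
candidates = {
    "login": (4, 5),
    "search": (5, 5),
    "export_report": (2, 2),
}

# Rank functionality by risk; build performance tests for the riskiest first.
ranked = sorted(candidates, key=lambda name: risk_score(*candidates[name]),
                reverse=True)
print(ranked)  # highest-risk functionality first
```

The output of a ranking like this is also a useful artefact to share with stakeholders when documenting why some functionality is in scope and some is not.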

If it’s a new release to an existing application then you will already have performance tests that were built for the initial, or previous, releases. It’s also possible that you have a performance regression suite of tests that runs on a regular basis. If you don’t have a performance regression test suite then it’s worth reading this article to understand the benefits of creating one.

But it boils down to the same thing: you should only need to focus on the new functionality in terms of your performance testing coverage. Before we move on to the next section it is worth highlighting that if there has been a significant time since the last, or initial, application release then your technology development principles may have changed, and you may now be operating in a more Agile way. If this is the case there is an article here that will help you understand how you can move your performance testing into a Continuous Integration / Continuous Delivery mindset.
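As a sketch of what that mindset can look like in practice, the snippet below gates a build on a 95th percentile response time threshold. The sample timings and the 800 ms target are assumptions for illustration; a real pipeline would read the samples from your load testing tool’s results file.

```python
# Minimal sketch of a CI performance gate (assumed threshold and sample data).

def p95(samples: list) -> float:
    """95th percentile response time from a list of samples (ms)."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

def gate(samples: list, threshold_ms: float) -> bool:
    """Return True when the build may proceed."""
    return p95(samples) <= threshold_ms

# Example: fail the pipeline when the 95th percentile exceeds 800 ms.
samples = [120, 180, 200, 250, 300, 410, 520, 640, 750, 900]
print(gate(samples, threshold_ms=800.0))
```

In a real pipeline the script would exit non-zero when the gate fails, so the CI server marks the build as broken as soon as performance regresses.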

Do you have existing performance tests for the application?

This is a bit of an extension to the previous section. We touched on this above, but what we want to discuss here is that it’s possible that some of your existing performance tests are no longer relevant.

This is less likely to be the case if you have a performance regression suite, and you run it on a regular basis. But if you do not, then performance tests that were once required for an earlier release may no longer be required. This could be for a few reasons, some examples of which are:

  • The release you are testing has changed the functionality the test covered.
  • The business functionality is low volume in production, and you do not feel it needs a performance test.

It is always worth regularly checking what functionality your performance tests cover and determining whether the volumes of activity for this functionality are sufficiently high to keep each test active and maintained.
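One way to make this regular check systematic is to compare observed production volumes against a retirement threshold. The transaction names, volumes, and the threshold below are purely illustrative; your own figures would come from production monitoring.

```python
# Sketch: flag performance tests whose functionality now sees low production
# volume (test names and the hourly-volume threshold are invented).

production_volume_per_hour = {
    "create_order": 12000,
    "bulk_upload": 40,      # volume collapsed since the original release
    "search": 30000,
}

KEEP_THRESHOLD = 500  # assumed: below this, the test is a retirement candidate

def review_suite(volumes: dict, threshold: int) -> dict:
    """Classify each test as worth keeping or worth reviewing for retirement."""
    return {name: ("keep" if vol >= threshold else "review for retirement")
            for name, vol in volumes.items()}

print(review_suite(production_volume_per_hour, KEEP_THRESHOLD))
```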

You may be wondering why you would have created a performance test for low volume functionality in the first place. Sometimes with applications that provide new functionality, the business expectations that you base your performance requirements on do not match the reality of how the application is used. This is very common, and you will often find that expectations around volumes before an application or piece of functionality goes live do not always match reality.

Is this a small change or part of a wider deliverable?

When considering what coverage your performance tests need to include, the following should also be considered:

  • If it’s a small change, such as a compliance, security, or regulatory update, then it’s possible there is no need to add any additional performance tests to your existing suite.
  • If the change does not fundamentally alter the functionality, and the volumes and concurrency levels are low, then you may choose not to performance test it at all.
  • If the small change is part of a wider deliverable which is being deployed incrementally to your test environments, and the impact of the new change once all increments have been completed will affect load and performance, then it is worth including this functionality in your performance test suite: the earlier you test, the quicker you can determine if there is a performance issue.

Are you in a Waterfall or Agile CI/CD delivery cycle?

The reason this is a consideration when determining the coverage your performance testing should contain might not be obvious from the outset.

You may be thinking that, regardless of the delivery methodology used to deliver the functionality, the result will be the same, and therefore the coverage of your performance testing should be the same.

This is correct: the eventual coverage of your performance tests, once the functionality is fully delivered, will be the same whether you use Waterfall or Agile. But during the delivery process you may find you have to write more performance tests if your programme or project is delivering in an Agile way. When performance testing in an Agile framework the ambition is to run all testing, including performance testing, as early in the development lifecycle as possible.

Info

There are a couple of blog posts here and here outlining how you could approach this.

What tends to happen is that you build performance tests for each small component or unit of development to satisfy the Continuous Integration or Continuous Delivery framework requirements. You end up with a large number of isolated and targeted performance tests that may no longer be required once enough functionality has been delivered, because they are replaced with a single performance test that focuses on the functionality the multiple components or units deliver. So, while the end result in terms of performance testing coverage is the same, in an Agile framework more tests may be written and then discarded as they are replaced with a single all-encompassing performance test.
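The consolidation described above can be sketched as merging the per-component throughput targets into one end-to-end scenario. The service names and figures here are invented for illustration.

```python
# Sketch: replacing several isolated component tests with one end-to-end
# scenario whose load is the sum of the per-component targets.

component_tests = {
    "auth_service": {"requests_per_sec": 50},
    "basket_service": {"requests_per_sec": 30},
    "payment_service": {"requests_per_sec": 20},
}

def consolidated_scenario(tests: dict) -> dict:
    """Build a single scenario that supersedes the individual component tests."""
    total = sum(t["requests_per_sec"] for t in tests.values())
    return {"name": "checkout_end_to_end",
            "requests_per_sec": total,
            "replaces": sorted(tests)}

print(consolidated_scenario(component_tests))
```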

Is it commercial software?

Whether you can run any meaningful performance testing against 3rd party commercial software depends very much on your organisation’s relationship with the vendor. The critical question is where the software is hosted.

If the 3rd party is providing a managed service and hosting the application for you, then they may be unwilling to provide you access to performance test against the application. This may be down to the cost of hosting a performance-sized environment, or it may be the impact on their internal network of you running high volume tests. It may be that your application is hosted alongside other companies’ instances of the application, and they do not want your tests to impact live running services.

There are many reasons why this may be the case. If you are unable to execute performance tests, then while you will not have any performance test coverage of your own, you will be able to ask the 3rd party to provide performance metrics based on your expected load profiles. Therefore, the work you do to define the performance test coverage and the volumes you would run your tests at should still be documented and given to the 3rd party, so they can determine whether they can support these. If you can run performance tests, then this makes no difference to your performance testing coverage beyond determining whether it needs to be expanded.
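A load profile handed to a hosting vendor does not need to be elaborate. Here is a minimal, entirely hypothetical example of the kind of summary you might produce; all names and figures are invented.

```python
# Sketch: a load profile summary to share with a 3rd party vendor
# (all figures are illustrative placeholders).

load_profile = {
    "peak_concurrent_users": 400,
    "transactions": {
        "login": {"per_hour": 2400},
        "search": {"per_hour": 18000},
        "checkout": {"per_hour": 900},
    },
}

def summarise(profile: dict) -> str:
    """Render the profile as plain text for inclusion in vendor documentation."""
    lines = [f"Peak concurrency: {profile['peak_concurrent_users']} users"]
    for name, t in sorted(profile["transactions"].items()):
        lines.append(f"{name}: {t['per_hour']} per hour")
    return "\n".join(lines)

print(summarise(load_profile))
```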

Will the change significantly increase volumes or concurrency?

  • If you are implementing a change against an existing application or service and the change will make no difference to the volumes or concurrency of users, then there is no reason to change your current performance testing coverage.

  • If you already have a test that covers the functionality in your performance testing suite, then you considered it important enough to write a test for originally, and therefore you should continue to execute it.

  • If it was not originally considered in scope for performance testing and the volumes are not changing, then there is no reason to add one.

  • If it was not originally considered in scope but the change being implemented will see load and concurrency increase, then you need to re-assess and determine whether it should be added to your performance test coverage.

  • If you assess software changes robustly and follow your risk assessment process, you will not necessarily need to add new tests for every change.
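The bullets above amount to a small decision table, which could be sketched as follows; the flags and the recommendation wording are illustrative.

```python
# Sketch of the decision logic in the bullets above (flags are illustrative).

def needs_new_performance_test(already_covered: bool,
                               volumes_increase: bool) -> str:
    """Recommend an action for a change, given existing coverage and load impact."""
    if already_covered:
        return "keep executing existing test"
    if volumes_increase:
        return "re-assess and consider adding a test"
    return "no new test required"

# A change to uncovered functionality that increases load needs re-assessment.
print(needs_new_performance_test(already_covered=False, volumes_increase=True))
```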

Is it cloud based, i.e. easy to scale, or hosted on physical hardware?

This seems an odd question to ask in terms of how it will affect your performance testing coverage. It does need to be considered, though, as it is possible that you may not have a performance testing environment to test against.

Let’s assume this is a change to an existing application, and that performance testing originally took place against what would become the production environment. As this is now the production environment, you cannot performance test against it anymore.

Info

That said, and before we move on, there is an article that discusses how you could performance test in production should this be the only option.

But let’s assume for now that you have no performance test environment; then you should, as in the previous section, perform a risk assessment. Using this risk assessment, you can determine whether any changes being implemented would have an impact, and ensure you raise these as risks. So, while you are not physically running any performance testing, you are expanding your coverage, but with no way to execute the additions. Obviously, if the application is cloud based you can simply scale an environment and execute your full performance testing suite, including any additions.

Conclusion

We hope that this blog post gives some insight into how you can ensure that your performance test suite provides the correct amount of coverage, and some tips on how you can continue to ensure it remains relevant as your applications under test receive new software releases or hardware upgrades.
