Business Testing in Production

In this blog post we are going to look at using JMeter to support business testing in production. This is a slightly different topic from the one covered in our post on testing in production, which is about running your performance testing in production for the reasons discussed there. This post focuses on how you can leverage your performance testing tools to support business testing; we will use JMeter in some of the examples, but any load testing tool, or even a functional testing tool, can provide the same benefits.

Business testing overview

Not all organisations run business tests in production; if yours does, you will already understand the benefits, and if it does not, this blog post may give you an appreciation of them. Once software is released into production the business often wants to run tests there to ensure that the system is still functionally stable and that it continues to perform. You could argue that you would know if your production systems were functionally unstable or performing badly: firstly they are in constant use, and secondly you should have monitoring set up to measure performance thresholds.

What you find is that some business functionality is only executed infrequently, and may not run on a regular basis. Equally, a small regression in performance is sometimes not noticeable, and without regular execution you are unable to see these trends. Sometimes your production systems have significant updates to static data, or monthly or annual batch jobs that change data sets; after these events you may want to run a series of tests to check for any detrimental effects. While these will be tested prior to go-live, keeping an eye on production through a series of regular tests is a good practice to have.

Topics we will discuss

  • Journey tests in production.
  • Providing data for your instrumentation tools, such as Dynatrace and AppDynamics.
  • Generating statistics for production monitoring of data surfaced in an analytics tool.
  • Generating test data for production monitoring; production dashboards sometimes need specific data available.
  • Supporting troubleshooting of production issues.
  • Sanity testing production after a release, especially with CI/CD.
  • Creating large volumes of realistic data for the first go-live.

Journey Tests in Production

A good example of testing in production is to build a journey test through the application and execute it on a regular basis. This allows you to regularly check that the application is behaving as expected. By running a test on a regular basis with known values you can check, for example, that:

  • Data being returned is always consistent.
  • Calculations are always correct.
  • Reports are accurate.
  • Consistent records are created.

The above are examples, but what we are aiming to demonstrate is that if you regularly run the same test with the same values then the data created by the test should always be consistent. If the results deviate from your expected state, you may have a production issue that you have identified quickly and can address. These tests can be run weekly, daily or even hourly to give you confidence that your production systems are consistently stable. Your journey tests should use a known and consistent set of data; you want to make sure that you can differentiate between real production data and your self-generated data. You do not want the data your test journeys generate to appear on company financial reports or be shown as live policies, for example.

By using constant data for your production journeys, you can ensure this data is excluded. You will have built a set of tests as part of your performance testing of the application, and these can be used to execute this journey test. Your JMeter tests will be robust and constantly maintained for each release, so using them to support this journey is a good way to get additional value from your performance test suite. You would clearly only execute these as a single user with a single iteration, as you do not want to place any additional load on production.
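As a minimal sketch of the idea (all names, fields and the `PERFTEST-` marker are illustrative, not from any particular system): a journey run with fixed inputs can be diffed against a stored set of expected values, and every record it creates can carry a recognisable marker so downstream reports can exclude it.

```python
# Sketch: compare a production journey's results against known expected
# values, and tag generated records so they can be excluded from reports.
# All names here are illustrative.

SYNTHETIC_MARKER = "PERFTEST-"  # prefix identifying self-generated data


def tag_reference(reference: str) -> str:
    """Prefix a record reference so it is recognisable as test data."""
    return SYNTHETIC_MARKER + reference


def is_synthetic(reference: str) -> bool:
    """True for records created by the journey test, not real customers."""
    return reference.startswith(SYNTHETIC_MARKER)


def check_journey(expected: dict, actual: dict) -> list:
    """Return a list of discrepancies between expected and actual values.
    An empty list means the journey behaved consistently."""
    problems = []
    for field, want in expected.items():
        got = actual.get(field)
        if got != want:
            problems.append(f"{field}: expected {want!r}, got {got!r}")
    return problems


# Example: a quote journey run with the same inputs every day
expected = {"premium": "120.50", "status": "QUOTED", "product": "HOME-01"}
actual = {"premium": "120.50", "status": "QUOTED", "product": "HOME-01"}
assert check_journey(expected, actual) == []
assert is_synthetic(tag_reference("POL123"))
```

In a JMeter test plan the same checks would typically live in Response Assertions against the fixed expected values, with the marker injected into any created record's reference field.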

Test data for Instrumentation

Many organisations use tools to instrument their applications; examples are Dynatrace and AppDynamics. These tools, as well as supporting your testing activities, are primarily used to actively monitor production. They rely on a regular ingest of data, because some of their monitoring compares results not against a baseline set of values but against historical data.

An example of this is how many of these tools monitor performance: rather than comparing against a set threshold, they flag changes that deviate significantly from recent results. It is all about changes against last known response times, and if you do not have a regular stream of data being produced these instrumentation tools are not as effective as they could be. Building and running tests against these infrequently used business functions ensures that your production monitoring tooling is as effective as possible. This is very similar to the journey tests mentioned in the section above. Reusing existing performance tests, or even creating new ones, is a useful way to support this type of production testing.
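The deviation-from-history idea can be sketched in a few lines. This mirrors the principle only; the actual baselining algorithms inside tools like Dynatrace and AppDynamics are the vendors' own and considerably more sophisticated.

```python
# Sketch: flag a response time that deviates from its own history by more
# than a chosen number of standard deviations, rather than breaching a
# fixed threshold. Illustrative only, not any vendor's real algorithm.
from statistics import mean, stdev


def is_anomalous(history, latest, n_sigma=3.0):
    """True if `latest` sits more than n_sigma standard deviations
    away from the mean of the historical samples."""
    if len(history) < 2:
        return False  # not enough history to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) > n_sigma * sigma


# Regular journey runs keep this history populated; without them the
# baseline goes stale and genuine shifts are harder to spot.
history = [410.0, 395.0, 402.0, 398.0, 405.0, 400.0]  # recent times, ms
assert not is_anomalous(history, 404.0)  # within normal variation
assert is_anomalous(history, 650.0)      # a genuine shift
```

The point the sketch makes is that the check is only as good as the history feeding it, which is exactly why regularly exercising infrequently used functions matters.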

Analytical tools

Analytical tools provide insights into your production systems, examples being:

  • Volumes
  • Frequency
  • Geographical location of users
  • Trends
  • Patterns

How can running tests in production help with this, you may be wondering? The answer is: for sanity testing the metrics you are reporting on. Consider that you have built a set of clever visuals and reports based on your production throughput; how do you know they are correct? Reporting is only as good as the data it represents, and it can be difficult to ensure that what you are representing is accurate.

By having a set of tests that run in production using specific data values or covering specific journeys, you can ensure that this very specific data exists in the data sets that your reports use. This can then be used as a sanity check of the underlying data that your reports present. Data tables change over time as applications evolve, and this simple trick of loading known data using automation will ensure that any changes are picked up. You can query your data sets because you will have known values, generated by you, to search for.
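A sketch of such a sanity check, assuming hypothetical sentinel references seeded by your tests and a report built from rows with a `reference` field:

```python
# Sketch: verify that known, self-generated sentinel records appear in
# the data set a report is built from. Table and field names, and the
# PERFTEST- references, are illustrative.

SENTINELS = {"PERFTEST-ORD-001", "PERFTEST-ORD-002"}  # seeded by our tests


def missing_sentinels(report_rows):
    """Return the sentinel references absent from the report data.
    A non-empty result suggests the report's underlying query has
    drifted away from the tables our known data lands in."""
    seen = {row["reference"] for row in report_rows}
    return SENTINELS - seen


rows = [
    {"reference": "ORD-90001", "amount": 250.0},
    {"reference": "PERFTEST-ORD-001", "amount": 10.0},
    {"reference": "PERFTEST-ORD-002", "amount": 10.0},
]
assert missing_sentinels(rows) == set()
assert missing_sentinels(rows[:2]) == {"PERFTEST-ORD-002"}
```

Because the sentinel values are generated by you, the same check also confirms that the exclusion filters discussed earlier are applied before the figures reach any financial reporting.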

Production monitoring dashboards

This is not dissimilar to the section above on analytical tools. You will have management information data that you extract from production, and like the analytical data you will want to ensure it is accurate. You can use the same techniques as discussed above to ensure your dummy data is visible where it should be, in the correct reports at the correct frequency. This gives you confidence that reporting which may inform any number of business decisions is correct and accurate. For production monitoring and data analytics, the re-use of performance tests running under low volumes can give you a variety of data that allows you to confirm your reporting queries are accurate. It also gives your performance tests a regular execution, which ensures they are always kept up to date and ready for their primary purpose of performance testing.

Troubleshooting production issues

It is inevitable that you will get application issues in production at some point. We are not suggesting these will all be performance related; they may be functional issues, but if they are performance related then your performance tests can help. Obviously if they are functional then the functional testing team needs to replicate them and determine the root cause manually, and your performance tests will probably be of limited use. These production issues can be difficult to replicate in a test environment for any number of reasons.

These reasons can vary from data differences or volumes to software versions to platform differences. If the issue cannot be replicated in a test environment, or it happens infrequently in production, then it may only be possible to track down the root cause in production. You may need to write a new test to support the investigation of such a production issue, and if so it is worth adding it to your performance regression testing suite. We have made reference to this already, but if you are running performance tests in production because there is no alternative then we have a blog post that is worth reading. Any new tests you write to support this work subsequently improve the coverage of your performance testing.

We have already discussed that you may not be able to replicate the production issue in a test environment, and as you have a problem in production you will want to fix it as quickly as possible. Once you have found a fix, or in parallel with your production investigation, you should try to determine why you could not replicate it. Spending time working this out is useful because you may have a discrepancy between your performance test environment and production, and may have missed other potential issues in your performance testing. By learning from performance issues that do find their way into production you improve the quality of your testing and environments, and subsequently fewer issues are missed in future releases.

Continuous Integration / Continuous Delivery

There are several good blog posts that discuss the principles of CI/CD methodologies and performance testing on our website:

These are all worth a read if you employ these techniques in your organisation. When you are continually developing and releasing to production you need performance testing in your pipelines. While having performance tests in lower environments is sensible, having them in production is equally so. A small performance test running in production as part of your deployment pipelines is sensible, and these tests need to be as non-intrusive as possible. Examples include:

  • API GET tests
  • Search
  • Basic navigation
  • Retrieving lists
  • Retrieving reports

As long as these are run under load and the response times meet your expectations against your non-functional requirements, they give you a quick post-release sanity check.
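A minimal sketch of such a pipeline gate, assuming hypothetical endpoints and thresholds (your real non-functional requirements and sample collection, e.g. from a JMeter results file, would replace both):

```python
# Sketch: a post-deployment gate that fails the pipeline if sampled
# response times breach the non-functional requirements. Endpoints and
# thresholds are illustrative.

NFR_THRESHOLDS_MS = {        # agreed non-functional requirements
    "/api/customers": 500,
    "/api/search": 800,
}


def gate(samples):
    """Return breaches as 'endpoint: observed vs allowed' strings.
    An empty list means the deployment passes the smoke check."""
    breaches = []
    for endpoint, limit in NFR_THRESHOLDS_MS.items():
        worst = max(samples.get(endpoint, [0.0]))
        if worst > limit:
            breaches.append(f"{endpoint}: {worst:.0f}ms > {limit}ms")
    return breaches


samples = {"/api/customers": [210.0, 260.0], "/api/search": [640.0, 910.0]}
result = gate(samples)
assert result == ["/api/search: 910ms > 800ms"]
```

A non-empty result would fail the pipeline step, keeping the check non-intrusive (read-only requests, small sample counts) while still guarding the release.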

Data creation

This is a bit of an obscure one, but it is often overlooked. Performance tests by their very nature normally create data. If you need to create data in your production environment before go-live, or need seed data after a release that introduces new functionality, then your performance tests may be able to help.

Consider wanting to migrate from one system to another without a robust migration strategy: your performance tests could load data into your new system based on extracts from your old system. Your tests replicate business processes, meaning that the data you create will be accurate across all the application interfaces, as you would be using the functionally tested business creation process. The advantage is that this can be multi-threaded and run at high volumes, significantly reducing the time the data load takes.
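The fan-out can be sketched as follows; `create_record` here is a stub standing in for the real, functionally tested creation call (in practice the parallelism would come from JMeter thread groups driving that call), and all field names are illustrative:

```python
# Sketch: load extracted records through the business creation process
# in parallel. `create_record` is a stub for the real API call your
# journey tests already exercise.
from concurrent.futures import ThreadPoolExecutor


def create_record(record):
    """Stand-in for the functionally tested creation call (e.g. an HTTP
    POST driven by your test tool). Returns the new reference."""
    return f"NEW-{record['legacy_id']}"


def load_all(extract, threads=8):
    """Fan the extract out across worker threads, preserving order."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(create_record, extract))


extract = [{"legacy_id": f"OLD-{i:04d}"} for i in range(5)]
refs = load_all(extract)
assert refs == [f"NEW-OLD-{i:04d}" for i in range(5)]
```

Because each record flows through the same creation process as a real user journey, the loaded data stays consistent across every interface the process touches, which is the point of reusing the tests rather than inserting rows directly.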


Using your automated tooling in production can be daunting. Hopefully this blog post has given you some insight into how you can leverage these tools, especially JMeter and your performance tests, to make a difference to your organisation's approach.
