

Performance Tester Diary - Episode 2

This is Episode 2 of our journey of non-functional testing with Otto Perf. You can read Episode 1 if you missed it.

Author's note

The full article is available on the OctoPerf Blog pages.

Otto had previously reached the point where he had defined a set of non-functional requirements and risk-assessed them with the programme team. He had also been involved in the design process, which allowed him to encourage the architects and development leads to treat application performance as one of their architectural design principles. In this episode we will follow Otto's progress as he considers how to write his performance tests and how to build them incrementally in line with development activity. Otto will also consider how to make sense of the performance test results he gathers and how to ensure that the test infrastructure is fit for purpose.

Performance Tester Diary - Episode 1

This is the first in a series of posts that follow the fictional performance tester Otto Perf. Otto manages all aspects of performance testing for OctoCar, a fictional company that specialises in selling used cars.

OctoCar are building a new system to manage the selling of their fleet of cars, and we are going to follow Otto on his journey as he ensures that the new technology platform performs in line with agreed non-functional requirements under peak volumes of load and concurrency.

We wanted to document the journey of performance testing through the delivery lifecycle of a new application from start to finish and look at how the non-functional testing process could be followed as the application is delivered.

Our blog posts normally focus on a single aspect of the non-functional testing lifecycle and do not always consider the wider picture of non-functional testing, something that this series of posts will do.

Volume Testing

In this post we are going to look at the importance of volume testing. We are going to consider how this type of non-functional test can have a significant impact on how you size and scale your production environment, based on the evidence it provides. We are going to look at some real-world examples of how getting your volume testing right ensures that your environments are neither oversized nor undersized. Both scenarios have a cost, which is both financial and potentially reputational.

Definition of volume testing

From Wikipedia.

Volume testing belongs to the group of non-functional tests, which are a group of tests often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain amount of data to assert the system performance with a certain amount of data in the database. Volume testing is regarded by some as a type of capacity testing, and is often deemed necessary as other types of tests normally don't use large amounts of data, but rather typically use small amounts of data.

We use this test to understand the impact of large data volumes on your application under test. Larger data volumes in the database can:

  • Increase the query time of your SQL statements, which impacts response times,
  • Impact the amount of CPU or memory your database uses, meaning this test can be used to size that component,
  • Impact the size of responses to end-user and API requests (search requests being a good example), which in turn impacts application CPU and memory.

A volume test looks not only to understand the performance of your SQL against an indicatively sized database, but also to understand how much CPU and memory is required across your full stack to support peak volumes of load and concurrency. The results you gather from this type of non-functional testing are important in determining how your application needs to be sized in production to support future data volumes.
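To make this concrete, below is a minimal Python sketch of the kind of preparation a volume test involves: populating a table to an indicative size and then timing a representative query against it. The schema, row count and query are hypothetical, and SQLite is used only to keep the example self-contained; a real volume test would run against a production-like database and schema.

```python
import sqlite3
import time
import random
import string

# Hypothetical example: build an indicatively sized "cars" table and time a
# representative search query so the impact of row count on query time can
# be observed. A real volume test would target a production-like database.
ROW_COUNT = 500_000  # assumed target data volume

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cars (id INTEGER PRIMARY KEY, make TEXT, model TEXT, price INTEGER)"
)

makes = ["Ford", "Toyota", "BMW", "Audi", "Kia"]
rows = (
    (
        i,
        random.choice(makes),
        "".join(random.choices(string.ascii_uppercase, k=6)),
        random.randint(1_000, 50_000),
    )
    for i in range(ROW_COUNT)
)
conn.executemany("INSERT INTO cars VALUES (?, ?, ?, ?)", rows)
conn.commit()

# Time a representative search query against the populated table.
start = time.perf_counter()
result = conn.execute(
    "SELECT COUNT(*) FROM cars WHERE make = ? AND price < ?", ("Ford", 20_000)
).fetchone()
elapsed = time.perf_counter() - start
print(f"{result[0]} matching rows, query took {elapsed * 1000:.1f} ms")
```

Running the same query at several different row counts gives you evidence of how query time, and therefore response time, degrades as the data volume grows, which is exactly the evidence a volume test is designed to produce.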

Defining and Maintaining Performance Test Coverage

In this post we are going to look at performance test coverage. The functionality you choose to performance test can range from very little to most of the application under test, and both extremes are valid under the right circumstances. We have touched on what to performance test in other posts on the OctoPerf Blog, but only as part of wider discussions rather than as the subject of the post. This is an important topic and deserves a post devoted to it.

We are going to discuss the performance test coverage topic through a series of questions which we will explore in detail.

Playwright vs JMeter

In our quest to provide a comprehensive suite of tools for our load tester community, we at OctoPerf have developed a full integration with Playwright. Playwright is an end-to-end testing library that can automate real browser actions, which means you will be able to do exactly that on OctoPerf, on top of all the advanced performance monitoring and reporting features it already has.

In this article we will start by introducing browser-based testing to better understand the advantages it can provide to a load tester. We will then compare a load testing campaign run with browser-based Virtual Users (VUs) against one run with protocol-based VUs to fully understand the differences.
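As a taster of what a browser-based Virtual User does, here is a minimal sketch using the Playwright Python binding; the URL and selectors are hypothetical. Unlike a protocol-based VU, which replays HTTP requests, the script drives a real browser that downloads, renders and executes the page.

```python
from playwright.sync_api import sync_playwright

# Minimal sketch of a browser-based user journey of the kind a browser-level
# Virtual User executes: a real browser loads the page, renders it, and runs
# its JavaScript. The URL and selectors below are hypothetical.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()

    page.goto("https://example.octocar.test/search")  # hypothetical URL
    page.fill("#search-box", "used hatchback")        # hypothetical selector
    page.click("button[type=submit]")
    page.wait_for_selector(".result-card")            # wait for rendered results

    print(f"Page title after search: {page.title()}")
    browser.close()
```

The trade-off is resource cost: driving a real browser for each VU consumes far more CPU and memory on the load generator than replaying the underlying protocol, which is why the two approaches are often combined in a single campaign.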