Performance Tester Diary - Episode 2

This is Chapter 2 of our journey through non-functional testing with Otto Perf. You can read Chapter 1 if you missed it.

Author's note

The article is in the OctoPerf Blog pages

Otto had previously got to the point where he had defined a set of non-functional requirements and risk-assessed these with the programme team. He had also had the opportunity to be involved in the design process, which had allowed him to encourage the architects and development leads to consider application performance as part of their architectural design principles. In this chapter we will follow Otto's progress as he considers how he is going to write his performance tests, and how he will write them incrementally in line with the development activity. Otto will also consider how he can make sense of the performance test results he gathers and how he will ensure that the infrastructure is fit for purpose.


Chapter 2.

Performance test strategy

The design of the application was nearing completion, and it would not be long before Otto would be able to start writing performance tests. While he was confident in what he had already achieved, he still wanted to give the programme a little more information on how he would approach the performance testing, how the tests would be run and how he would provide traceability of results back to requirements. Otto had recently read a blog post on how to approach performance testing of large-scale programmes and had been encouraged by the fact that he had already completed several of its recommendations around non-functional requirements and risk assessments.

Author's note

There is an article on this very subject in the OctoPerf Blog pages

The blog post went on to talk about iterative development, which he knew the programme would follow as it was being built using agile principles, and about how you can build performance tests at a service or feature level and test these in isolation. As the application grows, these isolated tests can then be combined to form scenarios that run not only in isolation but in parallel.

This intrigued Otto, as he was trying to determine the best way to build performance tests for code that would be built and released iteratively, rather than either waiting to test the application once a considerable amount of development had completed, or testing the full application only when development had finished. Neither of those sounded ideal. Following some more online investigation he found another article that complemented the one above and gave some useful insights into building performance tests as each feature was being developed. It also discussed the principle of making these tests self-contained, so that each test created the data it needed, ran the performance test and then removed the data after the test completed.

Author's note

There is an article on this very subject in the OctoPerf Blog pages

This was not something that Otto had considered previously, but it made sense. He could build a performance test at a very low level as the development teams wrote code, and as he built more and more of these self-contained performance tests he could start to run them in parallel to build increasingly complex scenarios. The blog post went on to discuss the principle of running these tests daily, which meant not only that the tests continually monitored performance as the application evolved, but also that they were regularly maintained: if the application changed as development progressed, the tests could easily be fixed and re-run.
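A self-contained test of that kind might look something like the sketch below. It is a minimal illustration in plain Python against a hypothetical REST API (the endpoints, payloads and timings are invented); in JMeter the same create/run/clean-up lifecycle is typically modelled with setUp and tearDown Thread Groups.

```python
import time

import requests  # any HTTP client would do

BASE_URL = "https://test-env.example.com/api"  # hypothetical environment


def create_test_data(n):
    """Set up: create the records this test needs, nothing more."""
    ids = []
    for i in range(n):
        resp = requests.post(f"{BASE_URL}/accounts", json={"name": f"otto-{i}"})
        resp.raise_for_status()
        ids.append(resp.json()["id"])
    return ids


def run_load(ids, duration_s=60, pacing_s=1.0):
    """Run: exercise the feature at a fixed pacing, recording response times."""
    timings = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        for account_id in ids:
            start = time.monotonic()
            requests.get(f"{BASE_URL}/accounts/{account_id}")
            timings.append(time.monotonic() - start)
            time.sleep(pacing_s)
    return timings


def remove_test_data(ids):
    """Tear down: delete the data so the test leaves no residue behind."""
    for account_id in ids:
        requests.delete(f"{BASE_URL}/accounts/{account_id}")


if __name__ == "__main__":
    ids = create_test_data(10)
    try:
        timings = run_load(ids)
        print(f"{len(timings)} samples, slowest {max(timings):.3f}s")
    finally:
        remove_test_data(ids)  # always clean up, even if the run fails
```

Because the test owns its own data, it can be run daily, in any order, and alongside other tests without one run polluting the next.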

This meant that Otto would be able to feed back to the Scrum team how the application was performing as it was being developed, and because he would be building tests as the code was written, any performance-related issue he found could be fixed immediately. Otto decided that this was a really good way to approach a programme following agile principles and raised it with the Scrum team during a regular stand-up.

The team agreed that this sounded like a sensible approach.

Load profiles

Otto had his strategy for developing tests and then executing them as the code was developed. He was excited to start scripting, and he knew it would not be long. Otto was, however, still a little unsure how he would translate the non-functional requirements he had gathered into a performance test. If he was building tests as each feature was being developed, at what rate should he run them? He had the raw transaction numbers from the non-functional requirements, but these needed to be turned into transactions per minute or transactions per second so that the load was accurate.

He was also aware that his strategy involved regular execution of all the performance tests being built, and he therefore had to make sure the tests were repeatable and consistent, as he would be using the response times to determine whether the features being developed met their non-functional requirements. Otto was also aware that while he was building individual tests as code was developed, once a number of features were complete he would need to start running tests in parallel and would need to build performance tests, load tests, soak tests, scalability tests and so on.

Whilst he was aware of the concepts behind these types of test, he was keen to find out more about them. After some investigation Otto found some blog posts which gave him a clear understanding of how he could translate his non-functional requirements into a load profile he could regularly execute, and which reinforced his understanding of what the more complex tests should look like once he had built up enough individual self-contained tests.

Author's note

The article on load profiles is in the OctoPerf Blog pages. The article on performance test types is in the OctoPerf Blog pages.
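To make the arithmetic concrete (the figures below are invented for illustration): a requirement of 18,000 transactions in a two-hour peak is 18,000 / 7,200 = 2.5 transactions per second, and the pacing per virtual user follows from how many users share that load. A small sketch:

```python
def load_profile(transactions_per_peak, peak_hours, virtual_users):
    """Translate a raw NFR transaction count into TPS, TPM and per-user pacing."""
    peak_seconds = peak_hours * 3600
    tps = transactions_per_peak / peak_seconds
    tpm = tps * 60
    # Each virtual user must start one transaction every pacing_s seconds
    # for the combined load to hit the target rate (users / pacing = tps).
    pacing_s = virtual_users / tps
    return tps, tpm, pacing_s


# Invented example: 18,000 transactions over a 2-hour peak, 25 virtual users.
tps, tpm, pacing = load_profile(18_000, 2, 25)
print(f"{tps:.2f} tps, {tpm:.0f} tpm, one transaction per user every {pacing:.1f}s")
```

Running the example gives 2.50 tps, 150 tpm, and a pacing of one transaction every 10 seconds per user.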

Results analysis

Otto was confident that he now understood exactly how he would execute his tests, but he was also keen to understand how he could take the response times he was gathering from his JMeter tests and analyse them, to ensure he was reporting an accurate picture of performance back to the development teams. He was aware that he would eventually be producing a significant amount of result data, as the tests would be running daily and he would be developing more and more tests as features became code complete.

How would he deal with this? Otto decided to see what articles were available online to help him. He first found a blog post that gave him a clear view of how he could use the features JMeter provides natively to help analyse his tests. This was a good starting point for Otto, as he was sometimes confused by what the native reporting meant.

Author's note

There is an article on this very subject in the OctoPerf Blog pages

Otto was keen to expand his knowledge of results analysis. He had some knowledge of data analysis and understood the principles of statistical analysis, but felt he could do with refreshing that knowledge. He knew that you sometimes get anomalies in response times when testing under load, and was aware that you should measure transaction times using a percentile calculation rather than the maximums, but his knowledge here was a little sparse.

He found an article on statistical analysis specifically related to performance testing, and the content was exactly what he was looking for.

Author's note

There is an article on this very subject in the OctoPerf Blog pages
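The difference percentiles make is easy to see on a small data set. A minimal sketch using Python's standard library, with invented sample values:

```python
import statistics

# Invented response times in milliseconds: 39 typical samples plus one outlier.
response_times = [118 + (i * 7) % 20 for i in range(39)] + [2400]

p95 = statistics.quantiles(response_times, n=100)[94]  # 95th percentile
print(f"max = {max(response_times)} ms")  # 2400, dominated by a single outlier
print(f"p95 = {p95:.0f} ms")              # ~137, the typical worst case
```

One anomalous sample drags the maximum to 2400 ms, while the 95th percentile still reflects what most users actually experienced.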

In addition to the principles of statistical analysis, the article included a worked example of how to calculate the values from a data set. Otto completed his investigation into results analysis by reading an article on how to aggregate and compare multiple sets of test results and look for trends in performance response times.

Author's note

There is an article on this very subject in the OctoPerf Blog pages

This article gave a real-world example of how result data can be stored in and compared from a database, and how you can use your performance test result data to quickly and effectively look for performance improvement, parity or regression. Otto used the examples and managed to write a dummy test that output its results to a database for statistical analysis.

Once he started building his tests, he would use this technique to analyse the results.
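A minimal sketch of that technique, assuming an invented SQLite schema and invented transaction names, might look like this:

```python
import sqlite3
import statistics

conn = sqlite3.connect("perf_results.db")
conn.execute("""CREATE TABLE IF NOT EXISTS results
                (run_id TEXT, transaction_name TEXT, elapsed_ms REAL)""")


def store_run(run_id, samples):
    """Persist one run's (transaction_name, elapsed_ms) samples."""
    conn.executemany("INSERT INTO results VALUES (?, ?, ?)",
                     [(run_id, name, ms) for name, ms in samples])
    conn.commit()


def p95_by_transaction(run_id):
    """95th-percentile response time per transaction for a given run."""
    rows = conn.execute("SELECT transaction_name, elapsed_ms FROM results "
                        "WHERE run_id = ?", (run_id,)).fetchall()
    grouped = {}
    for name, ms in rows:
        grouped.setdefault(name, []).append(ms)
    return {name: statistics.quantiles(values, n=100)[94]
            for name, values in grouped.items() if len(values) >= 2}


def compare(baseline_id, candidate_id, tolerance=0.10):
    """Flag regression, improvement or parity against a baseline run."""
    base = p95_by_transaction(baseline_id)
    cand = p95_by_transaction(candidate_id)
    for name in sorted(base.keys() & cand.keys()):
        ratio = cand[name] / base[name]
        verdict = ("regression" if ratio > 1 + tolerance
                   else "improvement" if ratio < 1 - tolerance
                   else "parity")
        print(f"{name}: {base[name]:.0f} -> {cand[name]:.0f} ms ({verdict})")
```

With every daily run stored under its own run_id, spotting a trend becomes a query rather than a manual trawl through result files.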

Infrastructure sizing

After meeting with the development team, it became clear to Otto that while the application would eventually be monitored for CPU and memory resources, this would not be the case at the start of development.

Eventually both the non-production and production instances of the application would have real-time monitoring, but not at the start. Otto had read a blog post in which the author suggested that it is part of the responsibility of performance testing to help size the environment, and that performance testing on infrastructure unable to support the load could lead to misleading results.

Author's note

There is an article on this very subject in the OctoPerf Blog pages

Otto spoke with the architecture team, who stated that they had an indication of the size and quantity of application servers, database servers and so on, but that the application might behave differently under load. The architecture team spoke at length with Otto and demonstrated that, while the application was not yet covered by a production monitoring solution, it was possible to get CPU and memory metrics from the servers and to monitor the database and queue depths.

Otto now understood that when running performance tests, especially in an iterative development framework, you need to be able to monitor and change the infrastructure resources as you test: too few resources and response times will suffer; too many and your application will be oversized and cost more to run than necessary. This was something Otto had not always associated with performance testing, but he was now starting to understand that the two are very much linked.
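One lightweight way to capture such metrics alongside a test run is sketched below using the psutil library. It samples the machine it runs on, so real application servers would need an agent or remote collection instead; the file name and sampling cadence are arbitrary choices for illustration.

```python
import csv
import time

import psutil  # pip install psutil


def sample_resources(duration_s, interval_s, out_path="resources.csv"):
    """Record CPU and memory utilisation to CSV alongside a performance test."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            # cpu_percent(interval=...) blocks for the interval, so this
            # loop naturally samples once per interval_s seconds.
            writer.writerow([time.time(),
                             psutil.cpu_percent(interval=interval_s),
                             psutil.virtual_memory().percent])


sample_resources(duration_s=300, interval_s=5)  # every 5s for 5 minutes
```

Even a simple CSV like this, lined up against the test's response times, shows whether a slow transaction coincided with a starved server.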

Modularisation

The day had arrived: code was being developed, and Otto was eager to start writing and executing performance tests. After a week of writing and executing performance tests against code developed according to the programme's agile principles, the testing was going well, and Otto found that the research he had done into strategy, load profiles, results analysis and monitoring was paying dividends.

  • He had built several tests but was finding that the code was changing and that he had to update his tests regularly to account for an ever-changing application.
  • He was happy with this, as it was one of the trade-offs of testing early and finding issues that could be fixed easily, rather than testing late and finding issues that might not be fixable before go-live.
  • He had started to notice, though, that some features were used in multiple tests and that some of the samplers in many of his tests were duplicated.
  • He had not really thought about this until he needed to update these parts of the tests multiple times.
  • He felt there must be a way of building tests so that common functionality could be shared, meaning a change to the shared code only needed to be made in one place rather than many.

So, he spent some time investigating and found a blog post on modularisation and how to approach this using JMeter.

Author's note

There is an article on this very subject in the OctoPerf Blog pages

Otto worked through the examples in the article and it all made sense. He was quickly able to replace the common routines across many of his tests with fragments, and found that making a single change that affected many tests was a much better use of his time than updating each test individually.
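For readers who have not used JMeter's Test Fragments with Module or Include Controllers, the underlying idea is the same as sharing code through a module. A minimal Python analogy, with invented transaction names and endpoints:

```python
# common_transactions.py -- shared steps maintained in one place,
# the plain-code analogue of a JMeter Test Fragment.
import requests

BASE_URL = "https://test-env.example.com"  # hypothetical environment


def login(session, user, password):
    return session.post(f"{BASE_URL}/login",
                        json={"user": user, "password": password})


def open_dashboard(session):
    return session.get(f"{BASE_URL}/dashboard")


# test_reporting.py -- one of many tests reusing the shared steps,
# the analogue of a Module Controller pointing at the fragment.
from common_transactions import BASE_URL, login, open_dashboard
import requests


def test_reporting_feature():
    with requests.Session() as session:
        login(session, "otto", "secret")
        open_dashboard(session)
        session.get(f"{BASE_URL}/reports/daily")
```

If the login flow changes, only common_transactions.py needs updating, and every test that reuses it picks up the fix.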

Conclusion

The techniques that Otto has employed in his approach to performance testing are beneficial for both agile and waterfall approaches to application development.

There is not one single correct way to build performance tests and equally no single strategy that fits all programmes.

We do believe that the techniques we have discussed and referenced in Otto's journey will help improve any programme when used correctly, and that considering some, if not all, of them will help you make your performance testing as efficient and effective as possible. Join us soon for Chapter 3 in the performance testing adventures of Otto Perf.
