
Performance Tester Diary - Episode 5
Introduction
This is Chapter 5 of our non-functional testing journey with Otto Perf, and the final chapter in Otto's performance testing of a new application for OctoCar.
In Chapter 4, Otto looked at ways to check the impact of his performance tests on the infrastructure the application was running on. He found out that the application had been instrumented with Dynatrace, which he could use to analyse all aspects of the application architecture.
He also discovered that, by adding a custom header to his JMeter test scripts, he could easily identify the transactions generated as part of his performance testing. The ability to create custom metrics based on his tests and to produce dashboards meant that Otto was in a very good place when it came to monitoring.
The introduction of application monitoring and a subsequent increase in server capacity uncovered an issue with Otto's reporting process. He had missed the fact that the application was regressing over time: because the transactions being measured were still within their agreed non-functional requirement tolerances, the gradual decline went unnoticed.
He did some work to ensure that he was tracking trends across his results so that transaction regression would be picked up in the future. This trend analysis also offered the ability to run tests at different times of the day and under alternative load profiles, and to compare the differences. In this chapter we will see development finish, and Otto will discover that the end of a programme is not the end of performance testing and learn about the benefits that creating a regression test pack can bring.
Otto will also find alternative uses for his performance tests outside of their primary purpose of generating load and concurrency, and will start to understand that the assets you create when building performance tests can benefit many other IT and non-IT related activities.
Chapter 5
Reuse of assets
The work on the new programme was coming to an end, and Otto felt the performance testing had gone very well. He had found several issues early, which had been fixed, and the code that had already been promoted to production was showing no signs of performance problems.
Otto was keen to document all he had learnt, as he felt the way the performance testing had been approached was a good example for future programmes. One thing that had been mentioned to Otto was a set of problems, unrelated to performance testing, that were troubling the infrastructure team and the user acceptance test team.
Otto had recently been reading an article on how tests created as part of performance testing can provide additional benefits outside of their original purpose.
Author's note
The article is in the OctoPerf Blog pages
This had been something Otto was keen to embrace: he was sure that all the effort he had invested in the scripts must have further uses once the programme started to wind down, and it seemed he could apply some of what he had learnt in the article to these problems. He had also started to think about what a performance regression test might look like and how it should be approached, until he got sidetracked by these other issues.
Otto decided that he would think about regression testing later. He met with the user acceptance testing team, who were struggling because the test environment they were using had only a small amount of good-quality test data. They needed to test a month-end process before the next code release and did not know how they could generate the volumes of data required. After the meeting, Otto realised that his existing test scripts, because they included creating new data, could easily be used to support the data creation process.
He set about understanding the data volumes the user acceptance team wanted and very quickly and easily built a test that would generate them. Otto scheduled this into his existing performance testing pipelines, and over the course of a few days he ran his performance test for the sole purpose of data generation.
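As a rough illustration, a data-generation run might look something like the sketch below, written with the open-source jmeter-java-dsl library (us.abstracta.jmeter:jmeter-java-dsl). The endpoint, payload and volumes are hypothetical placeholders; the point is that an existing create-data sampler can be re-run at modest concurrency purely to accumulate records.

```java
// A rough sketch only: reusing an existing "create record" sampler for bulk
// data generation. Endpoint, payload and volumes are hypothetical placeholders.
import static us.abstracta.jmeter.javadsl.JmeterDsl.*;

import org.apache.http.entity.ContentType;

public class UatDataGeneration {

  public static void main(String[] args) throws Exception {
    int targetRecords = 50_000;  // volume agreed with the UAT team (assumed figure)
    int threads = 10;            // modest concurrency: the goal is data, not load
    int iterationsPerThread = targetRecords / threads;

    testPlan(
        threadGroup(threads, iterationsPerThread,
            httpSampler("https://uat.octocar.example/api/customers")
                // ${__UUID} keeps each generated record unique
                .post("{\"name\": \"UAT Customer ${__UUID}\"}", ContentType.APPLICATION_JSON)
        )
    ).run();
  }
}
```

Because the goal is data rather than load, the thread count stays deliberately low; the run simply takes as long as it takes to reach the target volume.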
The infrastructure team's problem was a completely different proposition: they were looking for a way to sanity-check the production environment on a regular basis, and for a way to ensure that the tolerances set in their production monitoring solution were accurate and would alert if exceeded. The production environment was still ramping up and had not yet reached its seasonal peaks, so they had no way of knowing whether their alerting was correct until later in the year.
The fact that this needed to be done in a live production environment did cause Otto some concerns, and he wondered if there were any articles on the subject. Luckily, Otto stumbled on an article about performance testing in production, which he read.
Author's note
The article is in the OctoPerf Blog pages
After reading this, and re-reading the one on the hidden benefits of performance testing, he thought he could help the infrastructure team. For the sanity-checking problem, Otto took a few of his tests that did not create data and changed the load profiles so they ran with only a single user for a single iteration.
Otto reasoned that a sanity test of the environment did not need to create any new data; it could simply query existing data using some of the many non-intrusive tests he already had. He spoke with the infrastructure team about how he could identify the transactions generated by his tests using a custom header, a technique Otto discussed in Chapter 4.
The infrastructure team could create a custom metric from this header and track the transactions to ensure they were successful, which would act as a check that the environment was available and stable. A pipeline was built and scheduled to run daily in production before the online day started, and the dashboards were created.
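A minimal sketch of what such a sanity test might look like is shown below, again using jmeter-java-dsl. The header name and URLs are made up: a single user runs a single iteration of read-only requests, tagged with the custom header so that a monitoring tool such as Dynatrace can derive a custom metric from it.

```java
// Minimal sketch of a daily production sanity check: a single user running a
// single iteration of read-only requests, tagged with a custom header so the
// monitoring tooling can identify and track the synthetic transactions.
// Header name and URLs are hypothetical.
import static us.abstracta.jmeter.javadsl.JmeterDsl.*;

public class ProductionSanityCheck {

  public static void main(String[] args) throws Exception {
    testPlan(
        threadGroup(1, 1, // one user, one iteration: checking availability, not load
            httpSampler("https://prod.octocar.example/api/health")
                .header("X-Perf-Test", "sanity-check"),
            httpSampler("https://prod.octocar.example/api/customers/search?name=smoke")
                .header("X-Perf-Test", "sanity-check")
        )
    ).run();
  }
}
```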
Otto was pleased that he had managed to find several other purposes for his performance test scripts. The infrastructure team's other problem took a bit more thought: he was confident that he could run a performance test in production to simulate the seasonal peak load profile and allow the infrastructure team to check their alerting tolerances were correct, but he did not want to create large volumes of test data in production.
After talking with the business, an agreement was reached: provided the generated data used a dummy customer that could be excluded from all production reporting and batch processing, and the test avoided some of the more complex reporting user journeys, a production level of load could be generated in production. The load would have to run out of hours, and certain criteria had to be met beforehand, such as a full database backup and a halt of all batch schedules, to ensure that the production environment could be recovered if necessary.
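The production peak test might be sketched as follows, with the thread counts, durations, endpoint and dummy customer identifier all hypothetical: the load ramps to the assumed seasonal peak and holds there while the infrastructure team watches their alerting.

```java
// Sketch of the out-of-hours production peak test: ramp to an assumed seasonal
// peak, hold it while the infrastructure team observes their alerting, and only
// ever write data against the agreed dummy customer that is excluded from
// production reporting and batch processing. All figures and IDs are hypothetical.
import static us.abstracta.jmeter.javadsl.JmeterDsl.*;

import java.time.Duration;
import org.apache.http.entity.ContentType;

public class SeasonalPeakProductionTest {

  // Dummy customer agreed with the business (hypothetical identifier)
  private static final String DUMMY_CUSTOMER_ID = "PERF-TEST-0001";

  public static void main(String[] args) throws Exception {
    testPlan(
        threadGroup()
            .rampTo(200, Duration.ofMinutes(10)) // build up to the assumed seasonal peak
            .holdFor(Duration.ofHours(1))        // hold while alert tolerances are checked
            .children(
                httpSampler("https://prod.octocar.example/api/orders")
                    .header("X-Perf-Test", "seasonal-peak")
                    .post("{\"customerId\": \"" + DUMMY_CUSTOMER_ID + "\", \"item\": \"test\"}",
                        ContentType.APPLICATION_JSON)
            )
    ).run();
  }
}
```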
All the necessary prerequisite activities were completed, and Otto scheduled a seasonal peak load to run overnight, monitored in real time by the infrastructure team, who were very happy that it validated their alert monitoring. The production environment suffered no adverse effects, and the business was so impressed that they suggested this could happen more often if there was a use case for it.
"I must start thinking about regression testing," thought Otto.
Regression testing
Otto finally had time to consider performance regression testing. Now that the programme was winding down, he wanted to implement it to ensure that performance tests were run against the application on a regular basis. While the programme was winding down, development work was not stopping; it would slow considerably, to a small feature release each quarter, and the team had said that they wanted the ongoing performance testing to be run by Otto.
Otto was also about to start work on another programme, one in which he hoped to employ all the same techniques he had used on this one. Now responsible for the performance of two applications, he needed a way to ensure that the small number of changes being built for the application he was just completing were regularly tested without devoting all his time to it.
He knew that there would be occasions when he would need to write new tests or update existing ones, but this would not happen on a regular basis. He was looking for some guidance on the best approach to regression testing and found a good article on the approach and some of the benefits it provides.
Author's note
The article is in the OctoPerf Blog pages
After reading the article, Otto decided that the best frequency for his performance regression testing was weekly. He would schedule the tests to run on a Sunday morning, which would give him a full week of development between test executions and allow him to analyse the results on a Monday morning.
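One way to wire this up, sketched below under assumed thresholds and endpoints, is to express the regression run as a JUnit test with jmeter-java-dsl, so a CI server can schedule it for Sunday mornings and fail the run if an agreed non-functional requirement is breached.

```java
// Sketch of the weekly regression run expressed as a JUnit test, so a CI server
// can schedule it (e.g. a Sunday-morning cron trigger) and fail the run when an
// agreed non-functional requirement is breached. The load profile, endpoint and
// 2-second 99th percentile threshold are hypothetical.
import static org.assertj.core.api.Assertions.assertThat;
import static us.abstracta.jmeter.javadsl.JmeterDsl.*;

import java.time.Duration;
import org.junit.jupiter.api.Test;
import us.abstracta.jmeter.javadsl.core.TestPlanStats;

public class WeeklyRegressionTest {

  @Test
  public void keyJourneyStaysWithinNfr() throws Exception {
    TestPlanStats stats = testPlan(
        threadGroup(20, 50, // a repeatable, representative load (assumed figures)
            httpSampler("https://test.octocar.example/api/orders")
                .header("X-Perf-Test", "weekly-regression")
        )
    ).run();

    // Fail the build if the 99th percentile breaches the agreed tolerance, so a
    // slow regression over time is flagged rather than quietly absorbed.
    assertThat(stats.overall().sampleTimePercentile99())
        .isLessThan(Duration.ofSeconds(2));
  }
}
```

Expressed this way, each weekly run both exercises the scripts and enforces the agreed tolerances, and its results feed straight into the trend analysis.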
Otto had not considered some of the benefits of a performance regression pack. He had been wondering how he would ensure that his performance tests remained current and working against the latest code base; of course, if you set them up to run as a regression pack, you are regularly executing them, which allows you to pick up failures and fix any script issues as they arise. Otto was relieved that this meant that, should the programme start up again for another large release and the full tests need to be re-run, the performance test scripts would not require any maintenance.
Otto was also glad that it meant he would continue to monitor the results and keep his trend analysis graph going. He also explained to the development team that, if he was unavailable and they wanted a performance test run outside of the regular execution cycle, they could simply trigger it themselves.
Conclusion
Otto, on reflection, was really pleased with how the performance testing had gone on the programme and had already started building all the best practices into a framework that could be used on all new programmes of work. He had begun knowledge sharing across the wider Quality Engineering team and was encouraged that they thought it was a good process.
Otto felt that the programme was in a good place and knew that the ongoing regression testing would help him keep an eye on the performance of the application without eating into his time as he started to consider the new programme of work. This is the end of the series of blog posts on the adventures of Otto as he navigates performance testing in an agile way.
As stated in the first post in this series, we were looking for a way to summarise many of the blog posts we have already published into something that shows their practical application in a real-world scenario, because when read in isolation it is not always obvious where they fit into a programme lifecycle.
So from Otto, for now, goodbye.