
Performance Tester Diary - Episode 3
This is Chapter 3 of our journey into non-functional testing with Otto Perf.
Otto was extremely busy in Chapter 2, where he built the strategy he would adopt for performance testing and considered the load profiles he would use to generate load against the application. He also made a start in Chapter 2 on how he would aggregate and report on the tests he was executing, and how he could compare results across test execution cycles. He found out that his performance testing would influence how the application infrastructure would be sized in the production environment.
Otto also started writing tests and discovered that approaching these in a modular fashion can lead to significant time savings when the application changes, which it does when performance testing early in the delivery lifecycle. In this chapter we will follow Otto as he discovers how he can execute his tests using his GIT repository and Jenkins, and how he can ensure that performance testing is integrated into the Continuous Integration / Continuous Delivery (CI/CD) framework that the development teams are working in.
Otto will also discover what push to prod pipelines are and how performance testing can be included in this approach to production deployment.
Chapter 3.
Integration with GIT and Jenkins
The application was, as already mentioned in Chapter 1, being developed using Agile development principles. Otto, whilst happy with the way he was developing tests, was less happy about having to execute them manually from his local machine.
The application was hosted with OctoCar’s cloud provider, and Otto felt that was where the performance tests should be executed from. Otto had read an article about scalability testing, where the application is stressed incrementally until either it fails or you reach your target future load volumes.
Otto made a mental note that this would be a good test to execute, but he had also picked up on the point that poor response times are sometimes caused not by the application being unable to handle the load, but by the machine from which you are running your tests not having enough resources.
Author's note
The article is in the OctoPerf Blog pages
Otto had never considered this before, but when he thought about it, it made sense. He considered whether he could remain confident running tests from his local machine as the number of tests and the levels of concurrency increased. Otto wanted to execute his tests from the cloud, and he wanted to build a process that would support regular testing without the need for manual intervention.
The volume of development had recently increased, and Otto’s time was fully occupied building tests to cover the new functionality and fixing existing tests broken by code changes. He wanted to execute all his tests once a day, preferably overnight, when the application would be stable and not subject to regular changes and rebuilds. This would allow him to evaluate the results in the morning and focus his efforts on the development of tests and scenarios.
But how to start?
It was then that he found an article that gave a set of clear instructions on how to do exactly what he was looking to do, and more. The article discussed the use of a GIT repository and Jenkins to support performance testing. OctoCar used Jenkins as their automation tool for building code and deploying applications through automated pipelines, and it had not occurred to Otto that the same process could be used for performance testing.
Being embedded in the development team was a real benefit to Otto, and starting to use the same technology the team used for code deployments for his testing was something he was excited about. He also had not considered storing his performance tests in a GIT repository; they were currently stored on his local machine. After reading this article, the benefits of storing his performance tests in GIT were obvious, not only from a versioning perspective but also because they could easily be pulled into his Jenkins pipelines from his performance test repository.
Author's note
The article is in the OctoPerf Blog pages
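As a rough illustration of the idea, a declarative Jenkins pipeline along these lines could check the tests out of GIT and run them with JMeter in non-GUI mode. This is a minimal sketch; the repository URL, file names and paths are hypothetical placeholders, not Otto's actual setup:

```groovy
// Minimal declarative Jenkinsfile: check JMeter tests out of GIT and run them.
// Repository URL, paths and file names below are illustrative only.
pipeline {
    agent any

    stages {
        stage('Checkout performance tests') {
            steps {
                // Pull the JMeter test plans from the performance test repository
                git url: 'https://git.example.com/octocar/performance-tests.git',
                    branch: 'main'
            }
        }
        stage('Run JMeter tests') {
            steps {
                // -n = non-GUI mode, -t = test plan, -l = results file
                sh 'jmeter -n -t tests/checkout_journey.jmx -l results/checkout_journey.jtl'
            }
        }
        stage('Archive results') {
            steps {
                // Keep the raw results with the build for later analysis
                archiveArtifacts artifacts: 'results/*.jtl', fingerprint: true
            }
        }
    }
}
```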
The article even provided guidance on how the pipeline could be scheduled to run at a certain time of day, on certain days of the week, using a CRON schedule. While Otto was moving all his performance tests to his GIT repository and building his Jenkins pipelines, he stumbled on another article that discussed abstracting the load profile values from the JMeter test into a separate file that could be easily configured.
Author's note
The article is in the OctoPerf Blog pages
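To give a flavour of both ideas together, here is a sketch combining a CRON trigger with an external load profile file; the schedule, property names and paths are assumptions for illustration. JMeter's -q option loads additional properties from a file, and the test plan reads them with the __P function, so load levels can be changed without editing the .jmx file itself:

```groovy
// Sketch: nightly scheduling plus load profile abstraction.
// CRON expression, property names and paths are examples only.
pipeline {
    agent any

    // Run at 2am, Monday to Friday, so results are ready each morning
    triggers {
        cron('0 2 * * 1-5')
    }

    stages {
        stage('Run nightly performance tests') {
            steps {
                // -q loads extra JMeter properties from a file, e.g.
                //   threads=50
                //   rampup=300
                //   duration=3600
                // The test plan reads them with ${__P(threads,10)} etc.,
                // so the load profile changes without touching the .jmx.
                sh '''
                    jmeter -n -t tests/checkout_journey.jmx \
                           -q profiles/nightly_load.properties \
                           -l results/nightly_$(date +%Y%m%d).jtl
                '''
            }
        }
    }
}
```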
This, thought Otto, is great. I have moved all my performance tests into GIT and scheduled their execution at a time that suits me, all managed by a Jenkins pipeline. He was using the same technology stack that the developers used to deploy code, so the approach would be embraced by the development team he was working in, and he could draw on the team for guidance on best practices.
He had also successfully moved all the load profile values to an external file, which he could update and commit to his GIT repository to change the level of load and concurrency easily.
Integrating with Continuous Integration and Continuous Delivery
Otto had to attend a meeting of the Scrum team where the subject was Continuous Integration and Continuous Delivery. Initially, he did not really understand what this meant. After the meeting, Otto understood that it meant the ability of the team to continually push changes from development to the test environments, through to the staging environment and subsequently into production.
This required rigour around all testing, whether unit, functional or performance, to ensure that testing was run regularly and that the process of pushing change to the end users would not be unduly impacted by a long-winded testing process. The programme was fully aware that testing was important, and there was no implication that it should be bypassed, but they wanted the testing to be as agile as the development.
Otto could not believe it; he was already doing this. His previous piece of work to make life easier for himself by integrating with GIT and Jenkins meant that he was already set up to support a Continuous Integration / Continuous Delivery approach to code promotion. This was great news, with one exception: as the number of tests grew, he was conscious that the time taken to analyse the results would increase and might make his process a bottleneck in the agile testing process.
He knew there must be a solution and set about investigating. He then stumbled on an article that discussed the principle of reporting only on failure, where failure meant a response time exceeding its non-functional requirement, a response code failure, or a failure against response time trend analysis.
Author's note
The article is in the OctoPerf Blog pages
This made sense to Otto: if he implemented some bespoke reporting that analysed the results produced by his performance tests run through the Jenkins pipeline, did the reporting for him, and subsequently reported only on failure, he would save a great deal of time. Otto had been so used to analysing results himself that he was concerned that if he acted on results generated by code, the analysis might be incorrect and he might allow poorly performing code to be deployed to higher environments and possibly even production.
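One possible shape for such bespoke report-on-failure logic, sketched as a Jenkins pipeline stage in Groovy, might look like the following. The NFR thresholds, file name, and the assumption of a CSV-format JTL results file with label, elapsed and success columns are all illustrative:

```groovy
// Sketch: analyse JMeter results and fail the build only when an NFR is
// breached or samples failed. Thresholds and file names are assumptions.
stage('Analyse results') {
    steps {
        script {
            // Non-functional requirements in milliseconds, per transaction
            def nfrs = ['Login': 1000, 'Search': 2000, 'Checkout': 3000]
            def failures = []

            def lines = readFile('results/checkout_journey.jtl').readLines()
            def header  = lines[0].split(',')
            def label   = header.findIndexOf { it == 'label' }
            def elapsed = header.findIndexOf { it == 'elapsed' }
            def success = header.findIndexOf { it == 'success' }

            // Collect response times per transaction, flag failed samples
            def times = [:].withDefault { [] }
            lines.drop(1).each { line ->
                def cols = line.split(',')
                times[cols[label]] << (cols[elapsed] as long)
                if (cols[success] != 'true') {
                    failures << "Sample failure in ${cols[label]}"
                }
            }

            // Compare the 90th percentile of each transaction to its NFR
            nfrs.each { name, limit ->
                def sorted = times[name].sort()
                if (sorted) {
                    def p90 = sorted[(int) Math.ceil(sorted.size() * 0.9) - 1]
                    if (p90 > limit) {
                        failures << "${name}: 90th percentile ${p90}ms exceeds NFR ${limit}ms"
                    }
                }
            }

            // Only report (and fail the pipeline) when something is wrong
            if (failures) {
                error("Performance NFRs not met:\n" + failures.join('\n'))
            }
        }
    }
}
```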
He brought this up with the development teams and was reassured that, provided the analysis and reporting solution he put in place was subject to a code review by the team and a sample test run to verify the results, this was a good approach to streamlining performance testing. Otto felt the integration of performance testing into the development teams was beneficial not only because of the early exposure to the code and the ability to test early, but also because it exposed him to practices that are regularly employed in development and sometimes missing from testing.
Adding JMeter tests to push to prod pipelines
Everything was going well: the performance tests were running daily, with response times compared to previous results and non-functional requirements using code. One morning at stand-up the team started to discuss push to production pipelines, and Otto was intrigued.
When he understood what was being discussed, he took some time to determine what the approach from a performance testing perspective should be. What the team were proposing was that some small code changes should be delivered straight from a code check-in, via a pull request or merge request in GIT, through to production with no human intervention.
This was a very interesting proposition; he had to ensure that a code release met its non-functional requirements without being able to check the outcome of his bespoke reporting himself. Further investigation was required, and as usual, Otto found a solution online.
The principle was that the build pipeline for the code deployment could, at a certain point, call another pipeline and only proceed if the called pipeline passed. Therefore, once the code had been deployed to the test environments, where he ran his performance tests daily and had a series of performance baselines and benchmarks, he could run any number of his performance tests and analyse the results in the pipeline before passing or failing the build.
With his current performance testing framework he was nearly there; all he needed to do was get the pipeline to report pass or fail based on his bespoke reporting solution, and he was ready to go. With the help of the team, this piece of the puzzle was implemented using existing Jenkins libraries that mark a pipeline as passed or failed, and the solution was complete.
Author's note
The article is in the OctoPerf Blog pages
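A sketch of how such a performance gate might look inside the push to production pipeline, using Jenkins' build step to call the performance test pipeline; the job name is a hypothetical placeholder:

```groovy
// Sketch: a stage in the push-to-prod pipeline that calls the performance
// test pipeline and only proceeds if it passes. Job name is illustrative.
stage('Performance gate') {
    steps {
        // 'build' triggers another Jenkins job; with propagate: true this
        // pipeline fails if the called pipeline fails. The performance
        // pipeline marks itself failed (e.g. via the error step) when the
        // bespoke reporting detects an NFR breach.
        build job: 'performance-tests-smoke',
              propagate: true,
              wait: true
    }
}
```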
Otto was relieved that only small code changes were deployed this way, but the solution was robust. The performance testing part of the push to production pipeline worked well, picking up issues where it should have and letting deployments through when it should have.
Conclusion
What Otto has learnt in Chapter 3 has really pushed the boundaries of the way he thinks about performance testing. He had already bought into the idea of performance testing early and regularly, but combining those principles with the integration of performance testing into a framework built on technology that is widely used to build code and deploy applications was something Otto was really excited about.
He felt that performance testing at OctoCar had advanced since the integration into the development teams, and that unless the approach to development changed, there should be no change to the current approach. We do believe that the framework we have discussed and referenced in Otto’s journey will help improve any programme when used correctly. We are conscious that not all organisations use the technology stack we are referring to, but the same principles apply to any technology; it’s just the implementation that will change.
Join us soon for Chapter 4 in the performance testing adventures of Otto Perf.