Performance Testing and ChatGPT
Performance testing applications requires a set of skills that are built up over many years of studying and using the various techniques and tools needed to make sure the application you are testing is fit for production. Now, we have all heard of Artificial Intelligence (AI) and the many tools and companies that now exist in the AI space.
Based on a quick look on the internet, there are around 15,000 AI startups in the United States alone. So surely, with all that technology at our disposal, we should be able to use these AI tools to define our performance tests, meaning that anyone could determine what performance testing should take place, regardless of experience and training.
Building, executing and analysing these tests still requires a competent performance tester, but the definition of what should be done could be handed over to Artificial Intelligence, right? Let's find out, shall we? We will use ChatGPT, as it is commonly available and probably the tool most people have heard of.
How does Artificial Intelligence, specifically ChatGPT, work¶
From this source:
ChatGPT works through its Generative Pre-trained Transformer, which uses specialized algorithms to find patterns within data sequences. ChatGPT originally used the GPT-3 large language model, a neural network machine learning model and the third generation of Generative Pre-trained Transformer. The transformer pulls from a significant amount of data to formulate a response.
ChatGPT now uses the GPT-3.5 model that includes a fine-tuning process for its algorithm. ChatGPT Plus uses GPT-4, which offers a faster response time and internet plugins. GPT-4 can also handle more complex tasks compared with previous models, such as describing photos, generating captions for images and creating more detailed responses up to 25,000 words.
ChatGPT uses deep learning, a subset of machine learning, to produce humanlike text through transformer neural networks. The transformer predicts text -- including the next word, sentence or paragraph -- based on its training data's typical sequence.
Training begins with generic data, then moves to more tailored data for a specific task. ChatGPT was trained with online text to learn the human language, and then it used transcripts to learn the basics of conversations.
Start at the beginning¶
Let's ask ChatGPT our first question.
I want to performance test an application what should I consider
This was the response:
Performance testing is crucial for ensuring that an application meets its performance requirements and can handle the expected load. Here are some key considerations and steps to perform effective performance testing:
Define Performance Goals: Clearly define the performance goals for your application. This could include response time, throughput, resource utilization, and scalability targets.
Identify Test Environment: Set up a test environment that closely mirrors the production environment in terms of hardware, software, and network configurations.
Choose Performance Testing Tools: Select appropriate performance testing tools based on your requirements. Popular tools include JMeter, LoadRunner, Gatling, Apache Benchmark, and others.
Create Realistic Scenarios: Develop realistic usage scenarios that simulate the expected behavior of real users. Consider different user profiles, transaction types, and usage patterns.
Determine Load Levels: Determine the expected load on the application. This could involve simulating the number of concurrent users, transactions per second, or any other relevant metric.
Execute Load Tests: Run tests with various levels of load to identify how the application performs under different conditions. This helps in determining bottlenecks and weaknesses in the system.
Monitor System Resources: Monitor key system resources such as CPU usage, memory utilization, disk I/O, and network bandwidth during the tests. This helps in identifying resource bottlenecks.
Analyze Results: Analyze the test results to identify performance issues. Look for trends, patterns, and anomalies in the data. Common performance metrics include response time, throughput, error rates, and resource utilization.
Isolate and Reproduce Issues: If performance issues are identified, isolate and reproduce them in a controlled environment to better understand the root causes.
Tune and Retest: Once issues are identified, work on optimizing the application, server configurations, or other components. Retest to ensure that the optimizations have a positive impact on performance.
Scalability Testing: Evaluate how well the application scales by gradually increasing the load until performance starts degrading. This helps in determining the application's ability to handle growing user loads.
Stress Testing: Conduct stress testing to determine the breaking point of the system. This involves testing beyond the expected load to identify how the system behaves under extreme conditions.
Security Testing: Consider incorporating security testing to ensure that performance optimizations do not compromise the security of the application.
Documentation: Document all test scenarios, methodologies, configurations, and results for future reference and to facilitate collaboration with development and operations teams.
Remember that performance testing is an iterative process, and ongoing monitoring and testing are essential to ensure continued optimal performance as the application evolves.
On the surface this seems a sensible answer and certainly provides a starting point for defining a suitable performance testing strategy for an application.
Let's consider each point in turn, see whether we agree with it, and ask whether anything has been missed.
Define Performance Goals¶
This talks about response times and throughput, which are critical when determining what you are trying to achieve in terms of performance testing. Understanding what load profile you are trying to achieve forms the basis for all your testing and ensures that you don't place too high a load on your system, which can lead to uncovering performance issues that would never exist in production. That can cause time to be spent trying to solve issues that will never materialise.
ChatGPT goes on to state that resource utilisation and scalability should also be considered, which is equally important. You should perhaps also include response times here, as well as making sure your goals are testable.
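One way to make such goals testable is to write them down as explicit thresholds that your results can be checked against automatically. Here is a minimal sketch in Python; the metric names and numbers are illustrative assumptions, not taken from any real requirement document:

```python
# Illustrative performance goals; the metrics and numbers here are
# assumptions for the sake of the example, not real requirements.
PERFORMANCE_GOALS = {
    "p95_response_ms": 2000,  # 95th percentile response time
    "error_rate_pct": 1.0,    # maximum acceptable error rate
    "throughput_tps": 50,     # minimum sustained transactions/second
}

def goals_met(measured: dict) -> bool:
    """Return True only if every measured value meets its goal."""
    return (
        measured["p95_response_ms"] <= PERFORMANCE_GOALS["p95_response_ms"]
        and measured["error_rate_pct"] <= PERFORMANCE_GOALS["error_rate_pct"]
        and measured["throughput_tps"] >= PERFORMANCE_GOALS["throughput_tps"]
    )
```

Expressed this way, "were the goals met?" becomes a yes/no check that can run after every test execution rather than a judgement call.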
Identify Test Environment¶
This is a very important point: in order to accurately ensure your response times under load are met, you need to test on an indicative environment.
We could also have considered data volumes in the database, as these can affect performance if the SQL has missing indexes or is not efficient.
Choose Performance Testing Tools¶
ChatGPT has picked the most popular tools available. Depending on your organisation, you should consider open-source tools like JMeter in the first instance, as licensing costs for commercial tools can be expensive.
Create Realistic Scenarios¶
This is a really important point and one that can be overlooked.
Determining your scenarios and their respective load profiles and patterns allows you to ensure that you stress your application under test in different ways. It also has the added advantage of letting you leverage the experience of business users who understand the business processes in fine detail. They can not only assist you in defining these scenarios but also give you guidance on the volumes and levels of concurrency.
This is one of the most important parts of defining performance testing.
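A workload model agreed with business users often ends up as a weighted mix of user journeys. A minimal sketch of such a mix, where the journey names and percentages are purely illustrative assumptions:

```python
import random

# Hypothetical journey mix agreed with business users; the names and
# percentages are illustrative assumptions only.
SCENARIO_MIX = {
    "browse_catalogue": 60,  # % of virtual users
    "search": 25,
    "checkout": 15,
}

def pick_scenario(rng: random.Random) -> str:
    """Pick a journey for the next virtual user according to the mix."""
    names = list(SCENARIO_MIX)
    weights = list(SCENARIO_MIX.values())
    return rng.choices(names, weights=weights, k=1)[0]
```

Keeping the mix in one place makes it easy to review with the business and to adjust when the workload model changes.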
Determine Load Levels¶
This has picked up on the fact that getting your load correct is very important; it is closely associated with the first section on setting goals and the section above on realistic scenarios. The importance of getting your load profiles correct cannot be overstated.
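One well-known way to turn a throughput target into a concurrency figure is Little's Law: concurrent users = throughput × (response time + think time). A small sketch, with illustrative numbers:

```python
import math

def required_concurrency(tps: float, response_s: float, think_s: float) -> int:
    """Little's Law: concurrent users N = X * (R + Z), where X is the
    target throughput, R the response time and Z the user think time."""
    return math.ceil(tps * (response_s + think_s))

# e.g. 10 transactions/second with 2 s responses and 8 s of think time
# needs roughly 100 concurrent virtual users
```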
Execute Load Tests¶
There is not a great deal of detail in this section. The later sections on Scalability and Stress Testing would really have been better included here.
You need to consider Peak Hour Load and Soak Tests, amongst others, where you define how your test volumes and concurrency change to satisfy the variety of load tests you are going to execute.
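These test types largely differ only in their load profile: how many virtual users are active at each point in time. A minimal sketch of a ramp-and-hold profile, the shape behind both a peak hour test (short hold) and a soak test (a hold of many hours); the numbers are illustrative:

```python
def ramp_profile(target_users: int, ramp_minutes: int, hold_minutes: int):
    """Yield (minute, active_users) pairs: a linear ramp up to the
    target load followed by a steady-state hold."""
    for minute in range(ramp_minutes):
        yield minute, round(target_users * (minute + 1) / ramp_minutes)
    for minute in range(ramp_minutes, ramp_minutes + hold_minutes):
        yield minute, target_users
```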
Monitor System Resources¶
This information needs to complement your response times, as system utilisation under load needs to be understood.
This is linked to point 2, "Identify Test Environment", as this information is only relevant if you are testing on an environment scaled like production.
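In practice, what you are looking for in those resource metrics is sustained saturation rather than momentary spikes. A small sketch of that check; the threshold and window are illustrative assumptions:

```python
def sustained_above(samples, threshold_pct=85.0, min_consecutive=3):
    """Return True if a resource metric (CPU %, memory %, ...) stays
    at or above the threshold for several consecutive samples;
    a single momentary spike is not flagged."""
    run = 0
    for value in samples:
        run = run + 1 if value >= threshold_pct else 0
        if run >= min_consecutive:
            return True
    return False
```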
Analyze Results¶
This discusses trends and anomalies, which is a good start; it is also important to consider percentiles as well as averages in your response times, as you are going to be dealing with large quantities of data.
You should also consider transaction rates and concurrency to complement your results, as these give you a way to tie your results back to your requirements.
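As a sketch of why percentiles matter alongside averages, here is a small summary function over raw response-time samples, using the simple nearest-rank definition of the 95th percentile:

```python
import math

def summarise(response_ms):
    """Average, nearest-rank 95th percentile and maximum of a set of
    response-time samples (in milliseconds)."""
    ordered = sorted(response_ms)
    rank = math.ceil(0.95 * len(ordered)) - 1  # nearest-rank p95
    return {
        "avg_ms": sum(ordered) / len(ordered),
        "p95_ms": ordered[rank],
        "max_ms": ordered[-1],
    }
```

On a sample where most responses are fast but a few are very slow, the average can look healthy while the 95th percentile tells the real story.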
Isolate and Reproduce Issues¶
This is probably the one point I would disagree with: you should have enough metrics and tooling in your test environments to determine the root cause of performance issues.
In addition, many performance issues are only reproducible under load and can therefore only be analysed as part of a performance test, which needs to take place in a performance environment.
Tune and Retest¶
Making your tests easy to execute and self-contained allows you to re-execute time after time.
Make this one of your design goals: if you need, for example, complex data setup or database truncation between test executions, then you are introducing delays between fixes that your programme may not be able to absorb. It is highly likely that you will be required to execute tests on a regular basis, especially if working in an Agile programme, so considering how you will do this up front is a good exercise.
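One way to keep a test self-contained is to guarantee that its data setup and clean-up always run as a pair, whatever happens in between. A minimal sketch using a Python context manager:

```python
import contextlib

@contextlib.contextmanager
def self_contained_run(setup, teardown):
    """Run setup before the test body and teardown after it, even when
    the body raises, so the test can be re-executed immediately."""
    setup()
    try:
        yield
    finally:
        teardown()
```

A test wrapped this way leaves the environment ready for the next execution even when it fails part-way through.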
Scalability Testing¶
As stated in the 'Execute Load Tests' section above, this should be a part of that.
A scalability test is just one of the tests you will need to run and should not be considered a separate entity from the other tests.
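The stepped-load approach ChatGPT describes, gradually increasing load until performance degrades, can be sketched as below; `measure_p95_ms` stands in for an actual test execution at a given load and is purely illustrative:

```python
def find_saturation_point(measure_p95_ms, step_users=50, max_users=1000,
                          limit_ms=2000):
    """Step the load up until the measured p95 response time breaches
    the limit; return the highest load level that still met it."""
    last_good = 0
    for users in range(step_users, max_users + 1, step_users):
        if measure_p95_ms(users) > limit_ms:
            break
        last_good = users
    return last_good
```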
Stress Testing¶
This is the same as Scalability Testing above: it should sit in the 'Execute Load Tests' section, as again this is another test that you should be running as part of your performance testing.
Security Testing¶
Whilst it is great that this has been considered, I do feel that Security Testing is a discipline of its own and a very specialised activity.
This should be a separate piece of work that sits outside of performance testing, as not only does it require skills not normally found in a performance tester but, in the case of Penetration Testing for example, it requires the use of outside organisations.
Documentation¶
This provides a sensible set of uses for documentation; however, for the most part I do not think that documentation forms the basis of collaboration with development and operations teams, especially in an Agile environment. In a waterfall model this may be a more relevant statement.
More often than not, development teams would rather see the tests executing and monitor for performance issues in real time, or as part of Test Driven Development.
Conclusion¶
Clearly, Artificial Intelligence will play a key role in the future of technology, and Quality Assurance is no exception. It's all about how we leverage it and use it in performance testing.
I think it's fair to say that ChatGPT does a pretty good job of helping anyone define what a performance test should look like and the things to consider. We asked a very simple question and got a very sensible and well-rounded answer; we could of course have expanded on each section and asked ChatGPT for more information to get a much more detailed response.
At present you will clearly still need a performance test resource to build and execute tests, to determine whether your requirements have been met, and to understand the implications of load on your application. However, Artificial Intelligence can help you check whether there is anything obvious you may have missed in your definition of performance testing, and it will provide you with a reasonable starting point for your performance testing activity.