You have built your performance tests and executed them under load, only to find that response times do not meet your requirements. Or perhaps you are unable to run your tests with the number of concurrent users required.
In this post we will give some insights into where you might want to start looking for the root cause of your performance issues.
These insights are high-level; which ones are useful will depend on your application's architecture, the language it is written in, and the database technology it uses.
Performance testing applications requires a set of skills that is built up over many years of studying and using the various techniques and tools needed to make sure the application you are testing is fit for production.
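Before hunting for a root cause, it helps to quantify how far your response times miss their target. Here is a minimal sketch of checking a set of response-time samples against a percentile requirement; the sample data and the 2.0 s p95 threshold are illustrative, not from any real test.

```python
# Sketch: checking response-time samples against a requirement.
# The sample times and the 2.0 s p95 threshold are hypothetical.
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ranked = sorted(samples)
    k = math.ceil(pct / 100 * len(ranked)) - 1
    return ranked[k]

response_times = [0.4, 0.6, 0.7, 1.1, 1.3, 1.8, 2.4, 3.0, 0.5, 0.9]
p95 = percentile(response_times, 95)
print(f"p95 = {p95:.2f}s", "PASS" if p95 <= 2.0 else "FAIL")
# With these sample values, p95 = 3.00s, so the check fails.
```

The same check can be dropped into a results-processing script so a test run fails loudly when a percentile target is breached, rather than relying on someone reading a report.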
Now we have all heard of Artificial Intelligence (AI) and the many tools and companies that now exist in the AI space.
A quick internet search suggests there are around 15,000 AI startups in the United States alone.
So surely, with all that technology at our disposal, we should be able to use these AI tools to define our performance tests, meaning that anyone could determine what performance testing should take place, regardless of experience and training.
Building, executing, and analysing these tests still requires a competent performance tester, but the definition of what should be done could be handed over to Artificial Intelligence, right?
Let's find out, shall we? We will use ChatGPT, as it is commonly available and probably the tool most people have heard of.
Agile development teams generally follow the principles of Scrum, where individual teams work together to manage their
workload through a set of values, principles, and practices.
From a development perspective, this gives a team, comprising a Product Owner, Scrum Master, and Development Team, the
autonomy to work and deliver in an environment that suits its needs and helps it drive change for the organisation
in a way that maximises efficiency.
This blog post is not an overview of the Scrum methodology, but it does require some understanding of the processes that
take place, and these will be discussed throughout this post.
What we are trying to do is understand how, in an Agile delivery framework, we can make sure that performance testing is
not lost or overlooked. Scrum teams work in short sprints, which means that your performance testing must, like the unit
tests built by the development teams, be lightweight and, well... agile, for want of a better word.
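To make "lightweight" concrete: a sprint-friendly performance check can be small enough to run alongside unit tests. The sketch below uses only the Python standard library and, purely for the sake of a self-contained example, stands up a local HTTP server to play the role of the application under test; the user count, request count, and latency threshold are all illustrative.

```python
# Sketch: a lightweight, sprint-friendly load check using only the
# standard library. The local server stands in for the application
# under test; user count, request count, and threshold are illustrative.
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def timed_request(_):
    start = time.perf_counter()
    with urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# 5 "virtual users" issuing 20 requests between them.
with ThreadPoolExecutor(max_workers=5) as pool:
    latencies = list(pool.map(timed_request, range(20)))

server.shutdown()
median = sorted(latencies)[len(latencies) // 2]
print(f"median latency: {median * 1000:.1f} ms")
assert median < 1.0, "lightweight check failed: median latency too high"
```

This is not a replacement for a full load test; the point is that a check shaped like this is cheap enough to live in a sprint's definition of done, so performance regressions surface early rather than at the end of a release.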
Everywhere you look on social media it's DevOps, Agile methodologies, Continuous Integration, and Continuous Delivery. You could be forgiven for believing that most organisations and programmes follow these principles.
This is not true.
Many companies use a Waterfall model, also known as a linear-sequential life cycle model. The Waterfall model is the earliest SDLC approach used for software development, and it illustrates the software development process as a linear sequential flow: each phase must be completed before the next phase can begin, and the phases do not overlap.
It is difficult to put a percentage on the number of organisations that follow this model, but it is high: probably more than half of software programmes take this approach. Many companies prefer it, and many still need to follow it.
This is due to the way that stakeholders manage the development and release of features, and there are many organisations that, for regulatory or compliance reasons, must develop and release software this way.
Many of the posts we publish focus on how performance testing fits into Continuous Integration and Continuous Delivery. We know the Waterfall model is not going to disappear any time soon, so it is time to look at how you could build performance testing for a Waterfall model. It is not correct to say that Waterfall is the way software was developed, or that Continuous Integration is the way software should be developed; it comes down to the individual organisation and the client the software is being developed for.
In this blog post we are going to look at some of the uncommon performance tests. By this we mean scenarios that, in our experience, are not commonly executed but are run periodically at best.
These uncommon scenarios should not necessarily take priority over the more common performance scenarios, but they do add value by stressing parts of your application under test that the more conventional tests may miss.
We will discuss each scenario in turn, looking at its benefits and at some of the difficulties you may experience in designing it, and we will give examples of when each scenario would be useful.