
Performance Testing in a Scrum Framework

Agile development teams generally follow the principles of Scrum, in which individual teams work together to manage their workload through a shared set of values, principles, and practices.

From a development perspective this gives a team comprising a Product Owner, a Scrum Master, and a Development Team the autonomy to work and deliver in an environment that suits their needs, and to deliver change for the organisation in a way that maximises efficiency.

This blog post is not an overview of the Scrum methodology, but it does require some understanding of the processes involved, and these will be discussed throughout this post. What we are trying to do is understand how, in an Agile delivery framework, we can make sure that performance testing is not lost or overlooked. Scrum teams work in short sprints, which means that your performance testing must, like the unit tests built by the development teams, be lightweight and, well... agile, for want of a better word.
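
To make that concrete, here is a minimal sketch of what a sprint-friendly performance check might look like, written in plain Python so it can run in the same pipeline as the unit tests. The URL, sample size, and 500 ms budget are illustrative assumptions, not recommendations.

    import statistics
    import sys
    import time
    import urllib.request

    # Hypothetical endpoint and budget: agree real values with your team.
    URL = "https://staging.example.com/health"
    BUDGET_SECONDS = 0.5   # illustrative response-time budget
    SAMPLES = 10           # small sample, so the check stays sprint-friendly

    def timed_request(url):
        """Return the elapsed wall-clock time for a single GET request."""
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        return time.perf_counter() - start

    timings = [timed_request(URL) for _ in range(SAMPLES)]
    median = statistics.median(timings)
    print(f"median response time: {median:.3f}s over {SAMPLES} requests")

    # Fail the build if the median exceeds the agreed budget,
    # exactly as a failing unit test would.
    sys.exit(0 if median <= BUDGET_SECONDS else 1)

Because the script exits non-zero when the budget is breached, a CI pipeline can treat it like any other failing test.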

Performance Testing in a Waterfall Model

Everywhere you look on social media it's DevOps, Agile methodologies, Continuous Integration, and Continuous Delivery. You could be forgiven for believing that most organisations and programmes follow these principles.

This is not true.

Many companies use a Waterfall model, also known as a linear-sequential life cycle model. The Waterfall model is the earliest SDLC approach used for software development: it illustrates the development process as a linear, sequential flow in which each phase begins only when the previous phase is complete, and no phases overlap.

It is difficult to determine what percentage of organisations follow this model, but it's high: probably more than half of all software programmes take this approach. Many companies prefer it, and many companies still need to follow it.

This is due to the way that stakeholders manage the development and release of features. Many organisations need to develop and release software this way for regulatory or compliance reasons.

Many of the posts we publish focus on ways that performance testing fits into Continuous Integration and Continuous Delivery. We know that the Waterfall model is not going to disappear any time soon, so it's time to look at how you could build performance testing into a Waterfall model. It is not correct to say that Waterfall is the way software used to be developed, or that Continuous Integration is the way software should be developed; it comes down to the individual organisation and the client the software is being developed for.

Uncommon Performance Testing

In this blog post we are going to look at some of the uncommon performance tests. By this we mean scenarios that are not, in our experience, commonly executed, but are run periodically at best.

These uncommon scenarios should not necessarily take priority over the more common performance scenarios, but they do add value by stressing parts of your application under test that the more conventional tests may miss.

We will discuss each scenario in turn and look at the benefits and some of the difficulties you may experience in designing these scenarios. We will also take time to give examples of when these scenarios would be useful.

Performance Response Times

When performance testing you need a set of requirements to measure your response times against, and you should define these together with your end users or business teams.

It is relatively easy to predict the volumes, load, and number of users your application will face, as you will no doubt have data from your current systems. It is much harder to agree on what your application's response times should be. Without this critical measure we cannot really say for certain that an application or service performs well, because we are not measuring the response time under load against a value agreed up front.

We need to be able to answer these questions:

  • At what point does a response time become unacceptable?

  • How can response times be defined in the requirements gathering stage?

  • How can we ensure there is a measure to test against?

In this post we are going to look at how we might categorise response times based on the end user's interaction model, and how to use these categories to build response time requirements as part of your wider non-functional requirements.
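
As an illustration of how such categories might be turned into something testable, here is a minimal Python sketch. The category names, budgets, percentile choice, and sample timings are all illustrative assumptions to be replaced by figures agreed with your business teams.

    import statistics

    # Illustrative interaction-model categories and budgets: these are
    # assumptions to agree with the business, not recommendations.
    BUDGETS_SECONDS = {
        "instant": 0.1,      # e.g. UI feedback the user perceives as immediate
        "interactive": 2.0,  # e.g. a page load the user actively waits for
        "deferred": 30.0,    # e.g. a report the user collects later
    }

    def meets_requirement(category, samples, pct=90):
        """Check the pct-th percentile of measured timings against the budget."""
        # statistics.quantiles(n=100) returns the 1st..99th percentile cut points.
        observed = statistics.quantiles(samples, n=100)[pct - 1]
        budget = BUDGETS_SECONDS[category]
        print(f"{category}: p{pct} = {observed:.3f}s (budget {budget}s)")
        return observed <= budget

    # Example timings, in seconds, as might be collected from a load test run.
    page_loads = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7]
    print("PASS" if meets_requirement("interactive", page_loads) else "FAIL")

Comparing a percentile rather than an average stops a handful of fast responses from hiding the slow experiences your users actually notice.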

Performance Testing in Production

In this blog post we are going to discuss performance testing in production.

Now, before you think we have gone mad and lost our minds completely, this is not as crazy as it sounds.

Production is an environment that:

  • Is sized accurately,
  • Has a suitable diversity of data,
  • Has the correct data volumes,
  • Is on the correct infrastructure.

These are all things you spend a significant amount of time trying to get right in your performance testing environment, and they can be difficult to achieve.
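
One practical detail worth sketching: if you do load production, you will want to distinguish your synthetic traffic from real users. A minimal approach, assuming your monitoring can filter on a custom header, is to tag every test request; the header name, run identifier, and endpoint below are purely illustrative.

    import urllib.request

    # Minimal sketch of tagging synthetic load so production monitoring can
    # tell it apart from real users. The header name and run identifier are
    # hypothetical; agree the convention with whoever owns your dashboards.
    TEST_RUN_ID = "perf-run-001"

    def tagged_request(url):
        """Send a GET request carrying a marker header and return the status."""
        request = urllib.request.Request(
            url,
            headers={"X-Synthetic-Load": TEST_RUN_ID},
        )
        with urllib.request.urlopen(request) as response:
            return response.status

    if __name__ == "__main__":
        status = tagged_request("https://www.example.com/")
        print(f"tagged request returned HTTP {status}")

With a marker like this in place, the test traffic can be excluded from business metrics and, if necessary, shed first when the system comes under real strain.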