Load testing without think times?

There are a few key parameters you must control in order to run relevant tests. I believe think time is one of them, and it is so often overlooked that I would like to take some time to highlight what it stands for. Any quick search on your favorite search engine will tell you that think times in load testing are meant to reproduce human interaction time. Since a load testing script is usually composed of a list of requests, it's easy to see how replaying them with think times is one more step toward realistic behavior. But that doesn't mean it's easy to understand how to use and configure them.


Where

It is also important to consider where to put think times. You will find different approaches here:

On every request

Using think times at request level can be a problem since human interaction does not occur on every HTTP request, in particular if you consider the resources embedded in a page. There might be dozens of resources associated with a single user action, and their number might change from one test to another depending on the application. That means your think times are not consistent from one run to another, which should be avoided at all costs.

At the end of the script

Using think time at the end of the script might seem like a good solution, but it concentrates all the load at the beginning of every iteration. We will see later that it also fails to serve its purpose in terms of concurrent sessions, because the session is not maintained for a realistic duration.

At page/transaction level

So instead, I would recommend using think times at page/transaction level. That way you can have a think time corresponding to the interaction with each page. For instance, it will usually be longer on a login or registration page, whereas browsing through the application is done faster.
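
To make this concrete, here is a minimal sketch in plain Python (using the requests library, with hypothetical URLs and timings) where the think time sits between pages rather than between individual HTTP requests:

    import time
    import requests

    BASE = "https://example.com"  # hypothetical application under test

    session = requests.Session()

    session.get(f"{BASE}/login")                 # login page (and its resources)
    time.sleep(15)                               # longer think time: user types credentials
    session.post(f"{BASE}/login", data={"user": "demo", "password": "demo"})

    session.get(f"{BASE}/catalog")               # browsing pages go faster
    time.sleep(5)
    session.get(f"{BASE}/catalog/item/42")
    time.sleep(5)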

Think times to change the load

More often than not, think times are used as a quick way to generate more load. And it makes sense: changing just one parameter allows you to quickly generate more load against your server, and it can lower your license/infrastructure costs, so why not?

Well, let's take the following example of someone who wants to run 2 tests:

Case 1: 10000 users with 30 sec of think time

Case 2: 2000 users with 6 sec of think time

At first glance, both options look similar, but to be sure we need to include the response times in the equation.

The formula would look like this if we assume 1 page = 1 request:

Requests/second = Users / (response time + think time)

Of course, in reality the number of resources on every page can vary and adds a layer of complexity on top of this, but since we assume the same script runs in both situations, it won't make a difference here.

With an average response time of 3 sec on every request, we have:

Case 1: 10000 / 33 = 303 hits/s

Case 2: 2000 / 9 = 222 hits/s

Once we factor the response time into the equation, we can see that the load generated is going to be quite different.
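
The arithmetic is simple enough to check with a few lines of Python; a quick sketch of the formula above, nothing more:

    def hits_per_second(users, response_time, think_time):
        # Throughput estimate, assuming 1 page = 1 request
        return users / (response_time + think_time)

    print(hits_per_second(10000, 3, 30))  # Case 1: ~303 hits/s
    print(hits_per_second(2000, 3, 6))    # Case 2: ~222 hits/s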

Connection pools

From the above example, if you do the math, you could think that 3.6 seconds of think time would allow you to reach the same number of hits/s:

2000 / (3.6 + 3) = 303 hits/s
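The 3.6 s figure simply comes from rearranging the throughput formula to solve for the think time; a quick sketch:

    def equivalent_think_time(users, target_hits_per_s, response_time):
        # Think time needed for `users` to produce the target throughput
        # (same 1 page = 1 request assumption as above)
        return users / target_hits_per_s - response_time

    print(equivalent_think_time(2000, 303, 3))  # ~3.6 s
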

This suggests that the 10000-user test can safely be replaced by the 2000-user one with less think time. The problem is that this only holds when testing static traffic or when no session is maintained.

In reality, the 10000-user test is more likely to help find bottlenecks. To understand why, we must have a look at pools and queues on the server side.

Case 1 will generate 10000 concurrent connections on the web servers, whereas Case 2 will only generate 2000. The same proportions apply to the application server and database connection pools. Connection pools usually have a maximum size, and even if you do not hit that limit, every session created consumes memory on the server.

So even though the hits/s remain the same, Case 1 is more likely to hit a limit. If the number of real users on the application can go up to 10000, it should be mandatory to run Case 1 at least once during the test campaign.
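
As an illustration of such a limit, database pools are typically capped in the application configuration. A minimal sketch with SQLAlchemy (the sizes and connection URL are hypothetical, not a recommendation):

    from sqlalchemy import create_engine

    engine = create_engine(
        "postgresql://app:secret@db-host/appdb",  # placeholder connection URL
        pool_size=50,      # steady-state maximum number of connections
        max_overflow=10,   # extra connections tolerated during bursts
        pool_timeout=30,   # seconds to wait for a free connection before failing
    )
    # 10000 concurrent sessions competing for at most 60 connections will queue
    # (or time out); 2000 sessions at the same hits/s may never expose that limit.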

How much think time

Now that we've seen why think times are so important, we still have no idea which value to use. Here again you will find several schools of thought. Some testers prefer using a constant think time on every page, in which case the think time is easy to compute:

Think time per transaction = (Total expected time - Total response time) / Number of transactions

You just need an estimate of the total expected time, which corresponds to how long a real user takes to do the same as your script. With this formula, think times literally fill in the blanks and make sure the activity is spread evenly over time. This method is quite easy to set up, and that makes it very efficient. But it is not as realistic as using an appropriate time for every transaction: a login page and a logout link will obviously not require the same interaction time, and your script should respect that.
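
For example, with made-up numbers for a 5-transaction script that a real user completes in about 60 seconds:

    # Hypothetical figures for a 5-transaction script
    total_expected_time = 60.0                    # seconds a real user spends end to end
    response_times = [3.0, 2.5, 4.0, 3.5, 2.0]    # measured response time per transaction

    think_time = (total_expected_time - sum(response_times)) / len(response_times)
    print(f"Constant think time per transaction: {think_time:.1f} s")  # 9.0 s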

This is why it is even better to use the application logs to understand how much time real users spend on every page. Although it can seem quite time-consuming, it will make your test results more accurate and is often worth the investment.

Also, whatever value you come up with, it is essential to introduce some randomness into your think times. Otherwise the test might be too "robotic": add a slow ramp-up of users on top of that, and you might end up with transactions so smoothly distributed that the server can handle them easily. But reality is chaotic; there are many factors you can't reproduce during a test that come into play. I would recommend applying ±10% on top of the think times you had in mind to keep things chaotic.
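
A simple way to do that is to draw each pause from a uniform range around the nominal value; a minimal sketch (the helper name is ours, not from any particular tool):

    import random
    import time

    def think(base_seconds, jitter=0.10):
        # Sleep for the nominal think time, randomized by +/- 10%
        time.sleep(base_seconds * random.uniform(1 - jitter, 1 + jitter))

    think(8.0)  # an 8 s think time actually pauses between 7.2 s and 8.8 s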

Conclusion

I hope you found this article useful and that the next time you're tempted to overlook think times, you will think twice about it. Keep in mind that the realism of your tests and the ability to get consistent results across several runs will have a large impact on the quality of your testing.
