Welcome to OctoPerf's blog, where we embark on a journey through the dynamic landscape of load testing, a crucial aspect of ensuring your applications thrive in the face of user demands.
Whether you're a seasoned performance engineer or just diving into the world of load testing, our blog is your go-to resource for insights, best practices, and cutting-edge tools.
Developers aiming to optimize their code, QA engineers ensuring application resilience, business owners safeguarding user satisfaction: load testing is an ally to all of them. Join us on this expedition into the heart of performance excellence.
Are you ready to transform your understanding of load testing? Let the journey begin!
In this post we are going to look at how you can spot trends in your performance test results and use this trend analysis to help shape the way you address performance testing.
Performance testing can generate a large volume of test data, and using this data to define your future performance testing coverage, scope and volumes can add real value to your non-functional testing process.
To use data to support performance testing, you need to store it in a way that makes it accessible and comparable. The ideal solution, and the one we will discuss in this post, is to store this data in a database.
This post will look at the benefits of storing large amounts of performance data, especially data that spans a significant period of time, and at ways of using this data to define performance test coverage and identify areas of performance concern.
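As a simple illustration of the idea, here is a minimal sketch of such a results store using Python and SQLite; the table layout and column names are assumptions for illustration, not a prescribed schema:

```python
import sqlite3

# Connect to (or create) a local results database.
conn = sqlite3.connect("perf_results.db")

# One row per transaction per test run; the columns are illustrative only.
conn.execute("""
    CREATE TABLE IF NOT EXISTS test_results (
        test_run          TEXT,  -- e.g. a build number or release tag
        executed_at       TEXT,  -- ISO timestamp of the run
        transaction_name  TEXT,
        avg_response_ms   REAL,
        pct95_response_ms REAL,
        error_rate        REAL
    )
""")

# Store one result from a run (values would come from your tool's output).
conn.execute(
    "INSERT INTO test_results VALUES (?, ?, ?, ?, ?, ?)",
    ("build-1042", "2024-01-15T02:00:00", "login", 210.0, 480.0, 0.002),
)
conn.commit()

# Trend query: how has the 95th percentile of 'login' moved over time?
for row in conn.execute(
    "SELECT executed_at, pct95_response_ms FROM test_results "
    "WHERE transaction_name = ? ORDER BY executed_at",
    ("login",),
):
    print(row)
```

Once runs accumulate in a store like this, spotting a transaction whose percentiles creep upward release after release becomes a one-line query rather than a manual comparison of reports.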
Version 15 of OctoPerf brings a huge focus on continuous integration and DevOps practices. We've taken steps to highlight existing integrations, added many more, and thrown an entirely new report type on top of it.
It's always been possible to automate OctoPerf tests using our existing plugins or simply through our API. That said, the setup of those integrations comes with a learning curve. To make this curve as flat as possible, we've added a new wizard and the possibility to get a PDF report automatically at the end of the test.
If you'd like to carry out load tests in a simple way, benefit from a simplified configuration focused on writing your test plan and its test typology, monitor through detailed dashboards, store your metrics, and also mock one or several services: you are in the right place!
With this kind of configuration you will be able to practice shift-left performance testing as well!
docker-compose is a small library that allows you to run docker-compose. This is useful for bootstrapping test environments.
Docker Compose offers a multitude of benefits, which I'll detail below (with a short example after the list):
Simplified configuration: Docker Compose lets you define and manage all the services of a multi-container application in a single YAML file. This makes it easy to configure, start and stop all the containers in an application.
Automated deployment: With a single configuration file, you can automate the deployment of all the services required for your application, reducing manual errors and improving consistency between development, test and production environments.
Managing dependencies: Compose makes it easy to manage dependencies between services. You can define the startup order of containers and the links between them, ensuring that all services start up in the right order and are properly connected.
Portability: Once you've defined your Compose file, you can easily share it and run it on different machines. This ensures that developers and operational teams work in identical environments, reducing compatibility problems.
Service isolation: Docker Compose creates isolated networks for containers, ensuring that each service operates in a partitioned environment. This improves security and enables services to be tested without mutual interference.
Scalability: Compose makes it easy to scale services. You can quickly adjust the number of containers for a particular service by simply modifying the configuration file and redeploying.
Local development and easy testing: Developers can use Docker Compose to create local development environments that faithfully reproduce production environments. This enables problems to be detected and resolved early in the development cycle.
CI/CD integration: Docker Compose integrates well with continuous integration and deployment (CI/CD) pipelines. You can use Compose files to orchestrate automated tests and deployments in your CI/CD workflows.
Simplified maintenance: With Docker Compose, updating configurations and services becomes simpler. You can update container images or modify configurations by editing the Compose file and redeploying the services.
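To make these points concrete, here is a minimal, hypothetical docker-compose.yml for a web application and its database; the image names, ports and environment values are assumptions for illustration:

```yaml
# Hypothetical two-service stack: a web app and the database it depends on.
services:
  db:
    image: postgres:16          # assumed image; use whatever your app needs
    environment:
      POSTGRES_PASSWORD: example
  web:
    image: my-app:latest        # placeholder for your application image
    ports:
      - "8080:8080"
    depends_on:
      - db                      # dependency management: db starts first
```

With this file in place, `docker compose up -d` brings up both containers on an isolated network with the database starting before the application, and `docker compose down` tears the whole environment back down.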
In this blog post we are going to look at using JMeter to support business testing in production.
This is a slightly different topic to the one discussed in this post on testing in production.
The post in the link above covers running your performance testing in production, for reasons discussed there.
This post is going to focus on how you can leverage your performance testing tools to support this activity; as mentioned above, we will use JMeter in some of the examples.
But any load testing tool, or even functional testing tool, can provide the same benefits.
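As a minimal sketch of the idea, assuming an existing JMeter test plan called business-checks.jmx that exercises a key business transaction, a low-volume run could be scheduled against production like this (the plan name and the `users` property are assumptions for illustration):

```bash
# Run JMeter headless (-n) with the given plan (-t), writing results (-l).
# A single thread keeps this a business check rather than a load test;
# -Jusers sets a 'users' property the plan is assumed to read.
mkdir -p results
jmeter -n -t business-checks.jmx \
       -l "results/business-checks-$(date +%Y%m%d%H%M).jtl" \
       -Jusers=1
```

Scheduled from cron or a CI job, the resulting .jtl files give you a timestamped record that key business transactions keep working in production.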
In this post we are going to look at the importance of volume testing. We are going to consider how this type of non-functional test can have a significant impact on the way you size and scale your production environment, based on the evidence the test provides.
We are going to look at some real-world examples of how getting your volume testing right ensures that your environments are neither oversized nor undersized. Both scenarios carry a cost that is financial and potentially reputational.
Volume testing belongs to the group of non-functional tests, a group of tests often misunderstood and/or used interchangeably. Volume testing means testing a software application with a significant amount of data in the database, to assert the system's performance at those data volumes. It is regarded by some as a type of capacity testing, and is often deemed necessary because other types of tests normally use only small amounts of data.
We use this test to understand the impact of large data volumes on your application under test.
Larger data volumes in the database can:
Increase the query times of your SQL statements, which impacts response times,
Impact the amount of CPU or memory your database uses, meaning you can use this test to size this component,
Impact the size of responses to end-user and API requests (search requests being an example), which in turn impacts application CPU and memory.
A volume test looks not only to understand the performance of the SQL against an indicatively sized database, but also to understand how much CPU and memory are required across your full stack to support peak volumes of load and concurrency.
The results you gather from this type of non-functional testing are important in determining how your application needs to be sized in production to support future data volumes.
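As a minimal sketch of the database side of this, here is a Python example that seeds two very different data volumes into a hypothetical customers table and times the same un-indexed query at each size; SQLite is used purely for portability, and the table and row counts are illustrative assumptions:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

def time_query():
    # Time an un-indexed search: the kind of query volume testing exposes.
    start = time.perf_counter()
    conn.execute("SELECT * FROM customers WHERE name = ?", ("name-42",)).fetchall()
    return (time.perf_counter() - start) * 1000

# A small data volume, then an indicative production-sized volume.
for rows in (1_000, 1_000_000):
    conn.execute("DELETE FROM customers")
    conn.executemany(
        "INSERT INTO customers (name) VALUES (?)",
        ((f"name-{i}",) for i in range(rows)),
    )
    conn.commit()
    print(f"{rows:>9} rows: query took {time_query():.2f} ms")
```

The same principle applies to a real stack: run your load test against a database seeded to indicative production volumes, and compare query times and CPU/memory consumption against a lightly seeded baseline.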