
2022

OctoPerf v12.8 - Datadog, Json Path and sub samples

It's been a while since the last update post in July 2021. It's not that we haven't updated OctoPerf since then, but the additions we've made are not easy to share in a blog post. Allow me to give an example.

JMeter import

What has kept us busiest over the years is finding the perfect way to import a JMeter project into our data model. We need to do this in order to:

  • Allow you to manipulate your virtual users in our interface even when they come from JMeter,
  • Execute each Thread Group/Virtual User in a separate Docker container to make our runtime resilient and scalable (this lets us predict resource consumption better and allocate machines accordingly),
  • Offer a configurable report with filters instead of a static HTML page.

But we must make sure to maintain the same behavior for all functionalities, while avoiding negative impacts on our non-JMeter users (like added UI complexity or regressions). That seems simple enough at first glance, but it gets harder when you consider that JMeter lets you place a configuration element almost anywhere in the test plan, each placement giving it a different scope. For instance, you can have header managers configured this way:

[Screenshot: header managers placed at different scopes of a JMeter test plan]

Once imported into OctoPerf, each Thread Group becomes a distinct Virtual User, so we need to consider carefully what to do with these headers and find a way to preserve the same behavior in OctoPerf.
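To make the scoping issue concrete, here is a hypothetical test plan layout (made up for illustration, not taken from the screenshot). In JMeter, a configuration element applies to every sampler at or below the level where it is placed:

    Test Plan
      HTTP Header Manager      <- applies to every sampler in the plan
      Thread Group A
        HTTP Header Manager    <- applies to samplers in Thread Group A only
        HTTP Request /login
      Thread Group B
        HTTP Request /search
          HTTP Header Manager  <- applies to the /search sampler only

Once Thread Group A and Thread Group B become two separate Virtual Users, the plan-level header manager no longer has a shared parent, so its headers must be replicated into both users to keep the behavior identical.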

Chaotic Performance Tests with JMeter

Building performance tests that conform to a very specific level of load and concurrency is a standard approach to performance testing.

You determine your peak levels of load and concurrency, and you build a test that meets them.

You build soak tests and scalability tests that conform to pre-determined levels of load and concurrency, and you execute these alongside the other scenarios you build to meet your performance requirements.

This is the correct approach, it conforms to most organisations' approach to performance testing, and it is something you should always do. In addition, though, it is a good idea to build performance tests that change on each execution and put your application under a random, ever-changing load profile. It is not uncommon for application usage to change when new features are added or when migrating to a new solution, so randomising your performance testing can provide useful information on how your application will react to conditions that do not conform to your standard approach.

This blog post will look at how you can make your JMeter tests more chaotic using a variety of samplers, timers and a BeanShell server.
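To give a flavour of the technique, here is a minimal, hypothetical JSR223/BeanShell fragment (the property name maxThink is made up for illustration) that picks a new think time on every iteration; since it reads a JMeter property, the value could even be changed mid-test through a BeanShell server:

    import java.util.Random;

    // Upper bound comes from a JMeter property so it can be updated while
    // the test runs (e.g. through a BeanShell server). 5000 ms is the default.
    int maxThink = Integer.parseInt(props.getProperty("maxThink", "5000"));

    // Choose a fresh delay of 500 ms plus up to maxThink ms.
    int delay = new Random().nextInt(maxThink) + 500;
    vars.put("thinkTime", String.valueOf(delay));

A timer referencing ${thinkTime} then applies a different pause on each loop, so no two executions produce the same load profile.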

Performance Testing GraphQL with JMeter

In this blog post we are going to look at the GraphQL HTTP Request sampler, and at how GraphQL requests can also be made using a plain HTTP Request sampler in case you are for some reason restricted to an earlier version of JMeter (the GraphQL sampler was only introduced in JMeter 5.4).

We will also look at some of the principles of GraphQL.

If your application under test includes a GraphQL service, you are going to have to understand how to test it and some of the performance considerations that surround performance testing of GraphQL.
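The reason a plain HTTP Request sampler works is that a GraphQL call is ultimately just an HTTP POST carrying a JSON body with a query field (plus optional variables and operationName). Here is a minimal Java sketch of that request shape; the endpoint and query are invented for illustration:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class GraphQLPost {
        public static void main(String[] args) throws Exception {
            // A GraphQL request body is plain JSON with a "query" field.
            String body = "{\"query\":\"query { user(id: 1) { name email } }\"}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/graphql")) // hypothetical endpoint
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }

In an HTTP Request sampler you reproduce exactly this: the POST method, a Content-Type: application/json header, and the JSON document as the request body.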

Pearson - Case study

Pearson, founded in the 19th century, is one of the world's leading providers of education services.

Francisco Muniz is the Performance Architect for Pearson, responsible for Performance Alignment across Pearson's Virtual Learning. This position entails working with many different parts of the organization, such as Architecture, Development, and QA.

As such, Francisco was leading and overseeing the important project of switching from LoadRunner to OctoPerf.

The solution in place, LoadRunner, was a legacy one. This meant re-thinking the strategy in many areas, including performance testing.
