OctoPerf 11.7 focuses on more realistic load control, deeper monitoring, and stronger observability integrations.
New pacing options make it easier to model user behavior beyond simple concurrency.
Load agent monitoring now exposes JVM-level metrics to better detect bottlenecks.
Dynatrace integration has been refined for clearer correlation between load tests and APM data.
Reporting and runtime controls are improved to simplify comparisons and execution tuning.
OctoPerf 11.6 focuses on usability and analysis quality with automatic SLAs applied to all default reports.
JMeter is updated to 5.2.1 with safer cache behavior enabled by default.
Correlation rules can now be reused across projects and workspaces, simplifying script maintenance.
Enterprise administration gains stronger workspace visibility and control.
Reporting improves with clearer SLA indicators and a test configuration summary for easier analysis.
OctoPerf 11 introduces true modular design to build reusable test components and simplify script maintenance.
Tags make it easier to organize projects, virtual users, runtimes, and results across iterations.
New percentage-based metrics improve visibility on error and success rates during test execution.
Administration is strengthened with better user management and shared providers across workspaces.
These changes lay the groundwork for more flexible design and deeper analysis in future releases.
OctoPerf 10.6 strengthens platform integrations with native Microsoft Azure support for on-demand load generators.
GitLab CI integration enables automated test execution through standard CI pipelines.
OAuth2 and OpenID Connect expand enterprise authentication options alongside existing LDAP support.
Error reporting is clearer during design, runtime, and analysis, making troubleshooting faster.
Additional reliability fixes improve test startup stability and monitoring alert visibility.
This is Chapter 5, the final chapter of our journey of non-functional testing with Otto Perf, in which Otto has been performance testing a new application for OctoCar.
In Chapter 4 Otto was looking at ways to check the impact of his performance tests on the infrastructure that the application was running on.
He found out that the application had been instrumented using Dynatrace and he could use this to analyse all aspects of the application architecture.
He also discovered that, by adding a custom header to his JMeter test scripts, he could easily identify the transactions generated as part of his performance testing (a sketch of this tagging approach follows below).
The ability to create custom metrics based on his tests and to produce dashboards meant that Otto was in a very good place when it came to monitoring.
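As an aside on how such tagging works in practice: Dynatrace can recognise load-test traffic through a custom HTTP header (conventionally `x-dynatrace-test`) whose value is a set of key/value pairs describing the test. The minimal Java sketch below illustrates the idea; the key names follow Dynatrace's documented tagging convention, but the test names, URL, and virtual-user id are placeholders rather than Otto's real configuration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TaggedLoadTestRequest {
    public static void main(String[] args) throws Exception {
        // Build the tag value as semicolon-separated key/value pairs:
        // LTN = load test name, LSN = load script name,
        // TSN = test step name, VU = virtual user id.
        // All values here are placeholders for illustration.
        String tag = String.join(";",
                "LTN=octocar_peak_hour",
                "LSN=checkout_journey",
                "TSN=submit_order",
                "VU=42");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://octocar.example.com/checkout")) // placeholder URL
                .header("x-dynatrace-test", tag) // lets the APM separate load-test traffic
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
    }
}
```

In JMeter itself the same effect is typically achieved with an HTTP Header Manager at test-plan level, so every sampler carries the tag without editing individual requests.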
The introduction of application monitoring and a subsequent increase in server capacity uncovered an issue with Otto’s reporting process.
He had missed the fact that the application was regressing over time; because the transactions being measured were still within their agreed non-functional requirement tolerances, the regression went unnoticed.
He did some work to ensure that he was tracking trends across his results so that transaction regression would be picked up in the future. This trend analysis also made it possible to run tests at different times of day and under alternative load profiles and to compare the differences.
In this chapter we will see development finish, and Otto will discover that the end of a programme is not the end of performance testing and learn about the benefits that creating a regression test can bring.
Otto will also find uses for his performance tests beyond their primary purpose of generating load and concurrency, and will start to understand that the assets created when building performance tests can benefit many other IT and non-IT activities.