We've recently been working with the folks at Inflectra to develop an integration between Spira and OctoPerf. If you don't know Inflectra and Spira, they offer a very cool test management solution (among other things); you should check it out.
In this blog post we will highlight all the steps to follow in order to set up this integration. This way you'll be able to see the benefits of working with both tools in your organization.
We are going to discuss how we can test the Lightweight Directory Access Protocol (LDAP) using JMeter. The principles of LDAP can be quite complicated, as its origins lie in the X.500 specification, which documents a suite of protocols developed by the International Telecommunication Union in the 1980s.
It is likely that you have heard of LDAP as the directory protocol that underpins Active Directory (AD), Microsoft's directory service, which many organisations use to support user authentication and role profiles on company networks.
It is important to understand that LDAP is not exclusive to Microsoft; it is an open protocol that AD implements, which is what allows users to query AD and authenticate against it.
In order to demonstrate how we can test LDAP using JMeter, we are going to use an online test server provided by Forum Systems, which means you can follow this tutorial even if you don't have access to an LDAP server of your own. If you need to look at the solution, the JMeter project can be found here.
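Before we open JMeter, it can help to see what an LDAP Bind boils down to. Here is a minimal sketch in plain Java (JNDI), using the connection details Forum Systems publishes for the test server; the host, bind DN and password below are taken from their public documentation and may change, so treat them as assumptions rather than guarantees.

```java
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

// A simple-bind "smoke test" against the Forum Systems online LDAP server.
// The URL, bind DN and password are the publicly documented test credentials
// and may change; adjust them for your own server.
public class LdapBindCheck {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.forumsys.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=read-only-admin,dc=example,dc=com");
        env.put(Context.SECURITY_CREDENTIALS, "password");

        // Creating the context performs the Bind; closing it performs the Unbind.
        DirContext ctx = new InitialDirContext(env);
        System.out.println("Bind succeeded");
        ctx.close();
    }
}
```

Creating the InitialDirContext is what performs the Bind, and closing it is the Unbind; these map directly onto the JMeter test steps of the same names.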
A limitation of using this online LDAP test server is that we only have read-only access, meaning we can only test the following functions:
Bind
Unbind
Search
Compare
But JMeter also offers the ability to:
Rename
Add
Delete
Modify
the LDAP entries. We will be unable to demonstrate these using this read-only service, but we will nonetheless provide some examples of these test types.
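To make the read-only operations concrete, here is a small plain-Java (JNDI) sketch of a Search and a Compare against the same test server. The entry and attribute used (uid=gauss, sn=Gauss) are assumptions based on the sample data the server is documented to hold; substitute values from your own directory. In JMeter itself you would configure the equivalent requests in the LDAP Extended Request sampler rather than write code.

```java
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapSearchAndCompare {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.forumsys.com:389");
        env.put(Context.SECURITY_PRINCIPAL, "cn=read-only-admin,dc=example,dc=com");
        env.put(Context.SECURITY_CREDENTIALS, "password");
        DirContext ctx = new InitialDirContext(env);

        // Search: look for an entry anywhere under the base DN.
        SearchControls search = new SearchControls();
        search.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration<SearchResult> results =
                ctx.search("dc=example,dc=com", "(uid=gauss)", search);
        while (results.hasMore()) {
            System.out.println("Found: " + results.next().getNameInNamespace());
        }

        // Compare: in JNDI a compare is expressed as an OBJECT_SCOPE search on a
        // specific entry, filtering on the attribute value and returning no
        // attributes; a non-empty result means the value matched.
        SearchControls compare = new SearchControls();
        compare.setSearchScope(SearchControls.OBJECT_SCOPE);
        compare.setReturningAttributes(new String[0]);
        NamingEnumeration<SearchResult> match =
                ctx.search("uid=gauss,dc=example,dc=com", "(sn=Gauss)", compare);
        System.out.println("Compare matched: " + match.hasMore());

        ctx.close();
    }
}
```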
In this post we are going to look at performance testing on large-scale programmes.
A few of the posts we write define techniques and approaches based on a single application under test, but sometimes you are faced with the prospect of performance testing:
A new solution that replaces several legacy applications,
A service migration from one cloud provider to another, or from one data center to another,
An infrastructure update that covers multiple applications or services,
A new solution that complements and integrates with existing software.
Now, especially in the case of a migration of services, performance is key: you cannot afford a degradation in performance, as the business users will already be accustomed to the software and how it performs.
You can look to make it perform better, but it is unlikely they will tolerate poorer performance just because you have migrated from one platform to another.
Equally, new solutions that replace legacy applications will (rightly or not) be expected to perform better than their predecessors, which is a challenge, as your new solution will undoubtedly have a different workflow and approach to delivering what the end-users want.
These types of large-scale programmes can, on the surface, seem complex from a Quality Assurance perspective, so we have put together this guide to help you understand some of the techniques you can use to ensure that the performance testing aspect is manageable rather than overwhelming. The sections below set out things to consider when performance testing large-scale programmes of work.
When using a testing tool, it is only logical to trust its results. And the more well-known the tool is, the more trust we put in it. Furthermore, how could we know it is wrong? After all, who is in a position to judge the judge?
This phenomenon is particularly true in the load testing community, since the field is still something of a niche within the testing world. Finding deep-dive studies about the actual technical aspects of load testing is difficult.
Those observations led to the creation of this study. In this article, I will compare the results obtained for the exact same load test using 4 different open-source load testing tools: JMeter, Locust, Gatling and K6.
These tools were chosen because they are among the most used and/or discussed in the community, but in the future the goal will be to add others, including ones that are not open-source.
The goal of this comparison is not to point fingers or decide which tool is right or wrong, but to try to understand what we measure within each tool, and what it means for our performance tests.
Swagger, in conjunction with OpenAPI, is a way for REST APIs to be built, documented and consumed. An API is defined in either YAML or JSON. OpenAPI and Swagger are both open source, and their use is commonplace among development teams; this can make the life of a performance tester a lot easier, as it allows us to build JMeter tests directly from the definition of the endpoints.
The objective of this post is to look at how we can produce JMeter tests directly from a Swagger definition. If you are unfamiliar with Swagger, it is worth spending some time reading the online overview, as it will give you a good understanding of how the specification works.
Now, clearly, in order for you to leverage the benefits of building performance tests from Swagger definitions, your development teams will need to use it to describe the API contract for the services they produce; but for the purposes of this post we will use a test definition called the Swagger Petstore.
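As a sketch of what that workflow can look like (one possible route, not the only one): the Swagger Codegen CLI includes a JMeter generator, so a command along these lines should turn the Petstore definition into a .jmx test plan. The jar name and output directory here are assumptions for illustration; check them against the codegen version you have installed.

```
# Generate a JMeter test plan from the public Petstore definition
# (jar name and output directory are placeholders for your own setup)
java -jar swagger-codegen-cli.jar generate \
  -i https://petstore.swagger.io/v2/swagger.json \
  -l jmeter \
  -o petstore-jmeter
```

The generated plan is only a starting point: you will still need to add realistic data, think times and assertions before it becomes a meaningful performance test.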