Simulating parallel requests to multiple API endpoints with JMeter

The objective of this performance test is to find out how many concurrent requests the server can handle without dropping below its target throughput, or rps (requests per second).

Nabila Siregar
4 min read · Apr 21, 2022

I was given a task to simulate a scenario where a number of users send requests to several API endpoints concurrently. Let's say I have endpoint Clocks with a 50 rps target and endpoint Schedules with an 80 rps target. Both endpoints can reach a requests-per-second (rps) value much higher than the given target, but I need to find out the maximum number of concurrent requests the server can handle without dropping below each endpoint's rps target, and for how long.

The most intuitive thing I could think of was using a Constant Throughput Timer to control the number of requests sent, but it did not do what I was looking for — I don't want to limit the number of requests to achieve the desired rps, because for my case it does not matter if the throughput value is higher than the target rps.
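For reference, this is roughly what a Constant Throughput Timer looks like in the saved .jmx file (a sketch, assuming the 50 rps target for Clocks). Note that it only pauses threads to cap throughput at the target; it cannot push throughput above it, which is why it doesn't fit this scenario:

```xml
<ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer" testname="Constant Throughput Timer" enabled="true">
  <!-- Target is expressed in samples per MINUTE: 50 rps x 60 = 3000 -->
  <doubleProp>
    <name>throughput</name>
    <value>3000.0</value>
    <savedValue>0.0</savedValue>
  </doubleProp>
  <!-- 0 = calculate delay based on this thread only -->
  <intProp name="calcMode">0</intProp>
</ConstantThroughputTimer>
```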

I was looking for another solution and came across a plugin from BlazeMeter called Parallel Controller, but then again, this plugin did not do what I was looking for either. For debugging purposes, I'm only using 10 threads with 0 ramp-up, an infinite loop, and a 30 s duration. The Parallel Controller does help send HTTP requests to both endpoints concurrently, as seen below.

20 requests are sent simultaneously in the first loop, another 20 requests in the second loop, and so on until the 30 seconds are up. But when I'm using the Parallel Controller, the number of active threads never exceeds 2 virtual users, as seen in the Active Threads Over Time listener below.
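The debugging setup described above — 10 threads, 0 ramp-up, infinite loop, 30 s duration — corresponds to roughly this thread group fragment in the .jmx file (a sketch; the test name is illustrative):

```xml
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Clocks Thread Group" enabled="true">
  <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
    <boolProp name="LoopController.continue_forever">false</boolProp>
    <!-- -1 = loop forever; the scheduler's duration ends the test -->
    <intProp name="LoopController.loops">-1</intProp>
  </elementProp>
  <stringProp name="ThreadGroup.num_threads">10</stringProp>
  <stringProp name="ThreadGroup.ramp_time">0</stringProp>
  <boolProp name="ThreadGroup.scheduler">true</boolProp>
  <stringProp name="ThreadGroup.duration">30</stringProp>
</ThreadGroup>
```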

In the summary report, the throughput value for endpoint Clocks is also similar to that of endpoint Schedules, even though endpoint Schedules has a shorter response time than endpoint Clocks. This is because the Parallel Controller will not start another loop until all the parallel threads have completed their work. This means the controller holds off sending any new request to endpoint Schedules even when its previous requests have finished, resulting in a lower rps than expected for endpoint Schedules.

The solution I came up with for my case was using a Synchronizing Timer placed inside each HTTP sampler, with each HTTP sampler under a different thread group.

The Synchronizing Timer holds the threads until all 20 threads (10 threads from endpoint Clocks + 10 threads from endpoint Schedules) have arrived, and then releases them all at once. All created threads are also held active during those 30 seconds.
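In the .jmx file, such a Synchronizing Timer would look roughly like this (a sketch; groupSize 20 matches the 10 + 10 threads in this setup, and a timeout of 0 means threads wait indefinitely until the group is full — the group size has to match the number of threads that can actually reach the timer, or it will block):

```xml
<SyncTimer guiclass="TestBeanGUI" testclass="SyncTimer" testname="Synchronizing Timer" enabled="true">
  <!-- Release only when 20 threads are waiting at the barrier -->
  <intProp name="groupSize">20</intProp>
  <!-- 0 = no timeout; wait until the group is complete -->
  <longProp name="timeoutInMs">0</longProp>
</SyncTimer>
```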

Placing the HTTP samplers under different thread groups is what sends the API requests in parallel, because by default JMeter runs thread groups simultaneously when "Run Thread Groups consecutively" under the test plan is unchecked.
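That checkbox maps to a single flag on the Test Plan element in the .jmx file; leaving it false lets both thread groups start together (a minimal sketch of the relevant property):

```xml
<TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Test Plan" enabled="true">
  <!-- false = checkbox unchecked: run all thread groups in parallel -->
  <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
</TestPlan>
```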

For the first loop, parallel requests from both thread groups are released, as seen in the View Results table below.

Parallel requests sent to endpoint Schedules finished (samples 1–10) before those sent to endpoint Clocks did (samples 11–19 and 21), and the Schedules thread group started sending a new request (sample 20) to the server. This is what I was looking for, since I don't want to wait for all threads to complete their requests before starting another loop.

In the summary report, we can also see that the Schedules thread group creates more samples than the Clocks thread group, hence producing a higher throughput value.

To conduct the actual test, keep increasing the number of threads and the test duration until you find the highest load at which every endpoint still meets its target rps with 0% error.
