Locust.io: controlling the requests-per-second parameter

I am trying to load test my API server using Locust.io on EC2 compute-optimized instances. It provides an easy-to-configure option for setting the wait time between consecutive requests and the number of concurrent users. In theory, rps = #_users / wait_time. However, during testing this rule breaks down beyond a fairly low #_users threshold (in my experiment, around 1200 users). The hatch_rate and #_of_slaves variables, including in a distributed test setup, have practically no effect on rps.
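A quick back-of-the-envelope sketch of the theoretical relationship above, using the experiment's numbers (450 ms average wait, 1200 users). These are illustrative calculations, not measurements:

```python
def theoretical_rps(num_users: int, wait_time_s: float) -> float:
    """Each simulated user issues one request every wait_time_s seconds,
    so the expected aggregate throughput is num_users / wait_time_s."""
    return num_users / wait_time_s

# At the ~1200-user threshold mentioned above, with a 450 ms average wait,
# the model predicts roughly 2667 requests/second.
print(theoretical_rps(1200, 0.450))
```

If the observed RPS flattens out while this predicted value keeps growing with #_users, something other than the wait time is the bottleneck.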

Experimental information

Testing was performed on a c3.4xlarge AWS EC2 compute node (AMI image) with 16 vCPUs, General Purpose SSD, and 30 GB of RAM. During the test, CPU utilization peaked at 60% (depending on the hatch rate, which controls how many concurrent processes are spawned), staying below 30% on average.

Locust.io

Setup: uses pyzmq, with each vCPU core set up as a slave. A single POST request setup with a request body of ~20 bytes and a response body of ~25 bytes. Request failure rate: <1%, with a mean response time of 6 ms.

Variables: time between consecutive requests set to 450 ms (min: 100 ms, max: 1000 ms), hatch rate at a comfortable 30 per second, and RPS measured while varying #_users.
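A minimal locustfile matching the setup described above could look like the sketch below. It uses the Locust 0.x API (`HttpLocust`, `min_wait`/`max_wait`), which the slave/hatch-rate terminology implies; the endpoint path and payload are placeholders, not the author's actual values:

```python
# Sketch only: assumes Locust 0.x; "/endpoint" and the payload are hypothetical.
from locust import HttpLocust, TaskSet, task

class ApiBehavior(TaskSet):
    @task
    def post_small_payload(self):
        # ~20-byte request body, as in the experiment
        self.client.post("/endpoint", json={"k": "0123456789"})

class ApiUser(HttpLocust):
    task_set = ApiBehavior
    min_wait = 100   # ms between consecutive requests (lower bound)
    max_wait = 1000  # ms (upper bound); waits are drawn between min and max
```

In the distributed setup this would be started with one master process and one slave process per vCPU core.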

Locust.io throughput graph

The RPS follows the predicted equation for up to about 1000 users. Increasing #_users beyond that yields diminishing returns, and a cap is reached at roughly 1200 users. #_users isn't even a truly independent variable here: changing the wait time affects RPS as well. However, moving the experiment to a 32-core instance (c3.8xlarge) or to 56 cores (in a distributed setup) doesn't affect the RPS at all.

So the real question is: how do I control the RPS? Is there something obvious I am missing here?


(One of the Locust authors here.)

First off: why do you want to control the RPS? One of the core ideas behind Locust is to describe user behavior and let that generate the load (requests, in your case). The question Locust is designed to answer is: how many concurrent users can my application support?

I know it's tempting to chase a certain RPS number, and sometimes I "cheat" as well by striving for an arbitrary RPS figure.

But to answer your question: are you sure your Locusts don't end up in a deadlock? That is, do they complete a certain number of requests and then go idle because they have no further tasks to perform? It's hard to tell what's happening without seeing the test code.

Running distributed is recommended for larger production setups, and most real-world load tests I've run have used multiple, smaller instances. But it shouldn't matter as long as the CPU isn't maxed out. Are you sure you aren't saturating a single CPU core? Not sure which OS you're running, but if it's Linux, what is your load value?
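A quick way to check the "load value" mentioned above on a Linux/Unix box is to compare the 1-minute load average against the core count. Note this only shows the aggregate; a per-core view (e.g. `top` with per-CPU display, or `mpstat`) is still needed to spot one saturated core:

```python
import os

def load_per_core() -> float:
    """1-minute load average divided by the number of CPU cores.
    A value near or above 1.0 means at least a full core's worth
    of runnable work is queued on average."""
    one_min_load, _, _ = os.getloadavg()
    return one_min_load / os.cpu_count()

ratio = load_per_core()
print(f"1-min load per core: {ratio:.2f}")
if ratio >= 1.0:
    print("Load is at or above core capacity; check per-core usage.")
```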


Source: https://habr.com/ru/post/1569478/
