Simple TCP server using select(): why is the "longest request" so high?

I have a simple TCP server built around select(). Everything works and the performance is quite acceptable, but when benchmarking with ab (ApacheBench) the "longest request" is insanely high compared to the mean time:

I use: ab -n 5000 -c 20 http://localhost:8000/

snippet:

Requests per second:    4262.49 [#/sec] (mean)
Time per request:       4.692 [ms] (mean)
Time per request:       0.235 [ms] (mean, across all concurrent requests)

Percentage of the requests served within a certain time (ms)
  50%      2
  66%      2
  75%      2
  80%      2
  90%      2
  95%      3
  98%      3
  99%      4
 100%    203 (longest request)

And the same benchmark against Apache:

Requests per second:    5452.66 [#/sec] (mean)
Time per request:       1.834 [ms] (mean)
Time per request:       0.183 [ms] (mean, across all concurrent requests)

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      2
  75%      2
  80%      2
  90%      3
  95%      3
  98%      4
  99%      4
 100%      8 (longest request)

For reference: I use stream_select(), and the sockets are non-blocking.
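
To illustrate, a minimal sketch of the kind of loop involved (not the actual code; port 8000 is taken from the ab command above and the canned HTTP response is just a placeholder):

<?php
// Minimal non-blocking stream_select() HTTP responder (sketch only).
$server = stream_socket_server('tcp://0.0.0.0:8000', $errno, $errstr);
if ($server === false) {
    die("listen failed: $errstr ($errno)\n");
}
stream_set_blocking($server, false);

$clients  = [];
$response = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok";

while (true) {
    $read   = array_merge([$server], $clients);
    $write  = null;
    $except = null;

    // Block until at least one socket becomes readable.
    if (stream_select($read, $write, $except, null) === false) {
        break;
    }

    foreach ($read as $sock) {
        if ($sock === $server) {
            // The listening socket is readable, so accepting should not block.
            $conn = stream_socket_accept($server, 0);
            if ($conn !== false) {
                stream_set_blocking($conn, false);
                $clients[(int) $conn] = $conn;
            }
        } else {
            // Assume the whole request arrives in one read; good enough for a sketch.
            $data = fread($sock, 8192);
            if ($data === false || $data === '') {
                unset($clients[(int) $sock]);
                fclose($sock);
                continue;
            }
            // A real server would also have to handle partial writes here.
            fwrite($sock, $response);
            unset($clients[(int) $sock]);
            fclose($sock);
        }
    }
}

Each connection is answered and closed within a single pass of the loop, so one slow client should not hold up the others for long.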

Is this a general effect of using the select() call?
Are there any performance considerations I should be worried about?

Update:

With concurrency <= 6 there is no "long request" at all (exactly 2x3?); from 6 upward (I tried 7 through 20) the ~200 ms outlier shows up.

Update2:

The server/client is a PHP script.


Where do the 200 ms come from? Are you passing NULL as the select() timeout? Which fds are you passing to select()? Hard to say without the code...

Even though it's localhost, it could still be a TCP retransmission (200 ms is the minimum TCP retransmission timeout on Linux).
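
If it really is a retransmission (only a hypothesis at this point), one server-side knob that sometimes matters under bursts of concurrent connects is the listen backlog; with stream_socket_server() it can be raised through a socket context option, for example:

<?php
// Illustration only: raise the listen backlog so short bursts of incoming
// connections are queued by the kernel rather than dropped.
// 128 is an arbitrary example value, not a recommendation.
$ctx = stream_context_create([
    'socket' => ['backlog' => 128],
]);
$server = stream_socket_server(
    'tcp://0.0.0.0:8000',
    $errno,
    $errstr,
    STREAM_SERVER_BIND | STREAM_SERVER_LISTEN,
    $ctx
);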


Capture the TCP/IP traffic with Wireshark; then you can see what is actually happening on the wire (retransmissions and so on).


Since 99% of your requests complete within just 4 ms, this tends to imply a one-off cost, such as a DNS lookup or a large chunk of your code being paged in from disk.
