I have a simple TCP server built on the select() call. Everything works fine and the performance is quite acceptable, but when benchmarking with ab (ApacheBench), the "longest request" time is insanely high compared to the mean:
I run: ab -n 5000 -c 20 http://localhost:8000/
Output snippet:
Requests per second: 4262.49 [#/sec] (mean)
Time per request: 4.692 [ms] (mean)
Time per request: 0.235 [ms] (mean, across all concurrent requests)
Percentage of the requests served within a certain time (ms)
50% 2
66% 2
75% 2
80% 2
90% 2
95% 3
98% 3
99% 4
100% 203 (longest request)
And the same benchmark against Apache:
Requests per second: 5452.66 [#/sec] (mean)
Time per request: 1.834 [ms] (mean)
Time per request: 0.183 [ms] (mean, across all concurrent requests)
Percentage of the requests served within a certain time (ms)
50% 1
66% 2
75% 2
80% 2
90% 3
95% 3
98% 4
99% 4
100% 8 (longest request)
For reference: I use stream_select(), and the sockets are set to non-blocking.
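For context, the main loop looks roughly like this (a minimal sketch, not my actual code; the port, buffer size, and response body are placeholders):

```php
<?php
// Minimal non-blocking stream_select() HTTP-style server (illustrative sketch).
$server = stream_socket_server('tcp://127.0.0.1:8000', $errno, $errstr);
stream_set_blocking($server, false);

$clients = [];
while (true) {
    $read   = $clients;
    $read[] = $server;
    $write  = $except = null;

    // Block until at least one socket is readable (timeout = null: wait forever).
    if (stream_select($read, $write, $except, null) === false) {
        break;
    }

    foreach ($read as $sock) {
        if ($sock === $server) {
            // New connection: accept it and make it non-blocking.
            if (($conn = stream_socket_accept($server, 0)) !== false) {
                stream_set_blocking($conn, false);
                $clients[(int)$conn] = $conn;
            }
            continue;
        }
        $data = fread($sock, 8192);
        if ($data === '' || $data === false) {
            // Client closed the connection.
            unset($clients[(int)$sock]);
            fclose($sock);
            continue;
        }
        // Send a minimal response and close (keep-alive omitted for brevity).
        fwrite($sock, "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nOK");
        unset($clients[(int)$sock]);
        fclose($sock);
    }
}
```

Note that a loop like this accepts at most one new connection per select() wake-up, so under concurrency further pending connections wait in the listen backlog until the next iteration.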
Is this a general effect of using the select() call?
Are there any performance considerations I should be aware of?
Update:
With concurrency <= 6 the effect does not occur (2x 3 cores?); starting at concurrency 7, and up to 20, the longest request stays around 200 ms.
Update 2:
The client/server is a PHP script.