Node.js http with Redis: only 6000 req/s

The node_redis benchmark test shows that INCR can do more than 100,000 op/s:

    $ node multi_bench.js
    Client count: 5, node version: 0.10.15, server version: 2.6.4, parser: hiredis
    INCR,     1/5 min/max/avg/p95:    0/   2/   0.06/   1.00   1233ms total, 16220.60 ops/sec
    INCR,    50/5 min/max/avg/p95:    0/   4/   1.61/   3.00    648ms total, 30864.20 ops/sec
    INCR,   200/5 min/max/avg/p95:    0/  14/   5.28/   9.00    529ms total, 37807.18 ops/sec
    INCR, 20000/5 min/max/avg/p95:   42/ 508/ 302.22/ 467.00    519ms total, 38535.65 ops/sec

Then I added Redis to a Node.js HTTP server:

    var http = require("http"),
        server,
        redis_client = require("redis").createClient();

    server = http.createServer(function (request, response) {
        response.writeHead(200, {
            "Content-Type": "text/plain"
        });

        redis_client.incr("requests", function (err, reply) {
            response.write(reply + '\n');
            response.end();
        });
    }).listen(6666);

    server.on('error', function (err) {
        console.log(err);
        process.exit(1);
    });

Testing with the ab command, it reaches only 6000 req/s:

    $ ab -n 10000 -c 100 localhost:6666/
    This is ApacheBench, Version 2.3 <$Revision: 655654 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/

    Benchmarking localhost (be patient)
    Completed 1000 requests
    Completed 2000 requests
    Completed 3000 requests
    Completed 4000 requests
    Completed 5000 requests
    Completed 6000 requests
    Completed 7000 requests
    Completed 8000 requests
    Completed 9000 requests
    Completed 10000 requests
    Finished 10000 requests

    Server Software:
    Server Hostname:        localhost
    Server Port:            6666

    Document Path:          /
    Document Length:        7 bytes

    Concurrency Level:      100
    Time taken for tests:   1.667 seconds
    Complete requests:      10000
    Failed requests:        0
    Write errors:           0
    Total transferred:      1080000 bytes
    HTML transferred:       70000 bytes
    Requests per second:    6000.38 [#/sec] (mean)
    Time per request:       16.666 [ms] (mean)
    Time per request:       0.167 [ms] (mean, across all concurrent requests)
    Transfer rate:          632.85 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.3      0       2
    Processing:    12   16   3.2     15      37
    Waiting:       12   16   3.1     15      37
    Total:         13   17   3.2     16      37

    Percentage of the requests served within a certain time (ms)
      50%     16
      66%     16
      75%     16
      80%     17
      90%     20
      95%     23
      98%     28
      99%     34
     100%     37 (longest request)

For comparison, a plain hello world server reaches 7K req/s:

 Requests per second: 7201.18 [#/sec] (mean) 

How can I profile this and find out why adding Redis to the HTTP server loses so much performance?

1 answer

I think you misinterpreted the result of the multi_bench test.

First, this benchmark spreads the load over 5 connections, while you have only one in your node.js program. More connections mean more communication buffers (allocated per socket) and better performance.

Then, although the Redis server can sustain 100K op/s (given that you open several connections and/or use pipelining), node.js and node_redis cannot reach that level. The result of your multi_bench run shows that when pipelining is not used, only 16K op/s is achieved:

    Client count: 5, node version: 0.10.15, server version: 2.6.4, parser: hiredis
    INCR,     1/5 min/max/avg/p95:    0/   2/   0.06/   1.00   1233ms total, 16220.60 ops/sec

This result means that without pipelining, and with 5 concurrent connections, node_redis can process 16K op/s in total. Note also that measuring a throughput of 16K op/s while sending only 20,000 operations (the default for multi_bench) is not very accurate. You should increase num_requests for better accuracy.

The result of your second test is not so surprising: you add an HTTP layer (which is more expensive to parse than the Redis protocol itself), use only 1 connection to Redis, have ab open 100 concurrent connections to node.js, and in the end get 6K op/s, i.e. a throughput loss of only 1.2K op/s compared to a hello world HTTP server. What did you expect?

You can try to squeeze out a bit more performance by using the node.js clustering capabilities, as described in this answer.


Source: https://habr.com/ru/post/1494408/


All Articles