Node.js HTTP server cannot handle large responses under high load

I tested this on a high-load server handling approximately 500-600 requests per second. After several hours of debugging, I narrowed the problem down to just a simple HTTP server.

I noticed that when the response body was larger, say 60k, I got this error:

    (node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
    Trace
        at Socket.EventEmitter.addListener (events.js:160:15)
        at Socket.Readable.on (_stream_readable.js:679:33)
        at Socket.EventEmitter.once (events.js:179:8)
        at TCP.onread (net.js:527:26)

And after that, CPU usage went crazy.

But with the same code, when the response was a 10k text, everything worked smoothly. Weird...

Has anyone come across this before?

This is the full script:

    var cluster = require('cluster'),
        numCPUs = require('os').cpus().length;

    if (cluster.isMaster) {
        for (var i = 0; i < numCPUs; i++)
            cluster.fork();

        cluster.on("exit", function(worker, code, signal) {
            cluster.fork();
        });
    } else {
        var http = require('http');

        var app = function(req, res) {
            res.writeHead(200, {'Content-Type': 'text/html', 'Access-Control-Allow-Origin': '*'});
            res.end( 60k_of_text___or___10k_of_text );
        };

        http.createServer(app).listen(80);
    }
1 answer

Right now, all strings are first converted to Buffer instances. This can put a heavy strain on the garbage collector, which has to clean up after every request. Run the application with --prof and check the resulting v8.log with tools/*-tick-processor, and you can see this for yourself.
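For example, a profiling run might look like the following (the script name server.js is just a placeholder; the tick-processor scripts live in the node/v8 source tree, with mac- and windows- variants as well):

    node --prof server.js
    # after the load test, a v8.log file appears in the working directory
    tools/linux-tick-processor v8.log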

Work is being done to fix this, so that strings are written out to memory directly and then cleaned up when the request is complete. It has been implemented for writing to the file system in f5e13ae, but not yet for the other cases (which is much harder than it sounds).

Converting strings to Buffers is also very expensive, especially for utf8 strings (the default). Where you can, definitely pre-cache the string as a Buffer and reuse it. Here is an example script:

    var http = require('http');

    var str = 'a';
    for (var i = 0; i < 60000; i++)
        str += 'a';

    //str = new Buffer(str, 'binary');

    http.createServer(function(req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain', 'Access-Control-Allow-Origin': '*'});
        res.end(str);
    }).listen(8011, '127.0.0.1');

And here are the results of running wrk 'http://127.0.0.1:8011/' against that server, first passing str as a string and then as a cached Buffer:

    Running 10s test @ http://127.0.0.1:8011/
      2 threads and 10 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     0.00us    0.00us   0.00us    -nan%
        Req/Sec     0.00      0.00     0.00      -nan%
      8625 requests in 10.00s, 495.01MB read
    Requests/sec:    862.44
    Transfer/sec:     49.50MB

    Running 10s test @ http://127.0.0.1:8011/
      2 threads and 10 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   624.07us  100.77us   4.45ms   99.17%
        Req/Sec     7.98k   729.82     9.00k    57.59%
      158711 requests in 10.00s, 8.90GB read
    Requests/sec:  15871.44
    Transfer/sec:      0.89GB

At the very least, if you know the string you are transmitting contains only ASCII characters, replace res.end(str) with res.end(new Buffer(str, 'binary')). This will use v8::String::WriteOneByte(), which is much faster. Here are the results with that change:

    Running 10s test @ http://127.0.0.1:8011/
      2 threads and 10 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   827.55us  540.57us   7.03ms   97.38%
        Req/Sec     6.06k     1.11k    8.00k    85.93%
      121425 requests in 10.00s, 6.81GB read
    Requests/sec:  12142.62
    Transfer/sec:    696.89MB
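To make the pre-caching advice concrete, here is a minimal sketch (not part of the original answer) of the same server with the response body converted to a Buffer once at startup, so the conversion cost is not paid on every request. The variable name cachedBody is illustrative; the port and payload size simply mirror the example script above.

    var http = require('http');

    // Build the ~60k payload once at startup.
    var str = new Array(60001).join('a');

    // Convert it to a Buffer a single time, so res.end() does not have to
    // re-encode the string on every request.
    var cachedBody = new Buffer(str, 'binary');

    http.createServer(function(req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain', 'Access-Control-Allow-Origin': '*'});
        res.end(cachedBody);
    }).listen(8011, '127.0.0.1');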