Scaling socket.io with HAProxy

So far I have had a single node.js application running socket.io. As the number of users grew, it approached 100% CPU, so I decided to split the users across several node.js processes. I restructured my application logic so that users can be distributed across different subdomains. I also moved the session out of cookies into a token passed in the URL, so cookies no longer matter.
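A minimal sketch of the token-in-URL idea (the names here are illustrative, not the actual application code):

```javascript
// Illustrative only: resolve the session from a token carried in the
// URL query string instead of a cookie, so any node.js instance behind
// the proxy can authenticate the request without shared cookie state.
function tokenFromUrl(rawUrl) {
  // e.g. "/app?token=abc123" -> "abc123"
  return new URL(rawUrl, "http://localhost").searchParams.get("token");
}

// In an Express handler this might be used roughly as:
//   app.get("/app", function (req, res) {
//     var session = sessions[tokenFromUrl(req.url)];
//     ...
//   });
```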

I would like to use at least 4 of the 8 cores on my machine, so I want to run several node.js processes, each serving the application on its own subdomain. To make all the node.js processes reachable on port 80, I decided to put HAProxy in front of them. The setup is as follows:

domain.com      -> haproxy -> node on 127.0.0.1:5000
sub1.domain.com -> haproxy -> node on 127.0.0.1:5001
sub2.domain.com -> haproxy -> node on 127.0.0.1:5002
sub3.domain.com -> haproxy -> node on 127.0.0.1:5003

Now everything works, but the part of the application that does not use socket.io is very slow. It is written with Express.js, and it is fast when I open the page directly (i.e. not through HAProxy). Also, connecting to socket.io is fast with the XHR transport, but the WebSocket transport takes a long time to establish a connection. Once the connection is established, it works well and fast.

I have never used HAProxy before, so I probably misconfigured it. Here is my HAProxy configuration:

global
    maxconn 50000
    daemon

defaults
    mode http
    retries 1
    contimeout 8000
    clitimeout 120000
    srvtimeout 120000

frontend http-in
    bind *:80
    acl is_l1 hdr_end(host) -i sub1.domain.com
    acl is_l2 hdr_end(host) -i sub2.domain.com
    acl is_l3 hdr_end(host) -i sub3.domain.com
    acl is_l0 hdr_end(host) -i domain.com
    use_backend b1 if is_l1
    use_backend b2 if is_l2
    use_backend b3 if is_l3
    use_backend b0 if is_l0
    default_backend b0

backend b0
    balance source
    option forwardfor except 127.0.0.1 # stunnel already adds the header
    server s1 127.0.0.1:5000

backend b1
    balance source
    option forwardfor except 127.0.0.1 # stunnel already adds the header
    server s2 127.0.0.1:5001

backend b2
    balance source
    option forwardfor except 127.0.0.1 # stunnel already adds the header
    server s2 127.0.0.1:5002

backend b3
    balance source
    option forwardfor except 127.0.0.1 # stunnel already adds the header
    server s2 127.0.0.1:5003
1 answer

I figured it out. I could not find this in the docs, but the global maxconn setting does NOT apply to frontends. A frontend defaults to 2,000 concurrent connections, and everything beyond that was queued. Since I have long-lived socket.io connections, this caused problems.

The solution is to explicitly set maxconn in the frontend section.
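Applied to the configuration above, the fix is a one-line addition (50000 here just mirrors the global value; any limit large enough for the expected number of long-lived connections works):

```haproxy
frontend http-in
    bind *:80
    # Without this line the frontend falls back to the default of 2000
    # concurrent connections; excess connections are queued, which stalls
    # long-lived socket.io connections.
    maxconn 50000
```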


Source: https://habr.com/ru/post/1434314/