Redis and Node.js and Socket.io Questions

I have just been studying redis and node.js. There are two questions I have for which I could not find a satisfactory answer.

My first question is about reusing redis clients in node.js. I found this question and its answer: How to reuse a redis connection in socket.io? but it did not satisfy me.

Now, if I create a redis client inside the connection event, one will be created for each connection. So, with 20k concurrent users, there will be 20,000 redis clients.

If I put it outside the connection event, it will be created only once.

The answer there says to create three clients outside the connection event, one for each function.

However, from what I know of MySQL, when writing an application that spawns child processes and runs them in parallel, you need to create your MySQL client inside the function in which you create the child instances. If you create it outside, MySQL will give the error "MySQL server has gone away" because the child processes all try to use the same connection. It must be created for each child process separately.

That way, even if you create three different redis clients, one for each function, if you have 30k concurrent users sending 2k messages at the same time, you should run into the same problem, right? Thus, each "user" must have its own redis client in the connection event. Am I right? If not, how do node.js and redis handle concurrent queries differently from MySQL? If redis has its own mechanism and spawns something like child processes inside the client, why do we need to create three different redis clients? One should be enough.

Hope this question was clear.

- UPDATE -

I found an answer at the following link: http://howtonode.org/control-flow. No need to answer that part, but my first question is still valid.

- UPDATE -

My second question. I am also not very good at JS and node.js yet. From what I know, if you need to wait for an event, you have to nest the second function inside the first (I do not know the terminology yet). Let me give an example:

socket.on('startGame', function () {
    getUser();
    socket.get('game', function (gameErr, gameId) {
        socket.get('channel', function (channelErr, channel) {
            console.log(user);
            client.get('games:' + channel + '::' + gameId + ':owner', function (err, owner) { // games:channel.32:game.14
                if (owner === user.uid) {
                    // do something
                }
            });
        });
    });
});

So, if I understand it correctly, I need to nest each function inside the previous one whenever I need to wait for an I/O response. Otherwise, node.js will not block: it starts the first function's I/O and moves on, so the second function may run before the result has arrived if it takes time to fetch. So if you get a result from redis, for example, and you use that result in a second function, you must nest the second function inside the redis get callback. Otherwise, the second function will run without the result.

So, in this case, if I need to run 7 different functions, and an 8th function needs the results of all of them, do I have to write them nested this way, all the way down? Or am I missing something?
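For context on the pattern this question is asking about: independent calls do not have to be nested; they can be started side by side and joined with a counter, with the final function running once every result is in. This is an illustrative sketch, not from the original post: the helper name `parallel` and the stub tasks are made up, and the stubs call back synchronously so the sketch runs anywhere, whereas real redis calls would call back asynchronously (the counter works the same either way).

```javascript
// Run independent tasks "in parallel" and call `done` once all results are in.
function parallel(tasks, done) {
  var results = new Array(tasks.length);
  var remaining = tasks.length;
  tasks.forEach(function (task, i) {
    task(function (err, value) {
      results[i] = value;                        // keep results in task order
      remaining -= 1;
      if (remaining === 0) done(null, results);  // the "8th function" runs here
    });
  });
}

// Stub tasks standing in for redis lookups; they call back synchronously
// here only so the example is self-contained.
parallel([
  function (cb) { cb(null, 'owner'); },
  function (cb) { cb(null, 'channel.32'); },
  function (cb) { cb(null, 'game.14'); }
], function (err, results) {
  console.log(results); // all three results, in order
});
```

With real asynchronous calls the tasks overlap in time instead of running one after another, which is exactly what nesting prevents.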

Hope this was clear too.

Many thanks,

3 answers

Thus, each "user" must have its own redis client in the connection event. Am I right?

Actually, you are not :)

The thing is, node.js works very differently from, for example, PHP. node.js does not spawn child processes on new connections, which is one of the main reasons it can easily handle a large number of simultaneous connections, including long-lived ones (Comet, WebSockets, etc.). node.js processes events one by one from an event queue, in a single process. If you want to use multiple processes to take advantage of multi-core servers or multiple machines, you have to do it manually (how to do that, however, is beyond the scope of this question).

Thus, the perfectly correct strategy is to use one single Redis (or MySQL) connection to serve a large number of clients. This avoids the overhead of opening and closing a database connection for every client request.
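A minimal runnable sketch of this "one client for everyone" structure. The in-memory stub below stands in for a real redis client so the sketch needs no server (with node_redis you would call `redis.createClient()` once at startup instead); all names here are illustrative, not from the original post.

```javascript
// Stub with a redis-like get/set callback API; a placeholder for a real client.
function createStubClient() {
  var store = {};
  return {
    set: function (key, value, cb) { store[key] = value; if (cb) cb(null, 'OK'); },
    get: function (key, cb) { cb(null, store.hasOwnProperty(key) ? store[key] : null); }
  };
}

var client = createStubClient(); // created ONCE, outside any connection handler

// Imitates socket.io's connection event: every connection's handlers close
// over the same shared client instead of creating their own.
function onConnection(socketId, done) {
  client.set('user:' + socketId + ':status', 'online', function () {
    client.get('user:' + socketId + ':status', function (err, status) {
      done(err, socketId + ' is ' + status);
    });
  });
}

onConnection('sock1', function (err, line) { console.log(line); });
onConnection('sock2', function (err, line) { console.log(line); });
```

Both "connections" issue their queries through the single shared client; because node.js runs the callbacks one at a time from its event queue, they never trample each other the way parallel child processes sharing one MySQL connection do.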


Thus, each "user" must have its own redis client in the connection event. Am I right?

You should not create a new Redis client for each connected user; Redis is not meant to be used that way. Instead, just create at most 2-3 clients and share them.

For more information check this question:

How to reuse redis connection in socket.io?


Regarding the first question: "the correct answer" may make you think that you are fine with one connection. In fact, whenever you do something that waits for I/O, a timer, etc., node queues a callback and moves on. So if you use only one single connection, you effectively limit the throughput of the thread you are running on (one CPU) to the speed of that one redis connection, probably a few hundred callbacks per second (non-redis callbacks keep running meanwhile). Although this is not bad performance, there is no reason to impose such a limit. It is recommended to create several (5-10) connections to avoid this problem altogether. This number grows for slower databases, e.g. MySQL, but it depends on the types of queries and the characteristics of the code.
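A rough sketch of what "several (5-10) connections" can look like in practice: a tiny round-robin pool. This is an assumption-laden illustration, `createStubClient()` is a placeholder for `redis.createClient()`, and the pool size and all names are made up for the example, not prescribed by the answer.

```javascript
// Placeholder for redis.createClient(); responds synchronously for the demo.
function createStubClient(id) {
  return { id: id, get: function (key, cb) { cb(null, 'value via client ' + id); } };
}

var POOL_SIZE = 5; // illustrative; the answer suggests roughly 5-10
var pool = [];
for (var i = 0; i < POOL_SIZE; i++) pool.push(createStubClient(i));

// Round-robin: each call hands out the next client, spreading queries
// across the handful of connections instead of funneling them into one.
var next = 0;
function getClient() {
  var c = pool[next];
  next = (next + 1) % pool.length;
  return c;
}

getClient().get('games:channel.32:owner', function (err, val) {
  console.log(val);
});
```

With real asynchronous clients, queries issued through different pool members can be in flight at the same time, which is exactly the limit a single connection imposes.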

Remember that for maximum performance you should run as many workers on your server as there are CPUs.

Regarding the second question: it is much better to define your functions with names and refer to them by name in the code, rather than defining them inline as you go. In some situations this also reduces memory consumption.
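To illustrate the "name your functions" advice with a runnable sketch: the nested example from the question, rewritten so each callback is a named function defined once. The `get()` stub and the names `onChannel`/`onOwner` are illustrative stand-ins for the redis and socket lookups; the stub answers synchronously only to keep the sketch self-contained.

```javascript
// Stub lookup standing in for client.get()/socket.get().
var data = { channel: 'channel.32', 'games:channel.32:owner': 'uid42' };
function get(key, cb) { cb(null, data[key]); }

// Named callbacks instead of inline anonymous functions: the nesting is
// gone and each step can be read (and reused) on its own.
function onChannel(err, channel) {
  get('games:' + channel + ':owner', onOwner);
}
function onOwner(err, owner) {
  console.log('owner is ' + owner);
}

get('channel', onChannel);
```

The control flow is identical to the nested version; only the shape of the source changes, which keeps deep chains readable.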


Source: https://habr.com/ru/post/906503/

