I get duplicate messages in my clustered Node.js / Socket.io / Redis pub/sub application

I am using Node.js, Socket.io with RedisStore, the Cluster module from the Socket.io team, and Redis.

I have a pub/sub application that works fine on a single Node.js node. But under heavy load it maxes out only one core of the server, since Node.js is not built for multi-core machines.

As you can see below, I now use the Cluster module from LearnBoost, the same people who make Socket.io.

But when I start 4 worker processes, each browser client that connects and subscribes receives 4 copies of every message published to Redis. With three workers there are three copies.
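The multiplication is exactly what you would expect from this setup: every worker runs its own `rc.psubscribe('showcontrol_*')`, so Redis delivers each published message to every worker, and every worker re-emits it into the room. A minimal simulation of that fan-out (plain JavaScript, no Redis involved):

```javascript
// Each of N workers holds its own Redis subscription, so a single
// publish fires N 'pmessage' handlers, and each handler re-emits the
// message to the room. A client in the room therefore sees N copies.
function copiesSeenByClient(workerCount, messages) {
  var received = [];
  for (var w = 0; w < workerCount; w++) {
    // this worker's pmessage handler fires once per published message
    messages.forEach(function (msg) {
      received.push(msg); // stands in for io.sockets.in(ch).emit(...)
    });
  }
  return received;
}

console.log(copiesSeenByClient(4, ['go']).length); // 4
console.log(copiesSeenByClient(3, ['go']).length); // 3
```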

I assume that I somehow need to move the Redis pub/sub functionality into the cluster.js file.

Cluster.js

var cluster = require('./node_modules/cluster');

cluster('./app')
  .set('workers', 4)
  .use(cluster.logger('logs'))
  .use(cluster.stats())
  .use(cluster.pidfiles('pids'))
  .use(cluster.cli())
  .use(cluster.repl(8888))
  .listen(8000);

App.js

var redis = require('redis'),
    sys = require('sys');

var rc = redis.createClient();

var path = require('path'),
    connect = require('connect'),
    app = connect.createServer(connect.static(path.join(__dirname, '../')));

// require the new redis store
var sio = require('socket.io'),
    RedisStore = sio.RedisStore,
    io = sio.listen(app);

io.set('store', new RedisStore);

io.sockets.on('connection', function(socket) {
  sys.log('ShowControl -- Socket connected: ' + socket.id);

  socket.on('channel', function(ch) {
    socket.join(ch);
    sys.log('ShowControl -- ' + socket.id + ' joined channel: ' + ch);
  });

  socket.on('disconnect', function() {
    sys.log('ShowControl -- Socket disconnected: ' + socket.id);
  });
});

rc.psubscribe('showcontrol_*');

rc.on('pmessage', function(pat, ch, msg) {
  io.sockets.in(ch).emit('show_event', msg);
  sys.log('ShowControl -- Publish sent to channel: ' + ch);
});

// cluster compatibility
if (!module.parent) {
  app.listen(process.argv[2] || 8081);
  console.log('Listening on ', app.address());
} else {
  module.exports = app;
}

client.html

<script src="http://localhost:8000/socket.io/socket.io.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.0/jquery.min.js"></script>
<script>
  var socket = io.connect('localhost:8000');
  socket.emit('channel', 'showcontrol_106');
  socket.on('show_event', function(msg) {
    console.log(msg);
    $("body").append('<br/>' + msg);
  });
</script>
2 answers

It turns out this was not a Node.js/Socket.io problem; I was just doing it wrong.

Not only was I publishing into the Redis server from outside the Node/Socket stack, I was also still directly subscribed to the Redis channel. On both ends of the pub/sub I was circumventing the "Socket.io cluster with a Redis store on the back end".

So I created a small application (with Node.js/Socket.io/Express) that receives messages from my Rails application and "announces" them into a Socket.io room using the socket.io-announce module. Now, thanks to Socket.io's routing magic, each worker node receives the messages and sends them only to the browsers connected to it. In other words, no more duplicate messages, since both the pub and the sub happen inside the Node.js/Socket.io stack.
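A rough sketch of what such an announcer app could look like. This is an assumption-laden illustration, not the author's actual code: the `/announce` route, the payload shape, and the `announce.in(room).emit(...)` call (assuming socket.io-announce mirrors Socket.io's emit interface) are all invented for the example, so the third-party wiring is left commented out:

```javascript
// Standalone announcer sketch (hypothetical names throughout).
// Idea: the Rails app POSTs JSON here; socket.io-announce then
// publishes through the shared RedisStore, so each worker delivers
// exactly one copy to its own connected browsers.

// Third-party requires, commented out to keep the sketch self-contained:
// var express = require('express');
// var announce = require('socket.io-announce').createClient();

// Pure helper: map a show id to the room name the workers join.
function roomFor(showId) {
  return 'showcontrol_' + showId; // matches the psubscribe pattern above
}

// Express wiring (sketch only):
// var app = express();
// app.use(express.bodyParser());
// app.post('/announce', function (req, res) {
//   announce.in(roomFor(req.body.showId)).emit('show_event', req.body.msg);
//   res.send(200);
// });
// app.listen(8090);

console.log(roomFor(106)); // 'showcontrol_106'
```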

Once I clean up my code I will put an example up on GitHub.


I have been fighting with cluster and Socket.io as well. Every time I use the cluster function (I use Node.js's built-in cluster), I get a lot of performance issues and Socket.io problems.

When I tried to investigate, I looked through the bug reports and similar threads on the Socket.io git repository, and anyone who uses clusters or external load balancers in front of their servers seems to have problems with Socket.io.

The problem seems to be the "client not handshaken client should reconnect" warning, which you will see if you increase the log verbosity. It shows up many times when Socket.io runs in a cluster, so I think it comes down to this: the client connects to a random instance in the Socket.io cluster every time it makes a new connection (it makes several http/socket/flash connections during authorization, and more later when polling for new data), so a handshake completed on one instance is unknown to the others.

For now I have gone back to running only one Socket.io process at a time. This may be a bug, but it may also be a shortcoming in how Socket.io is built.

Addendum: My way of solving this in the future will be to assign a unique port to each Socket.io instance inside the cluster, and then cache the port selection on the client side.
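That port-per-instance idea can be sketched with the built-in cluster module. The base port, worker count, and helper name below are assumptions for illustration, not something from the original post; only the pure port-assignment helper is real code, the forking part is left as a comment:

```javascript
// Hypothetical helper: derive a unique socket.io port for each cluster
// worker, so a client can be pinned to (and reconnect to) one instance.
function workerPort(basePort, workerId) {
  return basePort + workerId; // worker 1 -> 8001, worker 2 -> 8002, ...
}

// Sketch of wiring it up with Node's built-in cluster module:
// var cluster = require('cluster');
// if (cluster.isMaster) {
//   for (var i = 0; i < 4; i++) cluster.fork();
// } else {
//   require('./app').listen(workerPort(8000, cluster.worker.id));
// }

console.log(workerPort(8000, 1)); // 8001
```

The client would then remember which port it was handed and reconnect to the same one, sidestepping the random-instance handshake problem described above.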


Source: https://habr.com/ru/post/902969/

