Work queue for nodejs?

I am starting to write a work queue for Node.js using the cluster API and mongoose.

I noticed that there are many libraries that already do this, but they use Redis and forking. Is there a good reason to fork and use the cluster API?

Edit: and now I have also found this: https://github.com/xk/node-threads-a-gogo - too many options!

I would prefer not to add Redis to the mix, since I already use Mongoose. Also, my requirements are quite loose: I would like persistence, but could do without it for a first version.

Second part of the question: what are the most stable / most widely used Node.js work queues out there today?

3 answers

I would like to keep this updated. My solution ended up being to create my own cluster, where some of my cluster workers are specialized workers (i.e. they only contain job-processing code).

I use agenda to schedule jobs.

Cron-style jobs are scheduled by the cluster master. The remaining jobs are created from the non-worker cluster processes as they are needed (checking email, etc.).
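Roughly, the setup described above could look like the sketch below. The file names, the WORKER_TYPE environment variable, and the "check email" job are made up for illustration; the agenda calls follow its documented define/every/start API:

```js
// master.js — a minimal sketch of a cluster with specialized job workers.
const cluster = require('cluster');

if (cluster.isMaster) {
  // Fork one specialized job worker plus the normal web workers.
  cluster.fork({ WORKER_TYPE: 'jobs' });
  cluster.fork({ WORKER_TYPE: 'web' });

  // The master owns the cron-style schedule; agenda persists jobs in MongoDB.
  const Agenda = require('agenda');
  const agenda = new Agenda({ db: { address: 'mongodb://localhost/myapp' } });
  (async () => {
    await agenda.start();
    await agenda.every('5 minutes', 'check email'); // cron-type job
  })();
} else if (process.env.WORKER_TYPE === 'jobs') {
  // job-worker.js defines and processes the jobs, e.g.:
  //   agenda.define('check email', async job => { /* ... */ });
  //   await agenda.start();
  require('./job-worker');
} else {
  require('./web-worker'); // ordinary application code
}
```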

Before that I used kue, but dropped it because the rest of my application uses MongoDB and I did not like having to run Redis just for job scheduling.


Have you tried https://github.com/rvagg/node-worker-farm ? It is very lightweight and does not require a separate server.
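For reference, a minimal worker-farm setup looks roughly like this. The ./calc module name and the doubling workload are made up, but the workerFarm() / workerFarm.end() calls are the module's documented API:

```js
// main.js — farm out CPU-heavy calls to a pool of child processes.
const workerFarm = require('worker-farm');
const workers = workerFarm(require.resolve('./calc')); // ./calc is a made-up example module

workers(40, (err, result) => {   // each call runs in one of the farmed child processes
  if (err) throw err;
  console.log('result:', result);
  workerFarm.end(workers);       // shut the farm down when done
});

// calc.js — the worker module exports a single function taking a callback.
module.exports = function (n, callback) {
  // ...do some heavy synchronous work here...
  callback(null, n * 2);
};
```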


I am personally partial to cluster-master:

https://github.com/isaacs/cluster-master

The reason I like cluster-master is that it adds very little beyond the logic for forking your process, gives you control over how many processes you run, and throws in a little restart / recovery handling to boot! I find overly bloated process-management libraries tend to be unstable, and sometimes even slow.
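As a sketch of how small the surface is (the worker.js file name, the pool size, and the env values are placeholders; the options follow cluster-master's README):

```js
// master.js — a minimal cluster-master setup.
var clusterMaster = require('cluster-master');

clusterMaster({
  exec: 'worker.js',                 // the script each worker process runs
  size: 4,                           // number of worker processes to keep alive
  env: { NODE_ENV: 'production' }    // environment passed to the workers
});
```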

This library will be a good fit if the following are true:

  • Your module is mostly asynchronous.
  • You do not have a huge number of different types of events being triggered.
  • The events that fire require small amounts of work, but you have lots of similar events (as with web servers).

The above list is also the reason why threads-a-gogo may be good for you, for the opposite reasons. If you have a few spots in your code with lots of work to do inside your event loop, something like threads-a-gogo, which spawns a thread specifically for that work, is great, because you are not determining ahead of time how many workers to spawn, but rather spawning them to do the work when it is needed. Note: this can also be bad if there is a chance that a lot of them get spawned; if you start too many processes, things can actually get bogged down. But I digress.
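I will not quote the threads-a-gogo API from memory, but the pattern it enables, spawning a worker only when a heavy piece of work actually shows up, looks roughly like this with Node's built-in worker_threads module standing in for it (the fib workload is just a made-up example of a busy synchronous call):

```js
// Spawn a thread only when a heavy request arrives, instead of sizing a pool
// up front. worker_threads is used here as a stand-in for the same pattern.
const { Worker } = require('worker_threads');

function runHeavyJob(n) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(`
      const { parentPort, workerData } = require('worker_threads');
      function fib(x) { return x < 2 ? x : fib(x - 1) + fib(x - 2); }
      parentPort.postMessage(fib(workerData));
    `, { eval: true, workerData: n });
    worker.once('message', resolve);
    worker.once('error', reject);
  });
}

// The event loop stays free while the heavy work runs in its own thread.
runHeavyJob(40).then(result => console.log('fib(40) =', result));
```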

To summarize: if your module is mostly asynchronous, then what you really want is a worker pool, to minimize the downtime when your process is not listening for events and to maximize the amount of CPU you can use. Unless you have a very busy synchronous call, a single node event loop will have trouble using even a single CPU core. In that case you are best off with cluster-master. What I recommend is doing a little benchmarking to see how much of a single core your program can use in the "worst case scenario". Say it is 33% of a single core: if you have a quad-core machine, you would then tell cluster-master to launch 12 workers.
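The arithmetic behind that last suggestion, as a tiny sketch (the 33% figure is the hypothetical worst-case benchmark result from above):

```js
// How many cluster-master workers to run, given a benchmarked per-worker CPU use.
const os = require('os');

const coresAvailable   = os.cpus().length; // e.g. 4 on a quad-core machine
const coreUsePerWorker = 0.33;             // hypothetical worst-case benchmark result

const workers = Math.floor(coresAvailable / coreUsePerWorker);
console.log(workers); // 4 / 0.33 ≈ 12 workers
```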

Hope this helps!

