How can I implement a distributed queue with a concurrency of 1 per queue on any MQ platform?

I am currently trying to find a solution for implementing a specific type of queue that requires the following features:

  • Each queue must preserve the order in which tasks were added.
  • Each queue has a concurrency of 1, meaning only one task is executed at a time per queue, not per worker.
  • There will be more than a thousand queues.
  • It must be distributed and scalable (for example, by adding workers).

Basically, each queue is a single FIFO process queue, and that is exactly what I get when I test message queuing software such as ActiveMQ or RabbitMQ with one worker. But as soon as I scale to 2 workers, it no longer works, because I want it to scale while still behaving like a single-process queue. Below is a description of how it should work in a distributed environment with several workers.

An example of what the topology looks like (note that this is a many-to-many relationship between queues and workers):

[Image: FIFO Distributed Queue]

An example of how it will work:

+------+-----------------+-----------------+-----------------+
| Step | Worker 1        | Worker 2        | Worker 3        |
+------+-----------------+-----------------+-----------------+
| 1    | Fetch Q/1/Job/1 | Fetch Q/2/Job/1 | Waiting         |
+------+-----------------+-----------------+-----------------+
| 2    | Running         | Running         | Waiting         |
+------+-----------------+-----------------+-----------------+
| 3    | Running         | Done Q/2/Job/1  | Fetch Q/2/Job/2 |
+------+-----------------+-----------------+-----------------+
| 4    | Done Q/1/Job/1  | Fetch Q/1/Job/2 | Running         |
+------+-----------------+-----------------+-----------------+
| 5    | Waiting         | Running         | Running         |
+------+-----------------+-----------------+-----------------+

This is probably not the best illustration, but it shows that even though Queue 1 and Queue 2 still have more tasks, Worker 3 does not pick up the next task from a queue until that queue's previous task has finished.

This is what I am trying hard to find a good solution for.

I have tried many solutions, such as RabbitMQ, ActiveMQ, Apollo ... They all allow me to create thousands of queues, but in every one I tried, Worker 3 starts the next job in the queue right away; concurrency is per worker rather than per queue.

Is there any solution that can make this possible on any MQ platform like ActiveMQ, RabbitMQ, ZeroMQ, etc.?

Thanks :)

Tags: queue, amazon-sqs, redis, message-queue
Feb 01 '17 at 12:08
1 answer

You can achieve this by using Redis lists, with an additional dispatch queue on which all workers BRPOP for their jobs. Each job in the dispatch queue is tagged with the identifier of its original queue, and when a worker has completed a task, it goes to that original queue and performs an RPOPLPUSH into the dispatch queue to make the next job available to any other worker. This way, the dispatch queue will contain at most num_queues elements.
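
Below is a minimal sketch of that worker loop in Python with redis-py, just to make the flow concrete. The key names (dispatch, queue:<id>, empty:<id>), the JSON job format, and the handle() function are assumptions made for this example, not part of any particular setup.

    import json
    import redis

    r = redis.Redis(decode_responses=True)

    DISPATCH = "dispatch"  # the shared dispatch list every worker blocks on

    def handle(payload):
        # Hypothetical job handler; replace with the real work.
        print("processing", payload)

    def work_forever():
        while True:
            # Block until any queue has a job ready to run.
            _key, raw = r.brpop(DISPATCH)
            job = json.loads(raw)
            queue_id = job["queue"]

            handle(job["payload"])

            # Job finished: promote the next job of the same queue, if any,
            # from its per-queue list into the shared dispatch list.
            if r.rpoplpush("queue:" + queue_id, DISPATCH) is None:
                # Nothing left in this queue; tell the publisher to LPUSH
                # its next new job straight into the dispatch list.
                r.set("empty:" + queue_id, 1)

Because only one job per original queue ever sits in the dispatch list at a time, any number of workers can BRPOP from it without breaking the per-queue concurrency of 1.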

One thing you have to handle is the initial seeding of the dispatch queue when the original queue is empty. This can simply be a check by the publisher against an "empty" flag kept for each queue, which a worker sets when there is nothing left to dispatch from the original queue. If the flag is set, the publisher can simply LPUSH the first job directly into the dispatch queue.
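
A sketch of the matching publisher side, under the same assumed key names as the worker sketch above. The flag check and the push are not atomic as written, so a production version would wrap them in a Lua script or a MULTI/EXEC transaction.

    import json
    import redis

    r = redis.Redis(decode_responses=True)

    def publish(queue_id, payload):
        job = json.dumps({"queue": queue_id, "payload": payload})
        if r.get("empty:" + queue_id):
            # No job from this queue is currently dispatched or running:
            # clear the flag and hand the job to the workers immediately.
            r.delete("empty:" + queue_id)
            r.lpush("dispatch", job)
        else:
            # A job from this queue is already in flight: queue up behind it
            # and let the finishing worker promote it via RPOPLPUSH.
            r.lpush("queue:" + queue_id, job)

Note that a brand-new queue needs its empty flag set once when it is created, otherwise its first job is never promoted into the dispatch list; that initialization is left out of the sketch.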

Feb 01 '17 at 13:19


