SQS job / queue task retry counting strategy?

I run a task queue on Amazon SQS (though I think the question applies to any task queue), where workers are expected to take different actions depending on how many times a job has already been retried (move it to another queue, increase the visibility timeout, send an alert, etc.).

What would be the best way to keep track of a failing job's retry count? I would prefer not to have a centralized DB of job → retry-count records. Should I instead look at how long the message has been sitting in the queue when I process it? IMO that would be ugly and unclean at best, churning through jobs until I find the old ones.
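To make it concrete, here is a rough sketch of the kind of worker logic I mean (Python with boto3). The queue URLs, the threshold, and process_job are made up, and the retry count here is taken from SQS's ApproximateReceiveCount message attribute purely so the sketch runs; whether that is a good enough source of the count is part of what I'm asking.

    import boto3

    sqs = boto3.client("sqs")

    # Hypothetical queue URLs and threshold, for illustration only.
    WORK_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/work"
    FAILED_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/work-failed"
    MAX_RETRIES = 5

    def process_job(body):
        """Placeholder for the real work; raises on failure."""
        pass

    def handle(msg):
        # ApproximateReceiveCount counts deliveries, so the first attempt is 1.
        retries = int(msg["Attributes"]["ApproximateReceiveCount"]) - 1
        try:
            process_job(msg["Body"])
            sqs.delete_message(QueueUrl=WORK_QUEUE,
                               ReceiptHandle=msg["ReceiptHandle"])
        except Exception:
            if retries >= MAX_RETRIES:
                # Too many attempts: move the job to another queue, drop it here.
                sqs.send_message(QueueUrl=FAILED_QUEUE, MessageBody=msg["Body"])
                sqs.delete_message(QueueUrl=WORK_QUEUE,
                                   ReceiptHandle=msg["ReceiptHandle"])
            else:
                # Back off: the message becomes visible again later
                # (12 hours = 43200 s is the SQS maximum).
                sqs.change_message_visibility(
                    QueueUrl=WORK_QUEUE,
                    ReceiptHandle=msg["ReceiptHandle"],
                    VisibilityTimeout=min(60 * (retries + 1), 43200),
                )

    while True:
        resp = sqs.receive_message(
            QueueUrl=WORK_QUEUE,
            AttributeNames=["ApproximateReceiveCount"],
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,
        )
        for m in resp.get("Messages", []):
            handle(m)

Note that the attribute is only approximate, which is why I'm wondering whether there is a better counting strategy.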

Thanks! Andras

+3
4 answers

Amazon just released the Simple Workflow Service (SWF), which you can think of as a more complex and flexible version of the GAE task queues.

It lets you monitor your tasks (using heartbeats), set up retry strategies, and build complex workflows. It looks like a pretty promising abstraction for task dependencies, scheduling, and fault tolerance, especially for asynchronous tasks.

Check out http://docs.amazonwebservices.com/amazonswf/latest/developerguide/swf-dg-intro-to-swf.html for an overview.
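To give a rough feel for the heartbeat part, an SWF activity worker in Python with boto3 might look something like the sketch below; the domain name, task list, and run_job_in_steps are made up for illustration.

    import boto3

    swf = boto3.client("swf")

    # Hypothetical domain and task list, for illustration only.
    DOMAIN = "my-domain"
    TASK_LIST = {"name": "my-task-list"}

    def run_job_in_steps(job_input):
        """Placeholder: yield after each chunk of real work."""
        yield "step-1"
        yield "step-2"

    def work_one_task():
        task = swf.poll_for_activity_task(
            domain=DOMAIN, taskList=TASK_LIST, identity="worker-1"
        )
        if not task.get("taskToken"):
            return  # the long poll timed out with nothing to do
        try:
            for _ in run_job_in_steps(task.get("input", "")):
                # Tell SWF the task is still alive; if heartbeats stop, SWF
                # times the activity out and the workflow decides whether
                # and how to retry it.
                swf.record_activity_task_heartbeat(taskToken=task["taskToken"])
            swf.respond_activity_task_completed(
                taskToken=task["taskToken"], result="done"
            )
        except Exception as exc:
            swf.respond_activity_task_failed(
                taskToken=task["taskToken"], reason=str(exc)[:256]
            )

The retry strategy itself lives in your workflow (decider) logic rather than in the worker, which is where SWF's flexibility comes from.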

+1

I have had good success combining SQS with SimpleDB. It is "centralized", but no more so than SQS itself.

The idea is to keep an item in SimpleDB for every job, alongside its SQS message. When a worker picks a job up from SQS, it looks the job up in SimpleDB, checks how many times it has already been attempted, and updates the item before retrying or escalating. The job's bookkeeping (retry count, timestamps, errors, and so on) lives in SimpleDB, while SQS carries only the message itself.

Since the state lives outside the message, any worker can pick up any job and still know its history, decide whether to alert, and so on.

SimpleDB is schemaless and queryable, which also makes it easy to find stuck or repeatedly failing jobs later.
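A rough sketch of that pairing in Python with boto3 (the domain name, attribute name, and job-id convention are all made up):

    import boto3

    sdb = boto3.client("sdb")
    DOMAIN = "job-state"  # hypothetical SimpleDB domain, created beforehand

    def bump_retry_count(job_id):
        """Read the job's SimpleDB item, bump its retry count, write it back."""
        resp = sdb.get_attributes(
            DomainName=DOMAIN,
            ItemName=job_id,
            AttributeNames=["retry_count"],
            ConsistentRead=True,
        )
        attrs = {a["Name"]: a["Value"] for a in resp.get("Attributes", [])}
        count = int(attrs.get("retry_count", "0")) + 1
        sdb.put_attributes(
            DomainName=DOMAIN,
            ItemName=job_id,
            Attributes=[
                {"Name": "retry_count", "Value": str(count), "Replace": True},
            ],
        )
        return count

The worker calls something like bump_retry_count() with a job id carried in the SQS message body, then decides from the returned count whether to retry, move the job to another queue, or alert. If two workers can touch the same job at once, you would use SimpleDB's conditional put (the Expected parameter of put_attributes) instead of this plain read-then-write, but this is the shape of it.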

+5

SQS stands for "Simple Queue Service", which is arguably the wrong name for this service: the first and most basic property of a queue is FIFO ordering (first in, first out), and SQS does not guarantee it. I just want to make that clear.

Azure Queue Services lacks it as well. For a proper cloud queue service, use Azure Service Bus, as it implements the TRUE queue concept.

0

Source: https://habr.com/ru/post/1790904/
