I am using the Perl beanstalkd client, and I need a simple way to avoid inserting the same job twice. Essentially, I need something that waits until there are K elements and then processes them as a group. For this, I have a producer:
insert item(s) into DB
insert a queue item into beanstalkd
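The two producer steps can be sketched as a runnable simulation (the real code uses the Perl beanstalkd client; here `db` and `queue` are plain in-memory stand-ins for the database and the beanstalkd tube):

```python
from collections import deque

db = []          # stand-in for the database table
queue = deque()  # stand-in for the beanstalkd tube

def produce(item):
    db.append(item)        # 1. insert item into DB
    queue.append("check")  # 2. insert a queue job into beanstalkd

produce("a")
produce("b")
# every insert adds one row and one queue job
```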
And the consumer:
while ( 1 ) {
    job = beanstalkd.reserve
    if ( DB items >= K )
        func_to_process_all_items
    beanstalkd.delete job
}
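A minimal runnable sketch of that consumer loop, again as a Python simulation (K, `func_to_process_all_items`, and the in-memory `queue`/`db` are stand-ins for the real beanstalkd and database; the loop ends when the queue is drained instead of running forever):

```python
from collections import deque

K = 3
db = ["x1", "x2", "x3"]             # items already inserted by the producer
queue = deque(["check"] * len(db))  # one queue job per insert

processed_batches = []

def func_to_process_all_items():
    # process and clear everything currently in the DB
    processed_batches.append(list(db))
    db.clear()

while queue:
    job = queue.popleft()  # beanstalkd: reserve
    if len(db) >= K:
        func_to_process_all_items()
    # beanstalkd: delete the job either way
```

The first job finds K items and processes them all; the remaining jobs reserve, find nothing, and are deleted.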
This gives a linear number of requests and processing passes, but consider the case of:
insert 1 item
... repeat many times ...
insert 1 item
Assuming that all of these inserts occurred before the first job was reserved, this will add N queue elements and the consumer will do something like this:
check DB, process N items
check DB, no items
... many times ...
check DB, no items
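The wasted work in the scenario above can be demonstrated with a small simulation (Python; in-memory stand-ins for beanstalkd and the DB, with N and K chosen arbitrarily):

```python
from collections import deque

N, K = 5, 3
db = list(range(N))           # N items inserted before any job is reserved
queue = deque(["check"] * N)  # N queue jobs, one per insert

useful, wasted = 0, 0
while queue:
    queue.popleft()  # reserve + delete one job
    if len(db) >= K:
        db.clear()   # "check DB, process N items"
        useful += 1
    else:
        wasted += 1  # "check DB, no items"

# useful == 1, wasted == N - 1
```

Only the first job does anything; the other N - 1 jobs are pure overhead, which is what the question is trying to avoid.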
Is there a smarter way to do this, so that the later, redundant job requests are not inserted or handled unnecessarily?
Timmy