I have a Django application that kicks off an asynchronous task (using Celery) for a set of requests. The task takes a queryset and performs a whole bunch of operations that can take a long time depending on the objects in it. Objects can be shared between requests, so a user may submit a task for a request containing objects that are already being processed; the new task should only work on the objects that are not already running, but it should wait for all of its objects to finish before returning.
My explanation is a bit confusing, so imagine the following code:
from time import sleep

import redis
from celery.task import Task

from someapp.models import InterestingModel
from someapp.longtime import i_take_a_while


class LongRunningTask(Task):
    def run(self, process_id, *args, **kwargs):
        _queryset = InterestingModel.objects.filter(process__id=process_id)
        r = redis.Redis()
        p = r.pipeline()
        run_check_sets = ('run_check', 'objects_already_running')
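The block above cuts off before the interesting part, so here is a rough sketch of the pattern I am aiming for, using plain Redis sets. The set name 'objects_already_running' comes from my code above; claim_unprocessed, process_and_wait and the polling interval are illustrative names and choices, not my actual implementation:

from time import sleep

import redis

from someapp.longtime import i_take_a_while


def claim_unprocessed(r, object_ids, running_set='objects_already_running'):
    # SADD returns 1 when the member was newly added and 0 when it was
    # already there, so each id is claimed by exactly one caller.
    return [obj_id for obj_id in object_ids if r.sadd(running_set, obj_id)]


def process_and_wait(object_ids, poll_interval=1):
    r = redis.Redis()
    claimed = claim_unprocessed(r, object_ids)

    # Do the slow work only for the objects this call managed to claim,
    # and always release the claim afterwards.
    for obj_id in claimed:
        try:
            i_take_a_while(obj_id)
        finally:
            r.srem('objects_already_running', obj_id)

    # Objects claimed by some other task: poll until they are finished too,
    # so this call only returns once every object in the request is done.
    remaining = set(object_ids) - set(claimed)
    while any(r.sismember('objects_already_running', obj_id)
              for obj_id in remaining):
        sleep(poll_interval)

The idea is that SADD only adds a member that is not already present and reports whether it did, so each object should be processed by exactly one task while every task still waits for all of its objects before returning.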
I am extremely new to Redis, so my main question is: is there a better way to manipulate Redis to achieve the same result? More broadly, I wonder whether Redis is even the right approach to this problem, and whether there is a better way for Django models to interact with Redis. Finally, I wonder whether this code is really thread safe. Can anyone punch holes in my logic?
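To make the thread-safety concern concrete: as I understand it, a separate "check the set, then add to it" step can race with another worker doing the same thing, so the claim has to be atomic, either via the SADD return value as in the sketch above or by wrapping the check in a WATCH/MULTI transaction on the pipeline. A sketch of the transactional variant (claim_with_transaction is just an illustrative name):

import redis


def claim_with_transaction(r, obj_id, running_set='objects_already_running'):
    # Optimistic locking: WATCH the set, check membership, and only add the
    # id if nothing changed the set before EXECUTE; otherwise retry.
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(running_set)
                if pipe.sismember(running_set, obj_id):
                    pipe.unwatch()
                    return False  # someone else is already processing it
                pipe.multi()
                pipe.sadd(running_set, obj_id)
                pipe.execute()
                return True  # we claimed the object
            except redis.WatchError:
                continue  # the set changed under us; try again

With the simpler SADD-based version the WATCH machinery is not needed at all, which is part of why I suspect my pipeline-based check may be more complicated than necessary.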
Any comments are welcome.