What is the most elegant way to run celeryd in a Django project?

I'm getting started with Celery in my Django project and can't help but wonder: what is the most elegant way to run celeryd for a project?

Let me explain the reasoning behind this question.

The currently recommended way to start celery is apparently python manage.py celeryd in simpler setups and something along the lines of /etc/init.d/celeryd start in more complex ones. However, the first case feels fragile, since the process is not started automatically, and the second requires a fair amount of per-project configuration ( virtualenv , settings , etc.). The latter especially reinforces my general feeling that a celery worker is deeply tied to its code base and to the main project process: a worker with no project that actually defines tasks for it is practically useless (with the one exception of celerybeat ). Another problem with the init.d approach is that it needs additional logic to handle several projects per server (with separate virtual environments, settings, paths, etc.).
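For concreteness, celery's generic init scripts are typically driven by a per-project file such as /etc/default/celeryd, which is where the virtualenv, settings and path configuration mentioned above ends up. A minimal sketch, assuming a single project under /srv/www/projects/myproject (every path, user and settings value is a placeholder, and the exact variable names vary between celery versions):

 # hypothetical /etc/default/celeryd for one project; all paths,
 # the user and the settings module are assumptions for illustration
 CELERYD_NODES="w1"
 CELERYD_CHDIR="/srv/www/projects/myproject"
 # run celeryd_multi via the project's virtualenv python
 CELERYD_MULTI="/srv/www/.virtualenvs/myprojectenv/bin/python $CELERYD_CHDIR/manage.py celeryd_multi"
 CELERYD_OPTS="--concurrency=2 --settings=settings"
 CELERYD_LOG_FILE="/var/log/celery/%n.log"   # %n expands to the node name
 CELERYD_PID_FILE="/var/run/celery/%n.pid"
 CELERYD_USER="wsgiuser"
 CELERYD_GROUP="wsgiuser"

Duplicating this file (plus an init script entry) for every project is exactly the bookkeeping this question is complaining about.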

So it occurred to me that it could be quite elegant to start celeryd together with my main process, for example by spawning it from mod_wsgi under Apache (similarly to other setups), so that it is killed when the main process goes down ( /etc/init.d/apache2 stop ). However, I'm not entirely sure whether there are technical pitfalls around performance and/or security in this reasoning; I tried to find out and found nothing.

  • Is my reasoning wrong, given celery's architecture?
  • Can I somehow spawn celeryd from within mod_wsgi, and is that reasonable?
  • How do you run celery in your projects?
1 answer

I start celery using manage.py celeryd and manage it with supervisor. On every deployment that changes tasks, I just restart celery after deploying and restarting apache.

Edit:

We use Chef to manage our supervisor and other process configuration, so this may be a bit of overkill. We have one master supervisord.conf that includes a separate configuration file for each site we run. One of them might look like this:

 [program:celery_newproject]
 command = /srv/www/.virtualenvs/newprojectenv/bin/python /srv/www/projects/newproject/manage.py celeryd --concurrency=2 --settings=settings.alpha --pidfile=/var/run/celery/celery_newproject.pid
 user = wsgiuser
 environment = PYTHON_EGG_CACHE="/tmp/.python-eggs"
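For reference, the master file can pull the per-site files in with supervisor's [include] section; a minimal sketch, assuming the per-site configs are dropped into /etc/supervisor/conf.d/ (that path is an assumption):

 ; hypothetical excerpt from the master supervisord.conf;
 ; the conf.d path is a placeholder for illustration
 [include]
 files = /etc/supervisor/conf.d/*.conf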

When we deploy, we just run

 sudo supervisorctl restart celery_newproject 

This restarts the supervised celery process and picks up all the new tasks you have defined.

There are other, less elegant ways to do this. On my personal sites, I just run a cron job that checks for the presence of a .pid file and restarts celery if it is missing. Not very elegant, but it works on sites where reliability is less critical.
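A minimal sketch of such a cron watchdog, assuming the worker writes its pidfile to /var/run/celery/celeryd.pid and the project lives in /srv/www/projects/mysite (both paths are assumptions, and a stale pidfile left behind by a crash would defeat the check):

 # hypothetical crontab entry: every 5 minutes, restart celeryd if its pidfile is gone
 */5 * * * * test -f /var/run/celery/celeryd.pid || (cd /srv/www/projects/mysite && nohup python manage.py celeryd --pidfile=/var/run/celery/celeryd.pid >>/var/log/celeryd.log 2>&1 &)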
