Difference between Gunicorn workers and Heroku worker dynos

I hope the community can clarify something for me and that others can benefit.

My understanding is that Gunicorn workers are essentially virtual copies of a Heroku web dyno's application process. In other words, Gunicorn workers should not be confused with Heroku worker dynos (e.g. Django background tasks).

This is because Gunicorn workers focus on handling web requests (essentially maximizing the throughput of a single Heroku web dyno), while Heroku worker dynos specialize in long-running background tasks such as remote API calls.
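For concreteness, the split described above usually shows up as two entries in a Procfile; this is a hedged sketch, where the project name `myproject` and the choice of Celery for the worker are illustrative assumptions, not details from the question:

```
# web dyno: Gunicorn serves HTTP; its workers are in-process copies of the app
web: gunicorn myproject.wsgi --workers 3

# worker dyno: a separate process type for long-running background jobs
worker: celery -A myproject worker --loglevel=info
```

Scaling `web` and `worker` dynos independently (e.g. `heroku ps:scale web=2 worker=1`) is what the question is trying to reason about.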

I have a simple Django application that makes fairly heavy use of a remote API, and I want to optimize the balance of resources. Most requests also query a PostgreSQL database.

I know this is a gross simplification, but am I thinking about this correctly?

Some relevant information:

https://devcenter.heroku.com/articles/process-model

https://devcenter.heroku.com/articles/background-jobs-queueing

https://devcenter.heroku.com/articles/django#running-a-worker

http://gunicorn.org/configure.html#workers

http://v3.mike.tig.as/blog/2012/02/13/deploying-django-on-heroku/

https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/gunicorn/

Quasi-related questions on SO that may help others exploring this topic:

Troubleshooting site slowness on a Nginx + Gunicorn + Django stack

Performance degradation for Django with Gunicorn deployed on Heroku

Gunicorn setup for Django on Heroku


1 answer

To provide an answer here so people don't have to dig through the comments: a dyno is like an entire computer. Using the Procfile, you give each of your dynos one command to run, and it keeps that command running, restarting it when it crashes. As you can imagine, it's rather wasteful to spend an entire computer running a single-threaded web server, and that's where Gunicorn comes in.

Gunicorn's master process does little more than act as a proxy server: it spawns a given number of copies of your application (the workers) and distributes HTTP requests among them. This takes advantage of the fact that each dyno actually has multiple cores. As mentioned, the number of workers you should choose depends on how much memory your app needs to run.
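To make the trade-off concrete, here is a minimal sketch that combines the common "2 × cores + 1" rule of thumb from the Gunicorn docs with a memory cap; the function name and the sample RAM figures are illustrative assumptions, not anything from the answer:

```python
import multiprocessing

def suggested_workers(dyno_ram_mb, app_rss_mb, cores=None):
    """Suggest a Gunicorn worker count: the 2*cores+1 rule of thumb,
    capped by how many copies of the app fit in the dyno's RAM.
    (Illustrative heuristic only -- measure your own app's footprint.)"""
    cores = cores or multiprocessing.cpu_count()
    by_cpu = 2 * cores + 1
    by_memory = max(1, dyno_ram_mb // app_rss_mb)
    return min(by_cpu, by_memory)

# e.g. a 512 MB dyno running a Django app with a ~150 MB resident size
print(suggested_workers(512, 150, cores=4))  # -> 3
```

The result would then feed into the Procfile command, e.g. `gunicorn myproject.wsgi --workers 3`.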

Contrary to what Bob Spring said in the last comment, there are other ways to exploit this opportunity for parallelism: running separate servers on the same dyno. The easiest way is to create a separate, second Procfile and run Honcho (an all-Python equivalent of Foreman) from your main Procfile, following these instructions. Essentially, in this case, your single dyno command is a program that manages multiple commands. It's a bit like being granted one wish by a genie and using it to wish for four more wishes.
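As a sketch of that genie trick, the setup involves two files; the file name `Procfile.dyno` and the specific commands are illustrative assumptions following the Foreman/Honcho convention:

```
# Procfile -- the one Heroku reads; its single command launches Honcho
web: honcho start -f Procfile.dyno

# Procfile.dyno -- the processes Honcho multiplexes onto that one dyno
web: gunicorn myproject.wsgi --workers 3
worker: celery -A myproject worker
```

Heroku sees one `web` dyno running one command; Honcho then supervises both the web server and the background worker inside it.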

The advantage of this is that you can fully utilize your dynos' capacity. The disadvantage is that you lose the ability to scale parts of your application independently when they share a dyno. When you scale the dyno, it scales everything you have multiplexed onto it, which may not be desirable. You will probably have to rely on diagnostics to decide when a service should be moved to its own dedicated dyno.


Source: https://habr.com/ru/post/1438732/
