Elastic Beanstalk Docker: single container or multiple containers?

We are working on a new REST API that will be deployed to AWS Elastic Beanstalk using Docker. It uses Python Celery for scheduled tasks, which means the workers need to run as separate processes. Our current Docker configuration has three containers:

Multi-docker:

```
09c3182122f7   sso   "gunicorn --reload --"   18 hours ago   Up 26 seconds   sso-api
f627c5391ee8   sso   "celery -A sso worker"   18 hours ago   Up 27 seconds   sso-worker
f627c5391ee8   sso   "celery beat -A sso -"   18 hours ago   Up 27 seconds   sso-beat
```
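For reference, a minimal sketch of what this three-container setup might look like as a version-2 Dockerrun.aws.json for Elastic Beanstalk's multi-container platform. The memory values and port mapping here are illustrative assumptions, not our actual file:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "sso-api",
      "image": "sso",
      "essential": true,
      "memory": 256,
      "portMappings": [{ "hostPort": 80, "containerPort": 80 }],
      "command": ["gunicorn", "--reload", "--bind", "0.0.0.0:80",
                  "--pythonpath", "/var/sso", "sso.wsgi:application"]
    },
    {
      "name": "sso-worker",
      "image": "sso",
      "essential": true,
      "memory": 256,
      "command": ["celery", "-A", "sso", "worker", "-l", "info"]
    },
    {
      "name": "sso-beat",
      "image": "sso",
      "essential": true,
      "memory": 128,
      "command": ["celery", "beat", "-A", "sso",
                  "-S", "djcelery.schedulers.DatabaseScheduler"]
    }
  ]
}
```

Note that each container definition carries its own fixed memory reservation, which is the allocation question raised below.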

Conventional wisdom suggests we should use the multi-container configuration on Elastic Beanstalk. But since all three containers run the same code, using a single container with Supervisord to control the processes could be simpler and more efficient from an ops point of view.

Single container with add-on:

```
[program:api]
command=gunicorn --reload --bind 0.0.0.0:80 --pythonpath '/var/sso' sso.wsgi:application
directory=/var/sso

[program:worker]
command=celery -A sso worker -l info
directory=/var/sso
numprocs=2
; numprocs > 1 requires a unique name per process instance
process_name=%(program_name)s_%(process_num)02d

[program:beat]
command=celery beat -A sso -S djcelery.schedulers.DatabaseScheduler
directory=/var/sso
```
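A sketch of how the single-container variant could be packaged, assuming the Supervisord config above is saved as supervisord.conf next to the Dockerfile; the base image and requirements file name are illustrative assumptions:

```dockerfile
FROM python:2.7

# Application code and dependencies, including supervisor itself
COPY . /var/sso
WORKDIR /var/sso
RUN pip install -r requirements.txt supervisor

# Config defining the api, worker, and beat programs
COPY supervisord.conf /etc/supervisord.conf

EXPOSE 80

# Run supervisord in the foreground (-n) as PID 1 so the container stays up
CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"]
```

With this layout, Elastic Beanstalk's single-container Docker platform only needs to run one image, and Supervisord restarts any of the three processes if it dies.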

In the multi-container configuration, AWS allocates a fixed amount of memory to each container. I suspect it is more efficient to let the operating system inside a single container handle memory allocation internally, rather than explicitly pinning an amount to each container. But I don't know enough about how multi-container Docker works under the hood on Elastic Beanstalk to reasonably recommend one way or the other.

What is the best configuration for this situation?


Source: https://habr.com/ru/post/1261148/
