Celery workers cannot connect to Redis in Docker

I have a dockerized setup with a Django application in which I use Celery tasks. Celery uses Redis as its broker.

Versioning:

  • Docker version 17.09.0-ce, build afdb6d4
  • docker-compose version 1.15.0, build e12f3b9
  • Django == 1.9.6
  • django-celery-beat == 1.0.1
  • celery == 4.1.0
  • celery[redis]
  • redis == 2.10.5

Problem:

My celery workers seem to be unable to connect to the redis container at localhost:6379. I can connect to the redis server on that port, and I can verify that redis-server is running inside the container.

When I manually connect to the Celery container and try to start a worker with celery -A backend worker -l info , I get the following error:

[2017-11-13 18:07:50,937: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address.. Trying again in 4.00 seconds...

Notes:

I can telnet to the redis container on port 6379, and redis-server is running inside the redis container.

Is there anything I'm missing? I've gone quite far down the rabbit hole, but I feel like I'm missing something very simple.

DOCKER CONFIGURATION FILES:

docker-compose.common.yml here
docker-compose.dev.yml here

2 answers

When you use docker-compose, you don't use localhost for inter-container communication; you use the hostname that Compose assigns to each container. In this case, your redis container's hostname is redis. The top-level entries under services: are your default hostnames.

So, for celery to connect to redis, you should try redis://redis:6379/0 . Since the protocol and the service name happen to be the same here, I'll elaborate a little: if you had named your redis service "butter-pecan-redis" in your docker-compose file, you would instead use redis://butter-pecan-redis:6379/0 .

In addition, docker-compose.dev.yml does not seem to put celery and redis on a shared network, which could prevent them from seeing each other. I believe they need to share at least one network in order to resolve each other's hostnames.

The networking section of the docker-compose documentation has an example of this in its first part, with a sample docker-compose.yml to look at.
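For illustration, a minimal sketch of that idea (the service names, the network name, and the CELERY_BROKER_URL environment variable are assumptions for the example, not taken from the asker's actual files; it also assumes your Django settings read the broker URL from that variable):

 # Minimal sketch: both services join the same user-defined network,
 # so "redis" resolves as a hostname inside the celery container.
 version: '2.1'

 services:
   redis:
     image: redis
     networks:
       - app-net

   celery:
     image: backend:local                       # placeholder image name
     command: celery -A backend worker -l info
     environment:
       CELERY_BROKER_URL: redis://redis:6379/0  # service name, not localhost
     depends_on:
       - redis
     networks:
       - app-net

 networks:
   app-net: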


You may need to add links and depends_on sections to your docker-compose file, and then reference the containers by their hostnames.

Updated docker-compose.yml:

 version: '2.1'
 services:
   db:
     image: postgres
   memcached:
     image: memcached
   redis:
     image: redis
     ports:
       - '6379:6379'
   backend-base:
     build:
       context: .
       dockerfile: backend/Dockerfile-base
     image: "/backend:base"
   backend:
     build:
       context: .
       dockerfile: backend/Dockerfile
     image: "/backend:${ENV:-local}"
     command: ./wait-for-it.sh db:5432 -- gunicorn backend.wsgi:application -b 0.0.0.0:8000 -k gevent -w 3
     ports:
       - 8000
     links:
       - db
       - redis
       - memcached
     depends_on:
       - db
       - redis
       - memcached
   celery:
     image: "/backend:${ENV:-local}"
     command: ./wait-for-it.sh db:5432 -- celery worker -E -B --loglevel=INFO --concurrency=1
     environment:
       C_FORCE_ROOT: "yes"
     links:
       - db
       - redis
       - memcached
     depends_on:
       - db
       - redis
       - memcached
   frontend-base:
     build:
       context: .
       dockerfile: frontend/Dockerfile-base
       args:
         NPM_REGISTRY: http://.view.build
         PACKAGE_INSTALLER: yarn
     image: "/frontend:base"
     links:
       - db
       - redis
       - memcached
     depends_on:
       - db
       - redis
       - memcached
   frontend:
     build:
       context: .
       dockerfile: frontend/Dockerfile
     image: "/frontend:${ENV:-local}"
     command: 'bash -c ''gulp'''
     working_dir: /app/user
     environment:
       PORT: 3000
     links:
       - db
       - redis
       - memcached
     depends_on:
       - db
       - redis
       - memcached

Then configure the URLs for redis, postgres, memcached, etc. as follows (one way to pass them into the containers is sketched after the list):

  • redis://redis:6379/0
  • postgres://user:pass@db:5432/database
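For example, these could be supplied to the backend and celery containers through environment variables in the compose file (the variable names below are assumptions; use whatever your settings module actually reads):

 # Sketch: connection URLs as environment variables; the hostnames are the
 # compose service names (redis, db, memcached), not localhost.
   backend:
     environment:
       CELERY_BROKER_URL: redis://redis:6379/0
       DATABASE_URL: postgres://user:pass@db:5432/database
       MEMCACHED_LOCATION: memcached:11211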

Source: https://habr.com/ru/post/1273328/

