Volume is not shared between Docker Swarm nodes

I am having a problem sharing folders between Docker containers running on different Docker Swarm nodes. My swarm consists of one manager and two workers.

I use this build file to deploy applications:

```yaml
version: '3'
services:
  redis:
    image: redis:latest
    networks:
      - default
    ports:
      - 6379:6379
    volumes:
      - test-volume:/test
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]
  logstash:
    image: docker.elastic.co/logstash/logstash:5.2.2
    networks:
      - default
    volumes:
      - test-volume:/test
    deploy:
      placement:
        constraints: [node.role == worker]
networks:
  default:
    external: false
volumes:
  test-volume:
```

I can confirm that the folder was mounted successfully in both containers using `docker exec <container-id> ls /test`. But when I add a file to this folder with `docker exec <container-id> touch /test/file`, the second container does not see the created file.

How can I configure the swarm so that files appear in both containers?

1 answer

Volumes created in Docker with the default (local) driver are local to the node. So if you schedule both containers on the same host, they will share the volume, but when your containers are placed on different nodes, a separate volume is created on each node.

To share data across multiple nodes via bind mounts or volumes, you have the following options:

  • Use a clustered or shared file system such as GlusterFS, Ceph, or NFS mounted on all swarm nodes, and then point your service definition at the shared filesystem (a minimal sketch follows this list).

  • Use one of the many Docker volume drivers/plugins that provide shared storage, such as Flocker, among others.

  • Switch to Kubernetes and take advantage of automatic volume provisioning with multiple backends through storage classes and persistent volume claims (see the PersistentVolumeClaim sketch after this list).
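
For the first two options, one common pattern is to back the named volume with an existing NFS export (or a GlusterFS/CephFS share exposed over NFS) using the built-in local driver. This is only a sketch: the server address 10.0.0.10 and export path /exports/test are placeholder assumptions, and the export must be reachable from every swarm node.

```yaml
# Sketch only: redefines test-volume from the stack file above so that it is
# backed by a shared NFS export instead of node-local storage.
# 10.0.0.10 and /exports/test are hypothetical placeholders.
volumes:
  test-volume:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw"
      device: ":/exports/test"
```

Each node still creates its own volume object, but because they all mount the same export, a file written in one container becomes visible in the other.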
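For the Kubernetes option, a rough equivalent would be a PersistentVolumeClaim bound to a storage class. The class name shared-nfs below is a placeholder assumption; it must map to a backend that supports ReadWriteMany (e.g. NFS or CephFS) so pods on different nodes can mount the same volume.

```yaml
# Sketch of the Kubernetes equivalent: a claim for shared storage that pods
# on different nodes can mount simultaneously. "shared-nfs" is hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-volume
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: shared-nfs
  resources:
    requests:
      storage: 1Gi
```

Both pods would then reference this claim in their volume definitions, and Kubernetes attaches the same backing storage wherever they are scheduled.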

UPDATE: As @Krishna noted in the comments, Flocker has been shut down and there has not been much activity in its GitHub repo.


Source: https://habr.com/ru/post/1265180/
