Update
Apparently, the effect of memory reservation in Docker swarm mode is not well documented, and it works on a best-effort basis. To understand the effect of the memory reservation flag, check the documentation:
When memory reservation is set, Docker detects memory contention or low memory and forces containers to restrict their consumption to a reservation limit.
...
Memory reservation is a soft-limit feature and does not guarantee the limit won't be exceeded. Instead, the feature attempts to ensure that, when memory is heavily contended for, memory is allocated based on the reservation hints/setup.
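If you need a hard cap in addition to the soft reservation, the compose v3 deploy section also accepts limits alongside reservations; a minimal fragment (the 50M/20M values are only illustrative):

```yaml
deploy:
  resources:
    limits:
      memory: 50M        # hard cap: the container is killed/throttled past this
    reservations:
      memory: 20M        # soft reservation used under memory contention
```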
To guarantee that no other container runs on the same node, you need to set placement constraints for your services. What you can do is tag specific nodes with swarm node labels and use those labels to schedule services so they run only on nodes carrying the matching label.
As described here, node labels can be added to a node using the command:
docker node update --label-add hello-world=yes <node-name>
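To verify that the label was applied (or to remove it again later), you can inspect and update the node; the same <node-name> placeholder is assumed:

```shell
# Show the labels currently set on the node
docker node inspect --format '{{ .Spec.Labels }}' <node-name>

# Remove the label again if it is no longer needed
docker node update --label-rm hello-world <node-name>
```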
Then, inside your stack file, you can constrain one container to run only on nodes that carry the label, and the other container to avoid nodes labeled hello-world=yes.
my-service:
  image: hello-world
  deploy:
    placement:
      constraints:
        - node.labels.hello-world == yes
other-service:
  ...
  deploy:
    placement:
      constraints:
        - node.labels.hello-world == no
If you want to run replicas of my-service on multiple nodes and still have exactly one container running on each node, you need to set my-service to global mode and add the same label to every node on which the container should run.
Global mode ensures that exactly one container runs on every node that satisfies the service's constraints:
my-service:
  image: hello-world
  deploy:
    mode: global
    placement:
      constraints:
        - node.labels.hello-world == yes
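After deploying the stack, you can confirm which nodes the tasks actually landed on; my-stack here is a placeholder for your own stack name:

```shell
# Deploy the stack, then list the tasks of the service with their nodes
docker stack deploy -c docker-compose.yml my-stack
docker service ps my-stack_my-service
```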
Old answer:
You can set a resource reservation like this:
version: '3'
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        reservations:
          cpus: '1'
          memory: 20M
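The same reservation can also be expressed on the command line when creating a service directly, using the swarm-mode flags; the service name is illustrative:

```shell
# Create a service with a CPU and memory reservation (soft limits)
docker service create --name redis \
  --reserve-cpu 1 \
  --reserve-memory 20M \
  redis:alpine
```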