I just want to limit the resources of some Docker containers in the docker-compose file. The reason is simple: several applications/services are running on the host. Therefore, I want to avoid that a single container can use, for example, all the memory, which would harm the other containers.
From the docs I learned that this can be done using resources. But it is nested beneath deploy. Therefore, I have to write my docker-compose file like in the following example:
php:
  image: php:7-fpm
  restart: always
  volumes:
    - ./www:/www
  deploy:
    resources:
      limits:
        memory: 512M
This gave me a warning:
WARNING: Some services (php) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
And that seems true: docker stats confirms that the container can use all the RAM of the host.
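(For reference, docker stats can also be pointed at a single container to narrow the check to one service; the name below is just whatever compose assigned, e.g. for a project called "myproject", which is an assumption here:)

docker stats myproject_php_1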
The documentation says:
Specify configuration related to the deployment and running of services. This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
But I do not need clustering. There seems to be no other way to limit resources using a docker-compose file. Why is it not possible to specify some kind of memory tag like the start parameters of docker run?
Example: docker run --memory=1g $imageName
This works great for a single container. But I cannot use this (at least not without breaking the clean separation of concerns), since I need to use two different containers.
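To illustrate the separation problem: without compose, every service needs its own hand-maintained docker run line. A sketch, where the container names and the second image are made-up placeholders:

docker run -d --memory=512m --name php php:7-fpm
docker run -d --memory=512m --name web nginx:latest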
Edit: Temporary workaround
I found out that I can use mem_limit directly after downgrading from version 3 to version 2 (placing version: '2' on top). But we are currently on version 3.1, so this is not a long-term solution. And the docs say that deploy.resources is the new replacement for version 2 options like mem_limit.
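For reference, a minimal sketch of that workaround applied to the service from above (version 2 syntax; mem_limit is documented there, everything else is copied from my example):

version: '2'
services:
  php:
    image: php:7-fpm
    restart: always
    volumes:
      - ./www:/www
    mem_limit: 512M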
Someday, version 2 will be deprecated. So resource management would then no longer be possible with the latest versions, at least not without a swarm? That looks like a regression to me, I can't believe it...