We recently encountered a problem with Ruby inside a Docker container. Despite fairly low load, the application consumes huge amounts of memory, and after a while under that load it gets OOM-killed.
After some research, we narrowed the problem down to a one-liner:
docker run -ti -m 209715200 ruby:2.1 ruby -e 'while true do array = []; 3000000.times do array << "hey" end; puts array.length; end;'
Shortly after start it OOMed on some machines (the process was killed by the oom-killer because the memory limit was exceeded), but on others it ran, albeit slowly, without OOM. It seems (it only seems, maybe this is not so) that in some configurations Ruby is able to detect the cgroup limits and configure its GC accordingly.
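To check whether a given container even exposes the limit to the process, one can read the cgroup v1 file that Docker's `-m` flag sets. A minimal sketch, assuming the cgroup v1 layout of that Docker era; `cgroup_memory_limit` is a hypothetical helper name, not an existing API:

```ruby
# Hypothetical helper: read the memory limit set by Docker's -m flag,
# as exposed through the cgroup v1 filesystem inside the container.
def cgroup_memory_limit(path = "/sys/fs/cgroup/memory/memory.limit_in_bytes")
  return nil unless File.readable?(path) # not mounted on every setup (e.g. cgroup v2)
  Integer(File.read(path).strip)
end

limit = cgroup_memory_limit
puts limit ? "memory limit: #{limit} bytes" : "no cgroup v1 memory limit visible"
```

On hosts without a limit the file typically contains a very large sentinel value rather than being absent, which may be one reason a runtime "sees" different limits on different setups.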
Checked configurations:
- CentOS 7, Docker 1.9 - OOM
- CentOS 7, Docker 1.12 - OOM
- Ubuntu 14.10, Docker 1.9 - OOM
- Ubuntu 14.10, Docker 1.12 - OOM
- Mac OS X, Docker 1.12 - No OOM
- Fedora 23, Docker 1.12 - No OOM
If you look at the memory consumption of the ruby process, in all cases it behaved similarly to the picture below: it either stayed at the same level just below the limit, or ran into the limit and got killed.
[Image: memory consumption graph]
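The same growth can also be observed from inside Ruby via GC.stat. A small sketch mirroring the repro one-liner at a reduced scale; note the stat key names below are those of current Ruby versions, while Ruby 2.1 used slightly different singular names (heap_live_slot, total_allocated_object):

```ruby
# Allocate a large array of strings, as in the repro, then inspect GC state.
array = []
300_000.times { array << "hey" }

stats = GC.stat
puts "GC runs so far:    #{GC.count}"
puts "live heap slots:   #{stats[:heap_live_slots]}"
puts "objects allocated: #{stats[:total_allocated_objects]}"
```

Watching these numbers across iterations of the outer loop shows whether the GC ever shrinks the heap back down or keeps growing toward the container limit.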
We want to avoid OOM at all costs, since it reduces fault tolerance and creates a risk of data loss. The memory the application actually needs is below the limit.
Do you have any suggestions on what to do with Ruby to avoid OOM, possibly at the cost of some performance?
We cannot work out what the significant differences between the tested installations are.
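For reference, Ruby 2.1's GC can be tuned from outside the application via environment variables, so it would not count as a code change. A hedged sketch of the repro with capped growth limits; the concrete values are illustrative guesses, not tested recommendations:

```shell
# Cap how aggressively the Ruby heap and malloc limits may grow.
# Values are illustrative guesses, not tuned recommendations.
docker run -ti -m 209715200 \
  -e RUBY_GC_HEAP_GROWTH_MAX_SLOTS=100000 \
  -e RUBY_GC_MALLOC_LIMIT_MAX=33554432 \
  -e RUBY_GC_OLDMALLOC_LIMIT_MAX=33554432 \
  ruby:2.1 ruby -e 'while true do array = []; 3000000.times do array << "hey" end; puts array.length; end;'
```

Whether this keeps the process under the cgroup limit would still need to be verified on the OOMing hosts.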
Edit: changing the code or increasing the memory limit is not an option. The first, because we run community plugins whose code we do not control; the second, because it does not guarantee we will not hit this problem again in the future.