My current configurations are:
```
> cat /proc/sys/vm/panic_on_oom
0
> cat /proc/sys/vm/oom_kill_allocating_task
0
> cat /proc/sys/vm/overcommit_memory
1
```
but when I run the task, it will still be killed.
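For reference, the three knobs can be read in one go. As I understand the kernel documentation (hedged summary, not a quote), my current values mean: do not panic on OOM but run the OOM killer (panic_on_oom=0), let the killer pick the worst-scoring task rather than the one that triggered the allocation (oom_kill_allocating_task=0), and always overcommit so allocations never fail up front (overcommit_memory=1):

```shell
# Read the three OOM-related sysctls at once.  With 0 / 0 / 1 the
# kernel overcommits freely, and on OOM it runs the killer against
# the highest-scoring task -- not necessarily the allocating one.
cat /proc/sys/vm/panic_on_oom \
    /proc/sys/vm/oom_kill_allocating_task \
    /proc/sys/vm/overcommit_memory
```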
```
> ./test/mem.sh
Killed
> dmesg | tail -2
[24281.788131] Memory cgroup out of memory: Kill process 10565 (bash) score 1001 or sacrifice child
[24281.788133] Killed process 10565 (bash) total-vm:12601088kB, anon-rss:5242544kB, file-rss:64kB
```
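Note that the dmesg lines say "Memory cgroup out of memory", which suggests the kill came from a cgroup memory limit rather than from the vm.* sysctls above. A quick sketch for checking which cgroup the shell is in (the output layout assumes cgroup v1 with a separate memory line; under v2 there is a single `0::/...` line):

```shell
# Which cgroup(s) is this shell in?  The memory controller line (v1)
# or the single unified line (v2) names the group whose limit applies.
cat /proc/self/cgroup
```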
Update
My tasks are scientific calculations that use a lot of memory, so it seems that overcommit_memory=1
may be the best choice.
Update 2
In fact, I am working on a data analysis project that uses more than 16G
of memory, but I was asked to limit it to approximately 5G
. It would be impossible to meet this requirement by optimizing the program itself, since the project runs many subcommands, and most of them do not accept parameters such as Xms
or Xmx
in Java.
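For what it's worth, a cap like the requested ~5G is usually imposed from outside the program, which sidesteps the per-subcommand flags entirely. A minimal sketch using ulimit (my assumption about the setup; `./run_analysis.sh` is a hypothetical entry point, not part of my project):

```shell
# Cap virtual memory for one subshell; children inherit the limit,
# so every subcommand launched from it is covered too.
( ulimit -v 5242880            # ~5 GiB, value is in kB
  ulimit -v                    # confirm the cap is in place
  # exec ./run_analysis.sh     # hypothetical project entry point
)
```

One caveat: ulimit -v caps each process's address space separately, whereas a memory cgroup caps the aggregate of the whole process tree.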
Update 3
It seems my system must overcommit. Be that as it may, @a3f, my applications apparently prefer to crash via xmalloc
when a memory allocation fails.
```
> cat /proc/sys/vm/overcommit_memory
2
> ./test/mem.sh
./test/mem.sh: xmalloc: .././subst.c:3542: cannot allocate 1073741825 bytes (4295237632 bytes allocated)
```
I do not want to give up, although all these scary tests have left me exhausted. So please show me the way to the light. ;)