Processes that exceed thread stack size limits on Red Hat Enterprise Linux 6?

I have several processes running on RHEL 6.3 that, for some reason, exceed their configured thread stack sizes.

For example, the Java process is started with a 256 KB stack size via -Xss256k, and the C++ process sets a 1 MB thread stack size with pthread_attr_setstacksize() in the code itself.
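(For reference, this is roughly how the C++ side requests that 1 MB stack; a minimal sketch only, since the real worker code isn't shown here. Compile with g++ -pthread.)

 #include <pthread.h>
 #include <cstdio>
 #include <cstring>

 static void* worker(void*) {
     // ... per-thread work ...
     return NULL;
 }

 int main() {
     pthread_attr_t attr;
     pthread_attr_init(&attr);

     // Request a 1 MB stack for every thread created with this attr object.
     int rc = pthread_attr_setstacksize(&attr, 1024 * 1024);
     if (rc != 0)
         std::fprintf(stderr, "pthread_attr_setstacksize: %s\n", std::strerror(rc));

     pthread_t tid;
     rc = pthread_create(&tid, &attr, worker, NULL);
     if (rc != 0)
         std::fprintf(stderr, "pthread_create: %s\n", std::strerror(rc));
     else
         pthread_join(tid, NULL);

     pthread_attr_destroy(&attr);
     return 0;
 }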

However, neither process seems to honour these limits, and I'm not sure why.

For example, when I run

pmap -x <pid> 

for the C++ and Java processes, I see hundreds of "anon" blocks for each (which I have confirmed belong to the internal worker threads created by these processes), but they are allocated 64 MB each, not the limits set above:

 00007fa4fc000000     168      40      40 rw---   [ anon ]
 00007fa4fc02a000   65368       0       0 -----   [ anon ]
 00007fa500000000     168      40      40 rw---   [ anon ]
 00007fa50002a000   65368       0       0 -----   [ anon ]
 00007fa504000000     168      40      40 rw---   [ anon ]
 00007fa50402a000   65368       0       0 -----   [ anon ]
 00007fa508000000     168      40      40 rw---   [ anon ]
 00007fa50802a000   65368       0       0 -----   [ anon ]
 00007fa50c000000     168      40      40 rw---   [ anon ]
 00007fa50c02a000   65368       0       0 -----   [ anon ]
 00007fa510000000     168      40      40 rw---   [ anon ]
 00007fa51002a000   65368       0       0 -----   [ anon ]
 00007fa514000000     168      40      40 rw---   [ anon ]
 00007fa51402a000   65368       0       0 -----   [ anon ]
 00007fa518000000     168      40      40 rw---   [ anon ]
 ...

But when I check the limits of one of the processes that has all these 64 MB "anon" blocks,

 cat /proc/<pid>/limits | grep stack
 Max stack size            1048576              1048576              bytes

it shows a maximum stack size of 1 MB, so I'm a bit confused as to what is going on here. In addition, the script that invokes these programs sets "ulimit -s 1024".

It should be noted that this only happens when using very high-performance machines (for example, 48 GB of RAM, 24 processor cores). The problem does not appear on less powerful machines (for example, 4 GB of RAM, 2 CPU cores).

Any help in understanding what is going on here would be greatly appreciated.

+4
4 answers

It turns out that glibc 2.11 (the malloc that RHEL 6 is based on) changed the threading model so that each thread, where possible, gets its own memory arena (pool), each of which can reserve up to 64 MB of virtual address space on a larger system. On 64-bit systems the maximum number of arenas is higher (it scales with the number of cores), which is why the big machine shows so many of them.

The fix for this was to add

 export LD_PRELOAD=/path/to/libtcmalloc.so 

in the script that starts the processes, so that tcmalloc is used for allocations instead of the glibc 2.11 malloc.
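If you control the C++ source, an in-process alternative (not the fix used above, just a sketch of the same idea) is to cap the number of malloc arenas with glibc's mallopt() before any threads start. This assumes a glibc build that honours M_ARENA_MAX, which is what the MALLOC_ARENA_MAX workaround linked below relies on:

 #include <malloc.h>   // glibc mallopt()
 #include <cstdio>

 #ifndef M_ARENA_MAX
 #define M_ARENA_MAX -8   // not exposed by some older glibc headers
 #endif

 int main() {
     // Limit glibc malloc to 4 arenas; this must run before any threads
     // are created, otherwise extra 64 MB arenas may already exist.
     if (mallopt(M_ARENA_MAX, 4) == 0)
         std::fprintf(stderr, "mallopt(M_ARENA_MAX) not supported\n");

     // ... start worker threads / the rest of the program here ...
     return 0;
 }

The Hadoop ticket below takes the equivalent environment-variable route (MALLOC_ARENA_MAX=4) so that no code change is needed.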

Additional information on this is available through:

Linux glibc >= 2.10 (RHEL 6) malloc may show excessive use of virtual memory

glibc bug malloc uses excess memory for multi-threaded applications http://sourceware.org/bugzilla/show_bug.cgi?id=11261

Apache Hadoop fixed the problem by setting MALLOC_ARENA_MAX: https://issues.apache.org/jira/browse/HADOOP-7154

+6

The stack size reported in /proc/1234/limits is set with setrlimit(2) (perhaps by PAM at login).

I have no real idea why the actual stack segments appear to be 64 MB each. Perhaps your large server uses huge pages (and your desktop does not).

You could call setrlimit (perhaps with the bash ulimit built-in or the zsh limit built-in), e.g. in the script calling your program.
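If you prefer doing this from the program itself rather than the wrapper script, a minimal sketch of the setrlimit(2) call might look like this (1 MB chosen to match the "ulimit -s 1024" above):

 #include <sys/resource.h>
 #include <cstdio>

 int main() {
     struct rlimit rl;
     if (getrlimit(RLIMIT_STACK, &rl) != 0) {
         std::perror("getrlimit");
         return 1;
     }

     // Lower the soft stack limit to 1 MB; equivalent to "ulimit -s 1024".
     rl.rlim_cur = 1024 * 1024;
     if (setrlimit(RLIMIT_STACK, &rl) != 0)
         std::perror("setrlimit");

     // ... create threads or exec the real program here ...
     return 0;
 }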

0

You can use ulimit -s <size_in_KB> to set the maximum stack size for processes. You can see the current limit using ulimit -s too.

0

@Rory According to your answer, the 64 MB blocks should be heap (arena) addresses, but the address here looks like 00007fa50c02a000, which is a stack address, right?

0
