Why is the process being killed at 4 GB?

I wrote a program that works with a huge data set. My processor and OS (Ubuntu) are both 64-bit, and I have 4 GB of RAM. Watching "top" (the %MEM field), I saw that the memory consumption of the process grew to about 87%, i.e. 3.4+ GB, and then the process was killed.

Then I checked how much memory the process is allowed to get using "ulimit -m", which comes out as "unlimited".

Now, since the OS and CPU are 64-bit, and there is also a swap partition, the OS should have used virtual memory, i.e. [> 3.4 GB + y GB from the swap space], and the process should have been killed only if it required even more memory than that.

So, I have the following questions:

  • How much physical memory can a process on a 64-bit machine theoretically access? My answer is 2^48 bytes.
  • If there are less than 2^48 bytes of physical memory, then the OS should use virtual memory, right?
  • If the answer to the above is YES, then the OS should also have used the swap space; why did it kill the process without using it? I don't think we need to call any specific system calls in our program's code for this to happen.

Please advise.

+4
3 answers

Check with file and ldd that your executable really is a 64-bit binary.

Also check resource limits. From inside the process, you can use the getrlimit system call (and setrlimit to change them when possible). From bash, try ulimit -a. From zsh, try limit.
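For instance, here is a minimal sketch (my own illustration, not part of the original answer) of querying a few of those limits from inside the process with getrlimit:

    #include <stdio.h>
    #include <sys/resource.h>

    /* Print one resource limit, soft and hard, showing "unlimited" where appropriate. */
    static void show_limit(const char *name, int resource)
    {
        struct rlimit rl;
        if (getrlimit(resource, &rl) != 0) {
            perror(name);
            return;
        }
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("%s: soft=unlimited", name);
        else
            printf("%s: soft=%llu", name, (unsigned long long)rl.rlim_cur);
        if (rl.rlim_max == RLIM_INFINITY)
            printf(", hard=unlimited\n");
        else
            printf(", hard=%llu\n", (unsigned long long)rl.rlim_max);
    }

    int main(void)
    {
        show_limit("RLIMIT_AS (address space, bytes)", RLIMIT_AS);
        show_limit("RLIMIT_DATA (data segment, bytes)", RLIMIT_DATA);
        show_limit("RLIMIT_STACK (stack, bytes)", RLIMIT_STACK);
        return 0;
    }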

Also check that your process really consumes the memory you think it does. If its pid is 1234, you can try pmap 1234. From inside the process, you can read /proc/self/maps, or /proc/1234/maps from a terminal. There are also /proc/self/smaps or /proc/1234/smaps, /proc/self/status or /proc/1234/status, and other files inside your /proc/self/ ...
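As a rough illustration (again my own sketch), a process can print its own VmSize and VmRSS lines from /proc/self/status:

    #include <stdio.h>
    #include <string.h>

    /* Print the VmSize (virtual size) and VmRSS (resident set) lines
     * of our own /proc/self/status. */
    int main(void)
    {
        FILE *f = fopen("/proc/self/status", "r");
        char line[256];

        if (!f) {
            perror("/proc/self/status");
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }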

Check with free that you actually have the memory (and swap space) you are counting on. You can add temporary swap space with swapon /tmp/someswapfile (after initializing the file with mkswap).

For what it's worth, I have run a 7 GB process (a huge cc1 compilation) under GNU/Linux/Debian/Sid/AMD64 on a machine with 8 GB of RAM, a few months ago (and also a couple of years ago).

You can also try a tiny test program that allocates many blocks of memory with malloc, for example 32 MB each. Remember to write a few bytes inside each block (at least one byte per megabyte), as sketched below.
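A possible sketch of such a test program (block size and count chosen arbitrarily; touching the memory forces the kernel to actually commit the pages):

    #include <stdio.h>
    #include <stdlib.h>

    #define BLOCK_SIZE (32UL * 1024 * 1024)   /* 32 MB per block */
    #define MAX_BLOCKS 1024                   /* up to 32 GB in total */

    int main(void)
    {
        size_t i, j;

        for (i = 0; i < MAX_BLOCKS; i++) {
            char *p = malloc(BLOCK_SIZE);
            if (!p) {
                fprintf(stderr, "malloc failed after %zu MB\n", i * 32);
                return 1;
            }
            /* Write at least one byte per megabyte so the pages are really used. */
            for (j = 0; j < BLOCK_SIZE; j += 1024 * 1024)
                p[j] = 1;
            printf("allocated and touched %zu MB so far\n", (i + 1) * 32);
        }
        return 0;
    }

Watching this in top (or free) shows how far the allocation gets before malloc starts returning NULL or the process is killed.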

Standard C++ containers such as std::map or std::vector are rumored to consume more memory than we usually think.

If necessary, get more RAM. These days it's pretty cheap.

+1

The data size may not be the only reason. For example, run ulimit -a and check the maximum stack size. Do you know the reason for the kill? Set 'ulimit -c 20000' to get a core file; it will show you the reason when you examine it with gdb.
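If you prefer to do this from inside the program rather than the shell, here is a sketch of raising the core-file limit with setrlimit (my own addition; note that setrlimit takes bytes, while bash's ulimit -c counts 1024-byte blocks):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Allow a core dump of roughly the same size as "ulimit -c 20000". */
    int main(void)
    {
        struct rlimit rl = { 20000L * 1024, 20000L * 1024 };

        if (setrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("setrlimit(RLIMIT_CORE)");
            return 1;
        }
        /* ... run the memory-hungry code here; a crash now leaves a core file for gdb ... */
        return 0;
    }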

+2

Literally EVERYTHING has to fit into what can be addressed, including the graphics adapters, the OS kernel, the BIOS, and so on, and the amount that can be addressed cannot be extended by swap either.

It is also worth noting that the process itself has to be 64-bit. And some operating systems can become unstable, and therefore kill the process, if you use an excessive amount of RAM with it.

0

Source: https://habr.com/ru/post/1388820/

