Check with `file` and `ldd` that your executable really is a 64-bit one.
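To complement `file` and `ldd`, a minimal sketch that checks the pointer width from inside the program itself (pointers are 8 bytes wide on a 64-bit build, 4 on a 32-bit one):

```cpp
#include <cstdio>

int main() {
    // sizeof(void*) is 8 on a 64-bit build, 4 on a 32-bit build.
    std::printf("pointer size: %zu bytes -> this is a %s executable\n",
                sizeof(void*), sizeof(void*) == 8 ? "64-bit" : "32-bit");
    return 0;
}
```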
Also check resource limits. From inside the process you can use the `getrlimit` syscall (and `setrlimit` to change them, when permitted). From `bash`, try `ulimit -a`; from `zsh`, try `limit`.
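For instance, a small sketch querying the two limits that most often bite here; `RLIMIT_AS` in particular caps the whole virtual address space, so a `malloc` beyond it fails even when the machine has free RAM:

```cpp
#include <cstdio>
#include <sys/resource.h>

int main() {
    struct rlimit rl;
    // RLIMIT_AS caps the total virtual address space of the process.
    if (getrlimit(RLIMIT_AS, &rl) == 0) {
        if (rl.rlim_cur == RLIM_INFINITY)
            std::printf("RLIMIT_AS: unlimited\n");
        else
            std::printf("RLIMIT_AS: %llu bytes\n",
                        (unsigned long long)rl.rlim_cur);
    }
    // RLIMIT_DATA caps the data segment (matters for brk-based allocation).
    if (getrlimit(RLIMIT_DATA, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
        std::printf("RLIMIT_DATA: %llu bytes\n",
                    (unsigned long long)rl.rlim_cur);
    return 0;
}
```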
Also check that your process really consumes the memory you think it does. If its pid is 1234, you can try `pmap 1234`. From inside the process you can read `/proc/self/maps` (or `/proc/1234/maps` from a terminal). There are also `/proc/self/smaps` and `/proc/self/status` (likewise `/proc/1234/smaps` and `/proc/1234/status`), and other useful files under `/proc/self/`.
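For example, a process can report its own virtual size and resident set by scanning `/proc/self/status` (Linux-specific, of course):

```cpp
#include <fstream>
#include <iostream>
#include <string>

int main() {
    // /proc/self/status contains, among others, VmSize (virtual memory)
    // and VmRSS (resident set size) for the current process.
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line))
        if (line.compare(0, 7, "VmSize:") == 0 ||
            line.compare(0, 6, "VmRSS:") == 0)
            std::cout << line << '\n';
    return 0;
}
```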
Check with `free` that you actually have the amount of memory (and swap space) you are counting on. You can add temporary swap space with `swapon /tmp/someswapfile` (after initializing the file with `mkswap`).
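The same totals that `free` prints can be queried from inside the process with the Linux-specific `sysinfo(2)` call; a sketch:

```cpp
#include <cstdio>
#include <sys/sysinfo.h>

int main() {
    struct sysinfo si;
    if (sysinfo(&si) != 0) { std::perror("sysinfo"); return 1; }
    // Sizes are reported in units of si.mem_unit bytes.
    const unsigned long long unit = si.mem_unit ? si.mem_unit : 1;
    std::printf("RAM : %llu MB total, %llu MB free\n",
                si.totalram * unit >> 20, si.freeram * unit >> 20);
    std::printf("swap: %llu MB total, %llu MB free\n",
                si.totalswap * unit >> 20, si.freeswap * unit >> 20);
    return 0;
}
```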
As a data point: I could run a 7 GB process (a huge `cc1` compilation) under GNU/Linux/Debian/Sid/AMD64 on a machine with 8 GB of RAM, both a few months ago and a couple of years ago.
You can also try a tiny test program that allocates several blocks of memory with `malloc`, say 32 MB each. Remember to write a few bytes inside each block (at least one per megabyte); otherwise, because Linux overcommits, the pages may never actually be committed.
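One possible version of such a test program (run it until it fails; note that with overcommit enabled, the OOM killer may terminate it before `malloc` ever returns NULL):

```cpp
#include <cstdio>
#include <cstdlib>

int main() {
    const std::size_t block  = 32u * 1024 * 1024;  // 32 MB per block
    const std::size_t stride = 1024 * 1024;        // touch one byte per MB
    for (std::size_t i = 1; ; ++i) {
        char* p = static_cast<char*>(std::malloc(block));
        if (!p) {
            std::printf("malloc failed after %zu MB\n", (i - 1) * 32);
            return 1;
        }
        // Writing into each megabyte forces the kernel to actually
        // commit the pages; untouched pages cost almost nothing.
        for (std::size_t off = 0; off < block; off += stride)
            p[off] = 1;
        std::printf("committed %zu MB\n", i * 32);
    }
}
```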
Standard C++ containers such as `std::map` or `std::vector` are reputed to consume more memory than one usually thinks, and for good reason: node-based containers like `std::map` pay several pointers of bookkeeping per element, and `std::vector` over-allocates to amortize growth.
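You can see this for yourself with a rough sketch (not a precise profiler) that replaces the global `operator new` to count bytes requested while filling a `std::map`; on a typical 64-bit implementation each node costs several times its 8-byte payload:

```cpp
#include <cstdio>
#include <cstdlib>
#include <map>
#include <new>

static std::size_t g_allocated = 0;  // total bytes requested via new

void* operator new(std::size_t n) {
    g_allocated += n;
    if (void* p = std::malloc(n)) return p;
    throw std::bad_alloc();
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int main() {
    const std::size_t n = 100000;
    const std::size_t before = g_allocated;
    std::map<int, int> m;
    for (int i = 0; i < (int)n; ++i) m[i] = i;
    const std::size_t payload = n * sizeof(std::pair<const int, int>);
    const std::size_t used = g_allocated - before;
    std::printf("payload %zu bytes, allocated %zu bytes (%.1fx)\n",
                payload, used, (double)used / (double)payload);
    return 0;
}
```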
If necessary, get more RAM. These days it's pretty cheap.