First, make sure that you are running on a 64-bit processor in 64-bit mode. On a 32-bit CPU, the address space of your process is only 2^32 bytes (four gigabytes), and there is no way to map 100 GB into it at once; there simply aren't enough addresses. (Besides, much of that address space will already be taken by other mappings or reserved by the kernel.)
Second, problems can arise even if the mapping fits into the address space. Memory mapped into your process (this includes, for example, your program's code and data segments, as well as shared libraries) is managed in units of pages (usually 4 KB each on x86), and every page requires some metadata in the kernel and in the MMU. That is another resource that can be exhausted by huge memory mappings.
As pointed out in mmap() the entire large file , you can try using MAP_SHARED . That may allow the kernel to allocate memory for the mapping lazily, as pages of it are touched, because it knows it can always swap a page back out to the file on disk if memory runs low. With MAP_PRIVATE , the kernel has to allocate a new page every time a page is modified (since the change must not be carried through to the file), which would be unsafe to do lazily in case the system runs out of memory and swap.
You may also need to pass MAP_NORESERVE to mmap() when mapping more memory than there is physical memory, or set /proc/sys/vm/overcommit_memory (see proc(5) ) to 1 (which is a bit ugly because of its system-wide effect).
On my system, which like yours has 8 GB of RAM and 8 GB of swap, MAP_SHARED alone is enough to mmap() a 40 GB file. MAP_PRIVATE also works when combined with MAP_NORESERVE .
If this does not work, you are probably running into an MMU limit. Many modern processor architectures support huge pages, which are larger than the default page size. The point of huge pages is that you need fewer pages to cover the same amount of memory (assuming a large mapping), which reduces the amount of metadata and can make address translation and context switches more efficient. The downside of huge pages is coarser mapping granularity and more waste (internal fragmentation) when only a small part of a page is actually used.
By the way, MAP_SHARED of some random file with huge pages is unlikely to work (in case MAP_SHARED alone was not enough to fix the problem): the file has to be on hugetlbfs .
Passing MAP_HUGETLB to mmap() requests allocation using huge pages (although this might only be for anonymous mappings, where huge pages also seem to be automatic on many systems nowadays). You may also need to fiddle with /proc/sys/vm/nr_hugepages and /proc/sys/vm/nr_overcommit_hugepages - see this thread and the file Documentation / vm / hugetlbpage.txt in the kernel sources.
Beware of alignment issues if you write your own memory allocator on top of the mapping; see this answer .
As a side note, any memory you access through a file-backed mapping must actually exist in the file. If the file is smaller than the mapping and you still want to access the "extra" memory, you can grow the file first using ftruncate(2) . (This may not increase the on-disk size much if the file system supports sparse files with holes.)