The way I find it useful to think about it: memory lives on disk, and RAM is just a fast cache over it. Instead of thinking "when I run out of RAM, the system will page my memory out to disk," I think, "when I have spare RAM, the system will cache my disk-backed memory in RAM."
This is backwards from how most people think about it, but I find it helps. RAM is just a performance optimization; the real limit on how much memory you can allocate is the available disk space.
Of course, it is more complicated than that. On a 32-bit operating system, each process gets a user address space of about 2 billion bytes. (The same goes for the kernel address space, but ignore that here.) Every page of memory you can touch, whether it currently lives in RAM or on disk, must be mapped somewhere in that address space. You can allocate more than 2 billion bytes, no problem, but you can only address 2 GB at a time. If you have 10 GB allocated, then at least 8 GB of it cannot be mapped into the address space at any given moment. To get at that memory, you have to unmap something else and then map the piece you want into the address space.
In addition, many things must occupy contiguous address space. For example, if you have a 1 MB stack, there must be a run of a million contiguous bytes reserved for it in the address space.
When people run "out of memory," they are almost never out of RAM; RAM is just a fast cache over the disk. Nor are they usually out of disk space; there is plenty of that. They are almost always in a situation where no suitable chunk of address space is left to satisfy the request.
The CLR memory manager does not implement these fancy map-and-unmap strategies for you; basically, you get a 2 GB address space and that's it. If you want to do something fancy, say with memory-mapped files, you have to write the code to manage the memory yourself.