Languages that run their code inside a virtual machine (for example Java (*), C#, or Python) usually reserve large amounts of (virtual) memory right at startup. Part of this is needed by the virtual machine itself, and part is pre-allocated so it can be handed out to the application running inside the virtual machine.
With languages running under the direct control of the OS (for example C or C++), this is not necessary. You can write applications that dynamically use only the amount of memory they actually need. However, some applications/frameworks are still designed to request a large chunk of memory from the operating system once and then manage it themselves, in the hope that this will be more efficient than leaving it to the OS.
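To make that pattern concrete, here is a minimal sketch, assuming a toy bump allocator over one big pre-allocated pool; the pool size and helper names are made up for illustration and are not taken from any particular framework:

```c
/* Sketch of the "grab a big block once, manage it yourself" pattern.
 * This is a toy bump allocator, not any real framework's code. */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

#define POOL_SIZE ((size_t)64 * 1024 * 1024)   /* reserve 64 MiB up front (arbitrary) */

static char  *pool;        /* the one big block obtained from the OS */
static size_t pool_used;   /* bump pointer into that block */

/* Hand out memory from the pre-allocated pool instead of calling malloc() each time. */
static void *pool_alloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;   /* keep 16-byte alignment */
    if (pool_used + size > POOL_SIZE)
        return NULL;                    /* pool exhausted */
    void *p = pool + pool_used;
    pool_used += size;
    return p;
}

int main(void)
{
    pool = malloc(POOL_SIZE);           /* one single request to the OS */
    if (!pool) {
        perror("malloc");
        return 1;
    }

    int  *numbers = pool_alloc(1000 * sizeof *numbers);
    char *buffer  = pool_alloc(4096);
    printf("pool in use after two allocations: %zu bytes\n", pool_used);

    (void)numbers; (void)buffer;
    free(pool);                         /* everything goes back to the OS at once */
    return 0;
}
```

Note that the whole pool is claimed whether the application ever needs all of it or not, which is exactly what the points below take issue with.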
There are problems with this:
It is not necessarily faster. Most operating systems are already pretty smart about how they manage their memory. Rule number one of optimization: measure, optimize, measure.
Not all operating systems have virtual memory. There are some quite capable ones that cannot run applications that are this "sloppy", assuming they can cheaply allocate lots of "not real" memory.
You have already discovered that if you switch your OS from “generous” to “strict”, these memory hogs fall flat on their face. ;-)
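To illustrate those last two points, here is a small probe, assuming a 64-bit system; whether the oversized malloc appears to succeed depends entirely on the OS and its overcommit settings, so treat it as an experiment rather than a guarantee:

```c
/* Under a "generous" OS (memory overcommit enabled), a very large malloc()
 * may appear to succeed because no real memory is committed until the pages
 * are touched. Under a "strict" OS or strict overcommit accounting, the same
 * call fails immediately. The exact outcome depends on RAM, swap and the
 * overcommit settings. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t huge = (size_t)16 << 30;   /* ask for 16 GiB of address space */

    char *p = malloc(huge);
    if (p == NULL) {
        /* A strict OS (or strict overcommit accounting) refuses up front. */
        perror("malloc refused");
        return 1;
    }
    printf("malloc of 16 GiB returned a pointer -- but the pages are not backed yet\n");

    /* Touching the pages is what forces the OS to really provide the memory;
     * on an overcommitting system this is where the process can get killed,
     * so it is left commented out. */
    /* memset(p, 1, huge); */

    free(p);
    return 0;
}
```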
(*) Java, for example, cannot grow its virtual machine beyond the maximum size fixed at startup. You must specify that maximum as a parameter (-Xmx<size>, e.g. -Xmx512m). "Better safe than sorry" thinking leads to serious over-allocation by certain people/applications.