Does the Mono / .NET GC free allocated memory back to the OS after collection? If not, why?

I have heard many times that once a managed C# program requests more memory from the OS, it never releases it back, even when the system is short of memory. For instance, when an object is collected, it is gone and the memory it occupied can be reused by other managed objects, but the memory itself is not returned to the operating system (e.g., Mono on Unix will not call brk/sbrk to shrink the process's virtual memory back to what it was before the allocation).

I don't know whether this is really what happens, but I can see that my C# applications running on Linux use a small amount of memory at startup, then allocate much more when I do something expensive, and later, when all those objects are collected (which I can verify by putting debug messages in finalizers), the memory is never freed. On the other hand, no new memory is allocated when I repeat the expensive operation; the program simply keeps consuming the same amount of memory until it exits.
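For what it's worth, here is a minimal C# sketch of the experiment I am describing (the sizes and the 200-iteration loop are arbitrary, just enough to make the effect visible): the managed-heap figure drops back after a collection, but the process-level figure reported by `top` usually does not.

```csharp
using System;

class MemoryGrowthDemo
{
    static void Main()
    {
        Console.WriteLine("Managed heap at start: {0:N0} bytes", GC.GetTotalMemory(false));

        // Do something "expensive": allocate ~200 MB of short-lived garbage.
        for (int i = 0; i < 200; i++)
        {
            var chunk = new byte[1024 * 1024]; // 1 MB, unreachable after this iteration
            chunk[0] = 1;                      // touch it so the page is really committed
        }

        // Force a full collection so all those arrays are reclaimed.
        GC.Collect();
        GC.WaitForPendingFinalizers();

        Console.WriteLine("Managed heap after collect: {0:N0} bytes", GC.GetTotalMemory(true));
        Console.WriteLine("Process working set:        {0:N0} bytes", Environment.WorkingSet);

        // Typical observation: the managed heap figure drops back down, but the
        // resident size reported by `top` stays high, because the runtime keeps
        // the freed pages around for future allocations.
        Console.ReadLine(); // pause here and inspect the process from outside
    }
}
```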

Perhaps this is just my misunderstanding of how the GC in .NET works, but if it does work this way, then why? What is the advantage of keeping the allocated memory for later instead of returning it to the system? How can the runtime even know whether the system needs the memory back or not? And what about other applications that might crash or fail to start because of an OOM condition caused by this behavior?

I know people will probably answer something like "the GC manages memory better than you ever could, just don't worry about it", or "the GC knows what it's doing", or "it costs nothing, it's just virtual memory". But it does matter: on my 2 GB laptop I hit OOM (and the kernel OOM killer kicks in because of it) quite often after running C# applications for a while, precisely because of this irresponsible memory management.

Note: I tested all of this with Mono on Linux, because it is really hard for me to understand how Windows manages memory, so debugging on Linux is much easier for me. Besides, Linux memory management is open source, while the memory management of the Windows kernel and .NET is pretty much a riddle to me.

+6
2 answers

The memory manager works this way because there is no benefit to having a lot of free system memory when nothing needs it.

If the memory manager always tried to hold on to as little memory as possible, it would do a lot of work for no reason. That would only slow the application down, and the only benefit would be more free memory that no application is using.

Whenever the system needs more memory, it signals running applications to return as much as they can. On Windows, the same signal is also sent to an application when it is minimized.

If this does not happen with Mono on Linux, that is a problem specific to that implementation.
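To illustrate the minimize-time trim mentioned above, here is a hedged, Windows-only sketch using the Win32 SetProcessWorkingSetSize API. Note that it only trims resident pages; it does not release the address space the GC has reserved.

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class WorkingSetTrim
{
    // Passing (IntPtr)(-1) for both sizes asks Windows to trim the process
    // working set; this is essentially what happens to a GUI application
    // when it is minimized.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetProcessWorkingSetSize(
        IntPtr process, IntPtr minWorkingSetSize, IntPtr maxWorkingSetSize);

    public static void Trim()
    {
        using (Process self = Process.GetCurrentProcess())
        {
            SetProcessWorkingSetSize(self.Handle, (IntPtr)(-1), (IntPtr)(-1));
        }
        // This only pages memory out of RAM; the virtual address space the GC
        // reserved stays reserved, and the call does nothing on Mono/Linux.
    }
}
```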

+4

Typically, if an application needed memory once, it will need it again. Releasing memory back to the OS only to request it again soon afterwards is overhead, and if nothing else wants the memory: why bother? The runtime is optimizing for the very likely scenario that the memory will be wanted again. Besides, giving memory back requires whole, contiguous blocks that can be returned, which interacts heavily with things like compaction: it is not as simple as "hey, I'm not using most of this, take it back". You need to work out which blocks can be released at all, presumably after a full collect-and-compact cycle (moving objects around, etc.).
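As a sketch of what triggering such a collect-and-compact cycle looks like from user code: on .NET 4.5.1+ you can ask the next blocking collection to also compact the large object heap. Whether Mono honors this setting, and whether the runtime actually returns the freed segments to the OS afterwards, depends on the runtime version, so treat this as illustrative.

```csharp
using System;
using System.Runtime;

class CompactingCollectDemo
{
    static void Main()
    {
        // A 100 MB array lands on the large object heap (anything over 85,000 bytes).
        var big = new byte[100 * 1024 * 1024];
        big[0] = 1;
        big = null; // make it garbage

        // Ask the next blocking collection to also compact the LOH, so the freed
        // space ends up as whole, contiguous blocks that the runtime *can* hand
        // back to the OS. Whether it actually does so is up to the runtime.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();

        Console.WriteLine("Heap after compacting collect: {0:N0} bytes",
                          GC.GetTotalMemory(true));
    }
}
```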

+3

Source: https://habr.com/ru/post/970866/

