Linux overcommits memory - why?

While reading about the recently publicized Apache security vulnerability, I learned that Linux uses an interesting memory allocation strategy: malloc essentially always succeeds, and memory is only allocated lazily, on first use. If the system later runs out of memory, it picks a process somewhat heuristically (weighting factors such as being unprivileged, having allocated a lot of memory, and so on) and kills it.
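To make this concrete, here is a rough sketch (my own illustration, assuming a 64-bit Linux system with the default overcommit heuristic) of what I mean: the up-front malloc of far more memory than the machine has may well succeed, and real memory is only consumed when the pages are first written, at which point the OOM killer, rather than a NULL return, is what signals exhaustion.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Ask for 64 GiB, which is assumed here to exceed RAM + swap. */
    size_t huge = (size_t)64 * 1024 * 1024 * 1024;

    char *p = malloc(huge);
    if (p == NULL) {
        /* With strict accounting (vm.overcommit_memory=2) failure
         * would be reported here, up front. */
        fprintf(stderr, "malloc failed up front\n");
        return 1;
    }
    printf("malloc of %zu bytes succeeded; touching pages...\n", huge);

    /* Each write faults in a page, so physical memory is consumed only
     * now. If the system runs out, the OOM killer terminates some
     * process (quite possibly this one) instead of malloc ever having
     * returned NULL. */
    for (size_t i = 0; i < huge; i += 4096)
        p[i] = 1;

    puts("survived touching all pages");
    free(p);
    return 0;
}
```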

It seems to me that this is not only non-conforming behavior (in C at least; see, for example, the question "Optimistic Linux malloc: will it always throw when out of memory?") that deprives a program of any chance to handle low-memory conditions gracefully, but it also takes no great stretch of the imagination to come up with scenarios where killing an arbitrary process leads to catastrophic data loss or corruption, or to outright crashes. Why is this design considered a good idea? Am I missing something? What was the design rationale?
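By "handle gracefully" I mean the conventional C pattern sketched below (my own minimal example): check malloc's return value and take a recovery path. Under default overcommit that check rarely fires, because the failure is deferred to page-fault time and surfaces as an OOM kill instead.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 1 << 20;
    int *buf = malloc(n * sizeof *buf);
    if (buf == NULL) {
        /* The graceful path: report the error, free caches, save state,
         * shut down cleanly. With overcommit enabled, control may never
         * reach this branch even when memory is effectively exhausted. */
        fprintf(stderr, "out of memory, shutting down cleanly\n");
        return 1;
    }
    /* ... use buf ... */
    free(buf);
    return 0;
}
```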

Source: https://habr.com/ru/post/1369011/
