Are Unix / Linux systems vulnerable to leakage of global kernel objects?

Windows has kernel objects managed by the system: events, open file handles, windows, timers, and so on. They are not unlimited; all programs in the system together can create something on the order of 50K objects (I'm not sure of the exact figure, but that is not important for this question).

So if a program runs for a very long time, creates a lot of objects, and never releases them (just like a memory leak, except that system objects leak here), the system eventually runs out of objects, and any other program that tries to do something requiring a new system object starts getting errors back from system calls. For example, program A runs and exhausts all the objects available to the system; then program B tries to open a file and fails only because the system no longer has the resources to service the request. The only fix at that point is to restart program A so that the leaked resources are reclaimed by the system.

Does the same problem exist on Unix/Linux systems, or are they somehow protected from it?
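For concreteness, here is roughly what I would want to check on a Linux box if such a leak were happening (1234 is a placeholder PID, and lsof may not be installed everywhere):

ls /proc/1234/fd | wc -l        # descriptors currently held by one process
cat /proc/sys/fs/file-nr        # allocated / unused / maximum open files, system-wide
lsof -p 1234 | wc -l            # a similar per-process view via lsof, if available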

+3
2 answers

They are subject to the same problem, but they can be hardened/limited to some extent. There is usually a per-process limit by default, which lessens how much damage a single misbehaving process can do to the system; all it takes to get around that is to start a lot of processes. Some of these limits can be viewed with the ulimit command. Some *nixes can also set per-user limits (see /etc/security/limits.conf on some Linux systems).
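A minimal sketch of inspecting those limits, assuming a typical Linux system (the user name alice in the limits.conf example is made up):

ulimit -a                        # all per-process limits for the current shell
ulimit -n                        # maximum open file descriptors per process
ulimit -u                        # maximum number of user processes
cat /proc/sys/fs/file-max        # system-wide ceiling on open files (Linux)

# An /etc/security/limits.conf entry capping open files for one user might look like:
#   alice  hard  nofile  4096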

But if you remove those limits, or have many processes doing bad things, the overall system limit usually comes down to the available resources (memory).

For example, the classic fork bomb in bash:

:(){ :|:& };: 
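The same one-liner written out as a named function, just for readability (the name bomb is arbitrary; do not actually run it):

bomb() {
    bomb | bomb &    # each call pipes into another call and puts it in the background,
}                    # so the number of processes grows exponentially
bomb                 # until the per-user process limit (ulimit -u) or memory runs out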
+3

Unix/Linux even has dedicated error codes for this situation:

  • ENFILE (23): too many open files in the system (the system-wide open-file table is exhausted)
  • EMFILE (24): too many open files (the calling process has hit its per-process descriptor limit)

The first is a system-wide limit, the second is per-process. In other words, the problem has been known and anticipated since the PDP-11 days (which is where ENFILE comes from): a caller simply gets an error back instead of the system falling over.
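A rough way to see the two limits on Linux, and to provoke EMFILE harmlessly in a throwaway subshell (the value 4 is arbitrary; the exact error text may vary):

ulimit -n                        # per-process descriptor limit; exceeding it gives EMFILE (24)
cat /proc/sys/fs/file-max        # system-wide open-file limit; exhausting it gives ENFILE (23)

( ulimit -n 4                    # lower the soft limit for this subshell only
  exec 3</dev/null 4</dev/null ) # fd 3 opens fine, fd 4 fails: "Too many open files"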

+2

Source: https://habr.com/ru/post/1796496/
