Windows has kernel objects managed by the system: events, file handles, windows, timers, etc. These are not unlimited; all programs in the system combined can create no more than roughly 50,000 such objects (I'm not sure of the exact figure, but it is not essential to this question).
So, if a program runs for a very long time, creates many objects, and never releases them (just like a memory leak, except that system objects leak here), the system eventually runs out of objects, and other programs that try to do anything requiring the creation of a new system object start receiving errors from system functions. For example, program A starts and uses up all the objects available to the system; then program B tries to open a file and fails, only because the system lacks the resources to service the request. The only remedy at that point is to terminate program A, so that the system can reclaim the leaked resources.
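As an aside, this kind of exhaustion is easy to reproduce for file descriptors on a Unix system. A minimal Python sketch (the lowered limit of 64 is an arbitrary value chosen purely for the demonstration):

```python
import os
import resource

# Lower the soft limit on open file descriptors for THIS process only,
# so exhaustion can be demonstrated safely (64 is an arbitrary choice).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

leaked = []  # simulate a program that never closes its handles
try:
    while True:
        leaked.append(open(os.devnull))  # each open() consumes a descriptor
except OSError as e:
    print(f"open() failed after {len(leaked)} extra descriptors: {e.strerror}")

# Closing the descriptors (or simply letting the process exit) releases
# them again; since the limit is per-process, no other process was affected.
for f in leaked:
    f.close()
```

Note that the limit set here is per-process, which already hints at the difference from a single system-wide pool.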
Does the same problem exist on Unix/Linux systems, or are they somehow protected from it?