Node.js V8 garbage collection doesn't seem to work

I am confused because my application appears to have a memory leak. It is a TCP server that processes hundreds of thousands of packets per minute. I have reviewed the code, improved it, and profiled the memory.

Everything looks fine: testing locally with low traffic shows that the GC frees memory correctly. On the live, heavy-traffic server, however, this is not the case.

So I tried the --expose-gc flag and added a forced GC call on each shutdown, and now the memory no longer leaks. Or was it never really leaking in the first place?
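Roughly along these lines (a simplified sketch, not the actual application code; the handler body and port are placeholders):

// Start with: node --expose-gc server.js
// Simplified illustration: force a GC when a connection shuts down.
const net = require('net');

const server = net.createServer((socket) => {
  socket.on('data', (packet) => {
    // ... packet processing would go here ...
  });

  socket.on('close', () => {
    // global.gc is only defined when the process runs with --expose-gc
    if (global.gc) {
      global.gc();
    }
  });
});

server.listen(9000); // placeholder port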

So my conclusion is that the GC was never being triggered. My server has 3 GB of memory, and the application can eat up 2.8 GB in just a few hours.

With the forced GC the application no longer leaks; it stays at around 200 MB of memory.

So my question is: why did the GC not run on its own?

+6
2 answers

Source: StrongLoop Blog

The main task of garbage collection is identifying dead regions of memory (unreachable objects, i.e. garbage) that cannot be reached through any chain of pointers from a live object. Once identified, these regions can be reused for new allocations or released back to the operating system.

A memory leak involving event listeners is quite common: the listener cannot be garbage collected because the event emitter, which is itself an object, keeps a reference to it.

In this way your onSuccess callback would reference your request object. However, onSuccess is a single function that is reused as the listener for all request objects, so this alone should not lead to memory accumulation.

To find the real reason why objects linger in your code, I would check what happens at the end of a connection and make sure no references are left alive. Also, in some cases V8 will instantiate a new function object for each use, and that may be what is happening here. If the two are combined, your allocated memory will keep accumulating callback instances.
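For illustration, the kind of pattern described above could look like this (the emitter and handler names are hypothetical, not taken from the question):

// Hypothetical leak pattern: a new listener closure is created per request
// and registered on a long-lived emitter, so the emitter keeps both the
// closure and the request object it captures alive.
const EventEmitter = require('events');
const bus = new EventEmitter(); // long-lived emitter

function handleRequest(request) {
  function onSuccess() {
    console.log('request finished:', request.id);
  }
  // Never removed, so `request` can never be collected. Node even prints a
  // MaxListenersExceededWarning once more than 10 listeners pile up.
  bus.on('success', onSuccess);
}

// A common fix is to register with once() or to remove the listener when the
// request completes:
//   bus.once('success', onSuccess);
//   bus.removeListener('success', onSuccess);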

Objects that survive two minor garbage collections are moved to the "old space". The old space is only collected in full by the GC (a major garbage collection cycle), which runs much less often. I suppose this is a long shot, but it may be that you are allocating faster than the sweep cycles can keep up with, and that when you trigger garbage collection manually a full GC cycle runs.
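One way to check whether the growth really lives in the old space (this check is my own suggestion, not part of the quoted material) is to sample V8's per-space heap statistics, which Node exposes through the built-in v8 module:

// Log new-space vs old-space usage once a minute.
const v8 = require('v8');

setInterval(() => {
  for (const space of v8.getHeapSpaceStatistics()) {
    if (space.space_name === 'new_space' || space.space_name === 'old_space') {
      console.log(
        space.space_name + ':',
        (space.space_used_size / 1024 / 1024).toFixed(1) + ' MB used'
      );
    }
  }
}, 60 * 1000);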

To ensure fast object allocation, short garbage collection pauses, and "no memory fragmentation in V8", a stop-the-world, generational, accurate garbage collector is used. V8 essentially halts program execution while it performs a full garbage collection cycle.

This may explain why memory does not leak when V8 is forced to collect garbage.

+1

I noticed the same problem: garbage collection does not seem to happen. When load-testing my server by repeatedly requesting the same content (rendered by React), the script's memory would be exhausted in an hour or so. The AWS instance has only 1 GB of RAM.

After forcing garbage collection whenever heapUsed exceeded 256 MB, everything runs fine for several hours in a row. heapUsed ranges from roughly 150 MB right after a GC to just over 256 MB.

Meanwhile, heapTotal will eventually stabilize at approximately 340 MB.

I came to the conclusion that there is no memory leak on my server; garbage collection simply does not happen as expected.

Here is what I'm doing:

setInterval(function () {
  // Log heap usage once a minute and force a GC when heapUsed passes 256 MB.
  // Requires the process to be started with --expose-gc.
  let mu = process.memoryUsage();
  console.log('heapTotal:', mu.heapTotal, 'heapUsed:', mu.heapUsed);
  if (mu.heapUsed > 256 * 1024 * 1024) {
    console.log('Taking out the garbage');
    global.gc();
  }
}, 1000 * 60);

It might be better to check the memory after each request.
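A sketch of that per-request variant, assuming a plain Node HTTP server started with --expose-gc (the threshold and the render function below are placeholders):

// Check heap usage after each response instead of on a timer.
// Requires: node --expose-gc server.js
const http = require('http');

const THRESHOLD = 256 * 1024 * 1024; // same 256 MB threshold as above

function renderPage(req) {
  // placeholder for the actual React server-side render
  return '<html><body>Hello</body></html>';
}

const server = http.createServer((req, res) => {
  res.end(renderPage(req));

  if (global.gc && process.memoryUsage().heapUsed > THRESHOLD) {
    console.log('Taking out the garbage');
    global.gc();
  }
});

server.listen(3000); // placeholder port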

0
