I am trying to understand how feasible it is to programmatically detect a potential memory leak in a block of managed .NET code. The goal is to isolate the block of code that appears to leak, and then use a standard profiler to determine the actual cause of the leak. In my particular business case, I am loading a third-party class that extends one of my own, and I want to check it for leaks.
The approach that first comes to mind looks something like this:
- Wait for the GC to start.
- Get the current allocated memory from the GC.
- [Run the managed code block.]
- Wait for the GC to start.
- Get the current allocated memory from the GC and subtract the value recorded before the code block ran. In theory, shouldn't the difference be (close to) 0 if all the objects allocated inside the code block were dereferenced and collected as expected? (A rough sketch of these steps is shown after the list.)
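To make the steps concrete, here is a minimal sketch of what I have in mind. The `suspectBlock` delegate and the `LeakCheck` class are just placeholders I made up for the third-party code under test; note that `GC.GetTotalMemory(true)` forces a collection before sampling, which already sidesteps the "wait for the GC" steps above.

```csharp
using System;

static class LeakCheck
{
    // Returns the change in GC-reported heap bytes across the block.
    // In theory this should be close to 0 if nothing allocated inside
    // the block remains reachable afterwards.
    public static long MeasureRetainedBytes(Action suspectBlock)
    {
        long before = GC.GetTotalMemory(forceFullCollection: true);

        suspectBlock();

        long after = GC.GetTotalMemory(forceFullCollection: true);
        return after - before;
    }
}
```

Usage would then be something like `long delta = LeakCheck.MeasureRetainedBytes(() => thirdPartyObject.DoWork());`, where `thirdPartyObject.DoWork()` stands in for the real call.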
Of course, the immediate problem is that garbage collection is not deterministic, so the code might wait ... and wait ... and wait for a collection that never happens at a convenient time. Even setting that aside, the calculated difference used to decide whether a block of code has leaked memory can vary greatly and will not necessarily be accurate, since some objects may simply not have been collected yet at the time of measurement.
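If I decide to force collections rather than wait for them, I assume the usual pattern is something like the following (a sketch only; as far as I understand, even a forced collection does not guarantee that every unreachable object has been reclaimed by the time I sample the memory):

```csharp
// Force a full, blocking collection, drain the finalizer queue,
// then collect again to reclaim objects that were kept alive
// only for finalization.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
```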
Given the above, is this approach my best option for determining, with reasonable accuracy, whether the code block leaks memory? Or are there other approaches that are used in practice? Thanks.