I am working on a streaming Twitter client. After 1-2 days of continuous running, memory usage climbs above 1.4 GB (32-bit process), and soon after it reaches that amount I get an out-of-memory exception in code that is essentially this (this code will hit the error in under 30 seconds on my machine):
    while (true)
    {
        Task.Factory.StartNew(() =>
        {
            dynamic dyn2 = new ExpandoObject();

            // Build a large, mostly-unique string and assign it to a dynamic member;
            // the dynamic member access is what exercises the DLR binder machinery.
            // (In the real client the text comes from the Twitter stream.)
            dyn2.text = new string('x', 500000) + DateTime.Now.Ticks +
                        Thread.CurrentThread.ManagedThreadId;
        });
    }
I have profiled it, and it is definitely because of classes down in the DLR (from memory - I do not have my detailed notes here): xxRuntimeBinderxx and xxAggregatexx.
This answer from Eric Lippert (Microsoft) seems to indicate that I am creating expression-parsing objects behind the scenes that never get GC'd, even though my code keeps no references to them.
If this is the case, is there any way in the code above to prevent or reduce it?
My fallback is to eliminate the dynamic usage, but I would prefer not to.
thanks
Update:
12/14/12:
ANSWER:
To get this specific example to free up its tasks, I had to yield (Thread.Sleep(0)), which then allowed the GC to process the completed tasks. I am guessing that a message/event loop was not being allowed to run in this particular case.
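For the stripped-down repro above, the change amounts to yielding at the end of each loop iteration; a minimal sketch, not the exact code:

    while (true)
    {
        Task.Factory.StartNew(() =>
        {
            dynamic dyn2 = new ExpandoObject();
            dyn2.text = new string('x', 500000) + DateTime.Now.Ticks;   // same dynamic work as above
        });

        Thread.Sleep(0);   // yield the posting thread so completed tasks can be reclaimed
    }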
In the actual code I was using (TPL Dataflow), I was not calling Complete() on the blocks because they were meant to be a never-ending stream of data - the task would receive Twitter messages for as long as Twitter kept sending them. In that model there was never a reason to tell any of the blocks that they were done, because they would never BE done while the application was running.
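The pattern looked roughly like this (the ActionBlock<string>, Handle(), and ReadNextTweet() names are placeholders, not the real pipeline):

    using System.Threading.Tasks.Dataflow;

    var tweets = new ActionBlock<string>(json => Handle(json));

    // Never-ending producer: Complete() is never called because
    // the stream has no natural end point.
    while (true)
    {
        tweets.Post(ReadNextTweet());
    }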
Unfortunately, it does not look like the Dataflow blocks were designed to run for very long periods or to handle untold numbers of items, because they actually keep a reference to everything that is sent into them. If I am wrong, please let me know.
So the workaround is to periodically (based on your memory usage - mine was every 100,000 Twitter messages) release the blocks and set them up again.
Under this scheme, memory consumption never goes over 80 MB, and after recycling the blocks and forcing a GC for good measure, the gen2 heap goes back down to 6 MB and all is well again.
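A rough sketch of that recycling approach; the ActionBlock<string>, Handle(), and the 100,000-message threshold are placeholders for whatever your real pipeline uses, not my exact code:

    using System;
    using System.Threading.Tasks.Dataflow;

    class TweetPipeline
    {
        private ActionBlock<string> _block = CreateBlock();
        private long _received;

        // Called once per incoming tweet.
        public void OnTweet(string json)
        {
            _block.Post(json);

            if (++_received % 100000 == 0)      // tune the threshold to your memory usage
            {
                _block.Complete();              // finish the current block
                _block.Completion.Wait();       // let any queued messages drain
                _block = CreateBlock();         // swap in a fresh block
                GC.Collect();                   // forced GC "for good measure"
            }
        }

        private static ActionBlock<string> CreateBlock()
        {
            // Handle() stands in for the real per-message processing.
            return new ActionBlock<string>(json => Handle(json));
        }

        private static void Handle(string json) { /* process the tweet */ }
    }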
10/17/12:
- "It does nothing useful." This example is just to let you quickly create a problem. It fell off several hundred lines of code that have nothing to do with the problem.
- "An endless loop that creates a task and, in turn, creates objects": Remember - it just quickly demonstrates the problem - the actual code sits there, waiting for more streaming data. In addition, looking at the code, all objects are created inside the Action <> lambda in the task. Why is this not cleared (after all) after it leaves the sphere of action? The problem also is not to do it too fast - it takes more than a day for the actual code to arrive at an exception from memory - it just makes it fast enough to try to do something.
- "Are freedoms guaranteed?" An object is an object, right? I understand that the scheduler just uses the threads in the pool and the lambda that it executes will be thrown after execution independently.