# C#: Decreasing performance in a cyclic process, restored by process termination

Overview
I am running into a performance slowdown in a loop that passes through a calculator class many times. Each iteration takes about 3 minutes at the beginning, but iterations take more and more time as their count grows (up to 30+ minutes per iteration within the same process). I have to stop the program and restart it where I left off in order to return to normal speed (3 minutes per iteration).

WHAT AM I DOING
I have a scientific application that evaluates sets of parameters in a process. I have N scenarios (i.e., parameter combinations) to test against a set of experiments. A calculator class takes a parameter set as input, processes it under each of T possible XP (experimental) conditions, and stores the output in ORM objects that are persisted to the database after each iteration. In other words, each of the N parameter combinations is run through the calculator T times.

Parameter combinations: Params Set 1, Params Set 2, ..., Params Set N
Experimental sets: XP Set 1, XP Set 2, ..., XP Set T

So I have N×T combinations, with N and T around 256 each, which gives 65,000+ iterations.
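The sweep above can be sketched as a simple cross product; this is an illustration only (the set names are placeholders, not the author's code):

```python
# Hypothetical sketch of the N x T sweep described above.
N, T = 256, 256

param_sets = [f"Params Set {i}" for i in range(1, N + 1)]
xp_sets = [f"XP Set {j}" for j in range(1, T + 1)]

# One iteration per (parameter set, XP set) pair.
iterations = [(p, x) for p in param_sets for x in xp_sets]
print(len(iterations))  # 65536, i.e. the "65000+ iterations" above
```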

HOW DO I DO IT
I have a GUI for setting up parameter sets and starting background workers (one per parameter combination). Each background worker loads the first of the T XP sets, runs its parameter set through it, moves on to the next XP set, and so on. After each individual iteration (i.e., each N×T cell) the calculator generates a report and fires an event that populates .NET LINQ-to-SQL ORM objects (AgileFX) and stores them in the SQL Server database.
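The worker/event structure described above can be sketched roughly as follows. This is an illustrative Python outline, not the author's C# code; `Calculator`, `on_report`, and `worker` are invented names standing in for the real classes:

```python
# Illustrative sketch of the loop: one worker per parameter set,
# iterating over all XP sets and firing a "report ready" event
# (a list of callbacks) after each calculation.
class Calculator:
    def __init__(self):
        self.on_report = []          # event: subscribed callbacks

    def run(self, params, xp):
        result = (params, xp)        # stand-in for the real computation
        for callback in self.on_report:
            callback(result)         # e.g. populate ORM objects and save

def worker(params, xp_sets, calc):
    for xp in xp_sets:               # proceed through XP sets in order
        calc.run(params, xp)

calc = Calculator()
reports = []
calc.on_report.append(reports.append)
worker("Params Set 1", ["XP Set 1", "XP Set 2"], calc)
print(len(reports))  # 2: one report per iteration
```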

PROBLEM
The process runs fine for the first 30 minutes, then slowly starts to drift: each iteration takes longer and longer (it feels like a memory leak or something similar...).

HINT
Oddly enough, the experimenter observed very aptly that the processing time increases linearly: each run takes 3 minutes longer than the previous one, which boils down to an arithmetic progression (T(n+1) = T(n) + 3 mn). I am on a 12-core Intel machine with 24 GB of RAM.
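A linear per-iteration slowdown means the total runtime grows quadratically, which is why the run becomes unusable so fast. A small sketch of the arithmetic, assuming the first iteration takes 3 minutes as stated above:

```python
# Tn+1 = Tn + 3 (minutes): an arithmetic progression.
def iteration_time(n, t1=3, step=3):
    """Time in minutes of the n-th iteration (1-indexed)."""
    return t1 + step * (n - 1)

def total_time(n, t1=3, step=3):
    """Total runtime: n*t1 + step*n*(n-1)/2, i.e. quadratic in n."""
    return n * t1 + step * n * (n - 1) // 2

print(iteration_time(10))  # 30 -> the 30-minute regime after only ~10 runs
print(total_time(10))      # 165 minutes for just the first 10 iterations
```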

2 answers

I think I have found part of the problem, but it did not solve the issue completely:

Objects were sent to the ORM through delegates registered on event listeners, so each computation thread was kept "alive" in memory even after it had finished. As a colleague put it: "Even if you have moved out, as long as I still have your address in my registers, you still live in the neighborhood."
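This is the classic event-handler leak: the long-lived event source holds a strong reference to every subscriber, so finished workers cannot be collected until they unsubscribe. A minimal Python illustration of the pattern and the fix (all names here are hypothetical, not the author's code):

```python
import gc

# A long-lived event keeps every subscriber reachable.
class Event:
    def __init__(self):
        self.handlers = []
    def subscribe(self, h):
        self.handlers.append(h)
    def unsubscribe(self, h):
        self.handlers.remove(h)

class Worker:
    def __init__(self, event):
        self.event = event
        event.subscribe(self.on_report)   # strong ref: event -> worker
    def on_report(self, data):
        pass                              # stand-in for the ORM population
    def finish(self):
        # The fix: detach the handler when the worker completes,
        # so the event no longer pins the worker in memory.
        self.event.unsubscribe(self.on_report)

event = Event()
w = Worker(event)
w.finish()      # without this, `event` keeps `w` alive for the whole run
del w
gc.collect()
print(len(event.handlers))  # 0 -> nothing left pinning the worker
```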

By the way, the Performance Wizard in VS2010 is excellent: extremely insightful and useful for monitoring overall memory behavior with great precision.

EDIT: PROBLEM SOLVED
The class responsible for firing the background workers kept some data in a tracker object that grew continuously and was never flushed, accumulating more and more entries. I noticed this by carefully watching per-object memory usage in the VS 2010 Performance Wizard.
My advice: keep a clear picture of your objects' lifecycles and memory usage, although this can get hard when the application is large and complex.
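One way to avoid this class of bug is to give any per-iteration tracking structure a hard bound (or flush it periodically) instead of letting it grow for the entire run. A hypothetical sketch of that idea, not the original tracker:

```python
from collections import deque

# Bound the tracking structure so it cannot grow without limit.
class Tracker:
    def __init__(self, max_entries=1000):
        # deque with maxlen drops the oldest entry automatically.
        self.entries = deque(maxlen=max_entries)

    def record(self, item):
        self.entries.append(item)

t = Tracker(max_entries=1000)
for i in range(65536):          # one record per iteration of the N x T sweep
    t.record(i)
print(len(t.entries))  # 1000, not 65536
```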


A quick suggestion: could you solve your problem with memoization, avoiding re-computing results that should already be known?
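Memoization caches results keyed by the inputs, so a repeated (parameter set, XP set) combination is returned instantly instead of being recomputed. A minimal sketch of the suggestion, with a stand-in for the real computation:

```python
from functools import lru_cache

# Cache results by input parameters; repeated calls hit the cache.
@lru_cache(maxsize=None)
def calculate(params, xp):
    return hash((params, xp))   # stand-in for the expensive computation

calculate("Params Set 1", "XP Set 1")   # computed
calculate("Params Set 1", "XP Set 1")   # served from the cache
print(calculate.cache_info().hits)      # 1
```

Note that this only helps if the same input combinations actually recur; a sweep where every (params, xp) pair is unique gains nothing from caching, and the cache itself then becomes another growing structure.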

Also remember that the garbage collector cannot collect an object as long as anything still holds a reference to it!
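This point can be demonstrated directly with a weak reference, which observes an object without keeping it alive (a Python illustration of the general GC rule, not .NET specifics):

```python
import gc
import weakref

# An object stays alive as long as any strong reference exists;
# only after the last one is dropped can it be reclaimed.
class Result:
    pass

obj = Result()
probe = weakref.ref(obj)       # weak ref: does not keep obj alive
registry = [obj]               # strong ref, e.g. a listener list

del obj
gc.collect()
print(probe() is None)  # False: the registry still pins the object

registry.clear()
gc.collect()
print(probe() is None)  # True: now it has been collected
```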


Source: https://habr.com/ru/post/1332336/

