Hey.
I have a .NET application that runs a parallel simulation. It sweeps over parameters, and each parameter gets its own Parallel.ForEach loop. I usually set MaxDegreeOfParallelism to one on all but one of the loops, to keep memory requirements down, since each parameter can take more values than I have available cores (4).
The application is completely CPU-bound and only does I/O at the end, to write out the results. I have only one lock, on the data structure that collects the results, but it is taken very rarely (once every few seconds). No matter how far I push parallelism (via MaxDegreeOfParallelism on the loops), I always get average CPU usage of around 50% (~2 cores).
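For reference, here is a simplified sketch of the loop structure (names and the work function are placeholders, not my real code):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Simulation
{
    static readonly object resultsLock = new object();
    static readonly List<double> results = new List<double>();

    static void Run(double[] outerParams, double[] innerParams)
    {
        // Outer loop serialized to keep memory usage down.
        var serial = new ParallelOptions { MaxDegreeOfParallelism = 1 };
        // Inner loop is the one allowed to use all cores.
        var full = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };

        Parallel.ForEach(outerParams, serial, outer =>
        {
            Parallel.ForEach(innerParams, full, inner =>
            {
                double r = Compute(outer, inner); // CPU-bound work
                lock (resultsLock)                // taken only once per result
                {
                    results.Add(r);
                }
            });
        });
    }

    static double Compute(double a, double b) => a * b; // stand-in for the real work
}
```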
I would like to know what interferes with higher CPU utilization. My hunches: 1) some call into a .NET library that is synchronized, thus serializing the calls; 2) the GC kicking in to clean up (the application creates a lot of garbage).
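Regarding hunch 2: as far as I know the app currently runs with the default workstation GC. One thing I could try as part of the investigation is switching to the server GC in App.config (sketch of the config change, assuming a standard App.config):

```xml
<!-- App.config: enable the server GC (default is workstation GC for non-hosted apps) -->
<configuration>
  <runtime>
    <gcServer enabled="true"/>
  </runtime>
</configuration>
```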
Any ideas how to go about the investigation?
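As a first sanity check I can at least confirm from code which GC mode the process is actually running under:

```csharp
using System;
using System.Runtime; // GCSettings

class GcInfo
{
    static void Main()
    {
        // Reports whether the server GC is active and the current latency mode.
        Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");
        Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");
    }
}
```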
Many thanks.