I have three main processing threads, each of which runs Parallel.ForEach over the values of a ConcurrentDictionary. The dictionaries vary in size from 1,000 to 250,000 elements.
    TaskFactory factory = new TaskFactory();

    Task t1 = factory.StartNew(() => { Parallel.ForEach(dict1.Values, item => ProcessItem(item)); });
    Task t2 = factory.StartNew(() => { Parallel.ForEach(dict2.Values, item => ProcessItem(item)); });
    Task t3 = factory.StartNew(() => { Parallel.ForEach(dict3.Values, item => ProcessItem(item)); });

    t1.Wait();
    t2.Wait();
    t3.Wait();
I compared the total runtime of this construct against simply running the three Parallel.ForEach calls one after another on the main thread, and the sequential version was significantly faster (roughly a 5x reduction in runtime).
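One likely explanation is thread oversubscription: each of the three Parallel.ForEach loops tries to use every core by default, so running them concurrently spawns roughly three times as many worker threads as cores and the scheduler thrashes between them. Below is a minimal sketch (the dictionary contents and ProcessItem body are hypothetical stand-ins, assuming the work is CPU-bound) of capping each loop with ParallelOptions.MaxDegreeOfParallelism so the three loops share the cores instead of competing for all of them:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class Demo
{
    static int processed;

    // Hypothetical stand-in for the real CPU-bound ProcessItem.
    static void ProcessItem(decimal item)
    {
        Interlocked.Increment(ref processed);
    }

    static ConcurrentDictionary<int, decimal> MakeDict(int count) =>
        new ConcurrentDictionary<int, decimal>(
            Enumerable.Range(0, count).ToDictionary(i => i, i => (decimal)i));

    static void Main()
    {
        var dict1 = MakeDict(1000);
        var dict2 = MakeDict(1000);
        var dict3 = MakeDict(1000);

        // Cap each loop so the three loops split the cores between them
        // instead of each loop trying to grab every core at once.
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount / 3)
        };

        Task[] tasks =
        {
            Task.Run(() => Parallel.ForEach(dict1.Values, options, ProcessItem)),
            Task.Run(() => Parallel.ForEach(dict2.Values, options, ProcessItem)),
            Task.Run(() => Parallel.ForEach(dict3.Values, options, ProcessItem))
        };
        Task.WaitAll(tasks);

        Console.WriteLine(processed); // total items processed across all three loops
    }
}
```

Note this only mitigates the contention; if the work is purely CPU-bound, three concurrent loops cannot finish faster than one loop that already saturates every core.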
My questions:
- Is there something wrong with the approach above? If so, what is it and how can it be improved?
- What explains the difference in runtimes?
- What is a good way to debug and analyze this situation?
EDIT: To clarify the situation once more: I am mocking client calls to a WCF service, each of which arrives on its own thread (hence the tasks). I also tried ThreadPool.QueueUserWorkItem instead of tasks, with no performance improvement. The objects in the dictionaries have 20 to 200 properties (decimals and strings only), and there is no I/O activity.
I solved the problem by queuing the processing requests in a BlockingCollection and processing them one at a time.
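That approach can be sketched as a producer/consumer pair: producers (the mocked WCF calls) enqueue batches, and a single consumer drains them with GetConsumingEnumerable, so only one Parallel.ForEach runs at any moment. The batch type and ProcessItem body below are hypothetical placeholders, assuming the real requests carry collections of items:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class QueuedProcessing
{
    // Hypothetical stand-in for the real per-item work.
    static void ProcessItem(int item) { }

    static void Main()
    {
        var requests = new BlockingCollection<int[]>();
        int handled = 0;

        // Single consumer: processes one batch at a time, so parallel
        // loops never compete with each other for cores.
        var consumer = Task.Run(() =>
        {
            foreach (var batch in requests.GetConsumingEnumerable())
            {
                Parallel.ForEach(batch, item => ProcessItem(item));
                handled += batch.Length; // only the consumer thread touches this
            }
        });

        // Producers (e.g. the mocked WCF client calls) just enqueue.
        requests.Add(new[] { 1, 2, 3 });
        requests.Add(new[] { 4, 5, 6, 7 });
        requests.CompleteAdding(); // signal that no more batches will arrive

        consumer.Wait();
        Console.WriteLine(handled); // total items handled
    }
}
```

Each batch still gets the full parallelism of the machine; the queue simply serializes the batches so they no longer oversubscribe the thread pool.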