One of the most common mistakes people make when they first discover multithreading is believing that multithreading is a free lunch.
In truth, splitting your operation into several smaller operations that can run in parallel costs extra time. And if your synchronization is poor, your tasks may spend even more time waiting for other tasks to release their locks.
As a result, parallelization is not worth the time or the trouble when each task does only a little work, which is the case with OperationDoWork.
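For contrast, a work item that cheap might look something like the following (a hypothetical sketch; OperationDoWorkCheap is my own name, and the question's original OperationDoWork is not reproduced here). Dispatching each call to the thread pool costs more than the call itself:

// Hypothetical illustration only -- not the original code from the question.
// A work item this cheap costs less than the overhead of scheduling it,
// so Parallel.For spends most of its time on bookkeeping rather than useful work.
private static void OperationDoWorkCheap(int i)
{
    double a = 101.1D * i;   // essentially a single multiplication per call
}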
Edit:
Think about it:
private static void OperationDoWork(int i)
{
    double a = 101.1D * i;
    for (int k = 0; k < 100; k++)
        a = Math.Pow(a, a);   // repeated exponentiation just to give each call substantial CPU work
}
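For reference, the numbers below come from a straightforward Stopwatch comparison along these lines (a minimal sketch; the iteration count N is my assumption and was not stated in the original measurement):

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class Program
{
    // Same modified work item as above.
    private static void OperationDoWork(int i)
    {
        double a = 101.1D * i;
        for (int k = 0; k < 100; k++)
            a = Math.Pow(a, a);
    }

    static void Main()
    {
        const int N = 10_000; // assumed iteration count; the original value is not given

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            OperationDoWork(i);
        sw.Stop();
        Console.WriteLine($"for:          {sw.Elapsed.TotalSeconds:F2} s");

        sw.Restart();
        Parallel.For(0, N, OperationDoWork);
        sw.Stop();
        Console.WriteLine($"Parallel.For: {sw.Elapsed.TotalSeconds:F2} s");
    }
}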
In my test, the plain for loop averages about 5.7 seconds and Parallel.For about 3.05 seconds on my Core2Duo processor (speedup ≈ 1.87).
On my quad-core i7, I average 5.1 seconds with for and 1.38 seconds with Parallel.For (speedup ≈ 3.7).
This modified code scales very well with the number of available physical cores. QED