Parallel.ForEach has its own MaxDegreeOfParallelism
OK, the heuristic built into Parallel.ForEach is very prone to spawning a huge number of threads over time when work items block (if your work items block for 10 ms each, you end up with hundreds of threads within an hour or so - I measured it). A really terrible design flaw; don't try to imitate it.
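The usual workaround for the thread-injection heuristic is to cap the DOP explicitly via ParallelOptions. A minimal sketch (the value 4 is an arbitrary assumption, not a recommendation):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class Demo
{
    static void Main()
    {
        // Pin the DOP explicitly instead of relying on the default heuristic.
        // 4 is a hypothetical value; the right number must be found empirically.
        var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

        Parallel.ForEach(Enumerable.Range(1, 8), options, item =>
        {
            // A blocking delay like this is exactly the kind of work item
            // that makes the default heuristic keep injecting threads.
            Thread.Sleep(10);
            Console.WriteLine(item);
        });
    }
}
```

Note that MaxDegreeOfParallelism is only an upper bound; the runtime may still use fewer workers.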
With parallel IO, there is no alternative to determining the correct value empirically. That is why TPL is so bad at it. For example, magnetic disks performing sequential IO want a DOP of 1, while an SSD can serve random reads almost without limit (100?).
A remote web service does not tell you the correct DOP either. Not only do you need to test; you need to ask the owner whether it is acceptable to bombard the service with enough requests to potentially overload it.
Would setting it to 1000 be overkill?
Then you do not need this tool at all: just create all the tasks and then wait for them all. But 1000 is most likely the wrong DOP, because it overloads the database without any benefit.
Here it is used to partition the work across a minimal number of asynchronous tasks.
Another terrible feature of Parallel.For: on machines with few processor cores it can fall back to a low DOP for small workloads. Awful API. Do not use it with IO. (I use AsParallel, which lets you set the exact DOP, not just a maximum DOP.)
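For reference, the PLINQ alternative mentioned here looks like this; a minimal sketch (the DOP of 8 and the squaring workload are assumptions for illustration):

```csharp
using System;
using System.Linq;

class PlinqDemo
{
    static void Main()
    {
        // WithDegreeOfParallelism fixes the number of PLINQ workers, which in
        // practice gives you the requested DOP rather than a loose upper bound
        // chosen by a heuristic. 8 is an arbitrary, empirically-tuned value.
        var results = Enumerable.Range(1, 20)
            .AsParallel()
            .WithDegreeOfParallelism(8)
            .Select(i => i * i)   // stand-in for real per-item work
            .ToList();

        Console.WriteLine(results.Count); // 20
    }
}
```

Note that PLINQ is still CPU-oriented; for IO-bound work the async throttling pattern discussed later in this thread is usually the better fit.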
Because I want as many IO-bound database queries in flight as possible.
Why? Bad plan.
Btw, the method you posted here is good, and I use it too. I wish it were part of the framework. This exact method is the answer to the question asked ten times a week on SO ("How can I process 100,000 items in parallel asynchronously?").
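The method referred to is not reproduced in this thread, but the standard answer to that SO question is a SemaphoreSlim-based throttle over Task.WhenAll. A sketch under assumed names and values (ForEachAsync, a DOP of 10, and the Task.Delay workload are all illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ThrottleDemo
{
    // Run an async body over all items, with at most maxDop in flight at once.
    public static async Task ForEachAsync<T>(
        IEnumerable<T> items, int maxDop, Func<T, Task> body)
    {
        using var throttle = new SemaphoreSlim(maxDop);
        var tasks = items.Select(async item =>
        {
            await throttle.WaitAsync();   // block entry once maxDop are running
            try { await body(item); }
            finally { throttle.Release(); }
        });
        await Task.WhenAll(tasks);
    }

    static async Task Main()
    {
        int done = 0;
        await ForEachAsync(Enumerable.Range(0, 100), maxDop: 10, async i =>
        {
            await Task.Delay(10);               // stand-in for an IO-bound query
            Interlocked.Increment(ref done);
        });
        Console.WriteLine(done); // 100
    }
}
```

Unlike Parallel.ForEach, this never blocks threads while waiting on IO, and the DOP is exactly what you ask for.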