I am creating an image processing application in C# and am trying to take advantage of my multi-core system by doing the bulk of the processing in parallel. Specifically, I have a Parallel.For loop that goes through each pixel and applies a Gaussian filter. For the calculations alone I see a significant speedup, as expected, but the problem arises when I try to save the results. The code looks something like this:
Parallel.For(/* various loop parameters */ i =>
{
    // processing and number crunching
    // significant speedup here
    . . .
    bitmap.SetPixel(x, y, value);
});
This throws a runtime exception because several threads end up trying to write to the bitmap object at the same time. I locked the object as follows:
lock(bitmap) bitmap.SetPixel(x, y, value);
But this ends up slower than the single-threaded version. Profiling then showed that the SetPixel call accounts for about 90% of the execution time of each loop iteration, which means the individual threads spend most of their time waiting for the lock on the bitmap object, and the added threading overhead makes the whole thing slower.
So what I want to know is: is there a way for multiple threads to write to the same object at the same time? I make sure each thread works on a different part of the image, so no race condition would be introduced by this. Is there a way to suppress the exception that is being thrown and say, "I know what I'm doing is unsafe, but I want to do it anyway"? Something like the Concurrency::combinable class seems to fit the bill, but it is C++-only.
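To illustrate the disjoint-writes point: with a plain array standing in for the bitmap's pixel buffer (the names and the filler computation below are made up for this sketch, not from the original code), Parallel.For can write concurrently without any lock, because each iteration touches only its own elements:

```csharp
using System;
using System.Threading.Tasks;

public class DisjointWrites
{
    // Fill a width*height pixel buffer in parallel; each thread
    // writes only to its own rows, so the writes are disjoint and
    // no lock is needed.
    public static int[] Render(int width, int height)
    {
        int[] pixels = new int[width * height];
        Parallel.For(0, height, y =>
        {
            for (int x = 0; x < width; x++)
            {
                int value = (x + y) % 256;      // stand-in for the filter math
                pixels[y * width + x] = value;  // disjoint index: safe without a lock
            }
        });
        return pixels;
    }

    public static void Main()
    {
        int[] pixels = Render(256, 256);
        Console.WriteLine(pixels[256 + 1]);  // pixel (1, 1) -> 2
    }
}
```

The difficulty is that Bitmap.SetPixel itself is not safe to call concurrently, regardless of which pixels are touched. One common workaround (for GDI+ bitmaps; verify the pixel format for your case) is Bitmap.LockBits, which exposes the raw pixel buffer so you can write to it like the array above and unlock it once at the end.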