Here is a C# code snippet that applies an operation to each row of a matrix of doubles (say 200x200):
for (int i = 0; i < 200; i++)
{
    double result = process(row[i]);
    DoSomething(result);
}
process is a static method. I have a Core i5 processor running Windows XP, and I am using .NET Framework 3.5. To improve performance, I tried to process each row on a separate thread using asynchronous delegates, so I rewrote the code as follows:
List<Func<double[], double>> myMethodList = new List<Func<double[], double>>();
List<IAsyncResult> myCookieList = new List<IAsyncResult>();
for (int i = 0; i < 200; i++)
{
    Func<double[], double> myMethod = process;
    IAsyncResult myCookie = myMethod.BeginInvoke(row[i], null, null);
    myMethodList.Add(myMethod);
    myCookieList.Add(myCookie);
}
for (int j = 0; j < 200; j++)
{
    double result = myMethodList[j].EndInvoke(myCookieList[j]);
    DoSomething(result);
}
This code is executed for 1000 matrices in a single run. When I tested it, I saw no performance improvement at all! This leads to my question: in which cases does multithreading actually improve performance, and is the logic of my code sound?
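For context, here is a minimal, self-contained sketch of what I mean by "executed for 1000 matrices in a single run" and how I measure it. The matrix contents and the body of process are dummies I made up for this sketch; only the call pattern (1000 matrices, 200 rows each) matches my real test, where the serial loop is swapped for the BeginInvoke/EndInvoke version for the comparison.

using System;
using System.Diagnostics;

class TimingSketch
{
    // Stand-in for the real static method that reduces one row to a double.
    static double process(double[] row)
    {
        double sum = 0;
        for (int k = 0; k < row.Length; k++) sum += row[k] * row[k];
        return sum;
    }

    // Stand-in for the real consumer of each result.
    static void DoSomething(double result) { }

    static void Main()
    {
        const int matrixCount = 1000, size = 200;

        // Build 1000 dummy 200x200 matrices as jagged arrays of rows.
        var matrices = new double[matrixCount][][];
        for (int m = 0; m < matrixCount; m++)
        {
            matrices[m] = new double[size][];
            for (int i = 0; i < size; i++)
                matrices[m][i] = new double[size];
        }

        var sw = Stopwatch.StartNew();
        foreach (double[][] row in matrices)   // "row" named as in the snippets above: row[i] is row i
        {
            // Per-matrix loop from the question (serial version shown here).
            for (int i = 0; i < size; i++)
            {
                double result = process(row[i]);
                DoSomething(result);
            }
        }
        sw.Stop();
        Console.WriteLine("Total: {0} ms", sw.ElapsedMilliseconds);
    }
}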