Is there a more efficient way to convert double to float?

I need to convert a multidimensional double array into a jagged float array. Sizes will vary from [2][5] to [6][1024].

I was curious how long it would take to loop through and cast each double down to float, and it is not too bad: about 225 µs for the [2][5] array. Here is the code:

    const int count = 5;
    const int numCh = 2;
    double[,] dbl = new double[numCh, count];
    float[][] flt = new float[numCh][];
    for (int i = 0; i < numCh; i++)
    {
        flt[i] = new float[count];
        for (int j = 0; j < count; j++)
        {
            flt[i][j] = (float)dbl[i, j];
        }
    }

However, if there is a more efficient method, I would like to use it. I should mention that I ONLY timed the two nested loops, not the allocations before them.

After several experiments, I think that 99% of the time is burned in the loops themselves, even without the actual assignment!

+6
4 answers

This will work faster. For small data you should not use the Parallel.For(0, count, (j) => version; in fact it runs much slower on very small data, which is why I commented that section out.

    double* dp0;
    float* fp0;
    fixed (double* dp1 = dbl)
    {
        dp0 = dp1;
        float[] newFlt = new float[count];
        fixed (float* fp1 = newFlt)
        {
            fp0 = fp1;
            for (int i = 0; i < numCh; i++)
            {
                //Parallel.For(0, count, (j) =>
                for (int j = 0; j < count; j++)
                {
                    fp0[j] = (float)dp0[i * count + j];
                }
                //});
                flt[i] = newFlt.Clone() as float[];
            }
        }
    }

This runs faster because indexing a multidimensional double[,] array is genuinely taxed in .NET due to array bounds checking. The newFlt.Clone() is just so that we don't have to pin and unpin new pointers all the time (there is a small overhead in doing that).

You will need to wrap it in an unsafe block and compile with /unsafe.
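A minimal, self-contained sketch of what that looks like (illustrative only, not the code above):

    // Build with: csc /unsafe Example.cs
    // (or enable "Allow unsafe code" under the project's Build settings in Visual Studio)
    using System;

    class Example
    {
        static unsafe void Main()
        {
            double d = 3.14159;
            double* p = &d;            // taking a pointer requires an unsafe context
            float f = (float)(*p);     // the cast itself is a single conversion instruction
            Console.WriteLine(f);
        }
    }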

But really, you should be working with data closer to 5000 x 5000, not 5 x 2. If something takes less than 1000 ms, you need to either add more iterations or increase the data size, because at that scale a slight spike in processor activity can add a lot of noise to the profiling.
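A rough sketch of a measurement at that scale (illustrative sizes and repetition count; the allocations stay outside the timed region):

    using System;
    using System.Diagnostics;

    class BenchSketch
    {
        static void Main()
        {
            // ~300 MB of data in total; assumes a 64-bit process
            const int numCh = 5000, count = 5000;
            double[,] dbl = new double[numCh, count];
            float[][] flt = new float[numCh][];
            for (int i = 0; i < numCh; i++)
                flt[i] = new float[count];          // allocations done before timing

            const int reps = 10;                    // repeat until the total is well over 1000 ms
            Stopwatch sw = Stopwatch.StartNew();
            for (int r = 0; r < reps; r++)
                for (int i = 0; i < numCh; i++)
                    for (int j = 0; j < count; j++)
                        flt[i][j] = (float)dbl[i, j];
            sw.Stop();

            Console.WriteLine("Average per pass: {0:F1} ms", sw.ElapsedMilliseconds / (double)reps);
        }
    }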

+6

In your example, I think you are not so much measuring the double-to-float conversion (which should be a single internal processor instruction) as the array access, which involves several indirections plus the obvious array bounds checks (for the array-index-out-of-range exception).

I would suggest testing without arrays.
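A minimal sketch of that kind of test (hypothetical harness; the extra add keeps the loop from being optimized away):

    using System;
    using System.Diagnostics;

    class CastOnlyTest
    {
        static void Main()
        {
            const int iterations = 100000000;
            double d = 1.2345;
            float f = 0f;

            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                f = (float)d;        // just the conversion, no array access
                d = f + 0.5;         // keep the value changing so the loop is not eliminated
            }
            sw.Stop();

            Console.WriteLine("{0} casts in {1} ms (result {2})", iterations, sw.ElapsedMilliseconds, f);
        }
    }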

0

If you could also use lists in your case, you could take a LINQ-style approach:

    List<List<double>> t = new List<List<double>>();
    // adding test data
    t.Add(new List<double>() { 12343, 345, 3, 23, 2, 1 });
    t.Add(new List<double>() { 43, 123, 3, 54, 233, 1 });
    // creating target
    List<List<float>> q;
    // conversion
    q = t.ConvertAll<List<float>>(
        (List<double> inList) =>
        {
            return inList.ConvertAll<float>((double inValue) => { return (float)inValue; });
        });

Whether it is faster you would have to measure (doubtful), but you could parallelize it, which might make up for it (PLINQ).
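A minimal sketch of the PLINQ variant (assuming the same test lists as above; whether it actually helps at these sizes would again have to be measured):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class PlinqSketch
    {
        static void Main()
        {
            List<List<double>> t = new List<List<double>>();
            t.Add(new List<double>() { 12343, 345, 3, 23, 2, 1 });
            t.Add(new List<double>() { 43, 123, 3, 54, 233, 1 });

            // AsParallel() spreads the per-row conversions across threads;
            // AsOrdered() keeps the rows in their original order.
            List<List<float>> q = t.AsParallel()
                                   .AsOrdered()
                                   .Select(inList => inList.ConvertAll(inValue => (float)inValue))
                                   .ToList();

            Console.WriteLine(q[1][4]);   // 233
        }
    }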

0

I really don't think you can optimize your code much more. One option is to make it parallel, but for your input sizes ([2][5] to [6][1024]) I doubt you would gain much, if anything. In fact, I would not even bother optimizing this part of the code...

Anyway, to optimize this, the only thing I would do (if it fits with what you want to do) is just use rectangular fixed-size arrays instead of jagged ones, even if you waste some memory on it.
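A minimal sketch of what I mean, assuming a rectangular float[,] target instead of the jagged float[][]:

    // Convert a double[,] into a rectangular float[,] of the same shape,
    // instead of a jagged float[][]: simpler layout, at the cost of every
    // row taking the full width.
    static float[,] ToFloat2D(double[,] dbl)
    {
        int numCh = dbl.GetLength(0);
        int count = dbl.GetLength(1);
        float[,] flt = new float[numCh, count];
        for (int i = 0; i < numCh; i++)
            for (int j = 0; j < count; j++)
                flt[i, j] = (float)dbl[i, j];
        return flt;
    }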

0

Source: https://habr.com/ru/post/898739/

