Why is the .* operator faster than * for a scalar in MATLAB in some cases?

Consider the following code:

 a = rand(10000); b = rand(10000);
 tic; 2*(a<b); toc;
 tic; 2.*(a<b); toc;

Result:

 Elapsed time is 0.938957 seconds.
 Elapsed time is 0.426517 seconds.

Why is the second case twice as fast as the first case?

Edit: I get the same result for any matrix size, regardless of the order in which the two versions are run, with

 (a<b).*3.56 vs (a<b)*3.56 

for example, but not with

 (a.*b)*2 vs (a.*b).*2 

or

 (a*b)*2 vs (a*b).*2 

There seems to be a connection with the logical array, because I get the same result with

 (a&b)*2 vs (a&b).*2 

Computer: R2015b, Windows 10 x64

+5
3 answers

I suggest a more rigorous performance test. Put your test in a named function so that MATLAB can optimize both code fragments, and run each version several times, keeping the fastest execution time. My guess is that they should take roughly the same time, although I cannot verify this right now with reasonably large matrices. Here is what I would do:

 function product_timing(N)
 % Time 2*(a<b) vs 2.*(a<b), keeping the fastest of 10 runs of each.
 a = rand(N);
 b = rand(N);

 tmin = inf;
 for k = 1:10
     tic;
     res1 = 2*(a<b);    % matrix-style scalar multiply
     t = toc;
     if t < tmin
         tmin = t;
     end
 end
 disp(tmin);

 tmin = inf;
 for k = 1:10
     tic;
     res2 = 2.*(a<b);   % element-wise scalar multiply
     t = toc;
     if t < tmin
         tmin = t;
     end
 end
 disp(tmin);
 end
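
Calling it with the matrix size from the question would then look like, for example:

 product_timing(10000);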

Update

On my R2012b there is no noticeable difference between the two methods. However, as others have pointed out, R2015b with its new execution engine is a different story.

While I am not sure about the answer, let me collect the feedback of @x1hgg1x (comments on both this answer and the question) and @LuisMendo (in chat), just to frame my ignorance:

  • c*3.56 is an integer factor (number of threads?) slower than c.*3.56 (for any scalar) if c is logical , but not if c is uint8 or double (see the timing sketch after this list)
  • the same is true for vectors, not just square matrices
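
A minimal sketch of how one might verify this with timeit; the variable names are mine and the exact ratios will depend on the machine:

 c_logical = rand(10000,1) < 0.5;   % logical column vector
 c_uint8   = uint8(c_logical);      % same values stored as uint8
 c_double  = double(c_logical);     % same values stored as double

 timeit(@() c_logical*3.56)    % matrix-style scalar multiply (the slow case)
 timeit(@() c_logical.*3.56)   % element-wise scalar multiply (the fast case)
 timeit(@() c_uint8*3.56)      % no comparable gap expected here...
 timeit(@() c_uint8.*3.56)
 timeit(@() c_double*3.56)     % ...or here
 timeit(@() c_double.*3.56)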

As indicated on the MATLAB product page:

Run programs faster with the advanced MATLAB® execution engine.

The redesigned architecture uses just-in-time (JIT) compilation of all MATLAB code with a single execution pathway. The engine offers improved language quality and provides a platform for future enhancements.

Specific performance improvements include those made for:

...

Element-wise mathematical operations

The performance of many element-wise mathematical operations has been optimized. These are element-wise arithmetic operations on arrays, such as the one shown below:

>> b = ((a+1).*a)./(5-a);

However, looking at the documentation of .* and * , I do not see much information regarding this issue. A note from Array vs. Matrix Operations regarding array operations such as .* :

If one operand is a scalar and the other is not, then MATLAB applies the scalar to each element of the other operand. This property is known as scalar expansion, because the scalar expands into an array of the same size as the other input; the operation then executes as it normally does with two arrays.
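
A small illustration of scalar expansion (my own example, not from the documentation):

 v = [1 2 3];
 v .* 2    % the scalar 2 is expanded to match v, giving [2 4 6]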

And the documentation of the matrix product * says:

If at least one input is scalar, then A*B is equivalent to A.*B and is commutative.

As we can see, the equivalence of A*B and A.*B is debatable performance-wise. They are mathematically equivalent, but something strange is going on.
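
The results themselves are identical for a scalar and a logical array; only the timing differs. A quick check of my own:

 a = rand(1000); b = rand(1000);
 c = a < b;
 isequal(2*c, 2.*c)   % logical 1 (true): same values, different speed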

Due to the above notes and the fact that the performance difference only occurs for logical arrays, I would consider this an undocumented feature. I would have thought it was due to logical occupying only 1 byte, but the speed-up does not show up with uint8 . I believe that since logical really carries only one bit of information, some internal optimization is possible. This still does not explain why mtimes fails to do the same, and it is surely related to the internal workings of times vs mtimes .
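
For reference, logical and uint8 arrays use the same amount of memory per element, so storage size alone cannot be the explanation (a quick check; variable names are mine):

 c = true(1000);   % 1000x1000 logical
 u = uint8(c);     % same values stored as uint8
 whos c u          % both report 1000000 bytes, i.e. 1 byte per element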

One thing is certain: mtimes does not simply fall back to times for scalar operands (maybe it should?). Since R2012b does not show the effect at all, I believe that the optimized array operations of the new execution engine mentioned above handle logical arrays separately, enabling the special case scalar.*logical_array , while the same optimization is missing from mtimes .
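
The two code paths can also be compared by calling the underlying functions directly, since 2*c dispatches to mtimes and 2.*c to times. A sketch of my own; whether the JIT treats the function-call form exactly like the operator form is an assumption:

 a = rand(10000); b = rand(10000);
 c = a < b;
 timeit(@() mtimes(2, c))   % the code path behind 2*c
 timeit(@() times(2, c))    % the code path behind 2.*c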

+6

For background, the * operator is a matrix operator and .* is an element-wise operator (see http://www.mathworks.com/help/matlab/matlab_prog/array-vs-matrix-operations.html ).

In your test, a and b are random 10000x10000 matrices, which are compared to produce a 10000x10000 logical matrix that you then want to scale using these two approaches. Short of a MathWorks developer telling us what is going on, I think we can only speculate about what makes the difference (and answer your question).

Since we are not supposed to speculate in answers, I will officially stop here. Still, it is an interesting thing you have stumbled upon.

So, unofficially, I suspect that you have uncovered some additional overhead that MATLAB incurs to handle matrix operations with the * operator, overhead that is short-circuited or bypassed in the element-wise operator.

Consider the following

 c = a<b;
 tic; d*(c); toc;    % case 1
 tic; d.*(c); toc;   % case 2

where a and b are defined by your code above, and d is left as an unknown value for the explanation that follows.

When multiplying matrices, the number of columns of the first matrix must match the number of rows of the second matrix. c will be a 10000x10000 matrix, so d must have 10000 columns (i.e. size(d,2)==10000 ), or it must be a scalar. In the second case, d must be a scalar or the same size as c (or an error is thrown).

In addition, there may be some extra preparation (a little, but some) in how the matrices are ordered for the multiplication, so that the sums for each position of the final product come out correctly. Here we know that d==2 is a scalar, so the multiplications can be done in place. However, we know this only because we can see it. I do not think the matrix multiplication algorithm treats this as a corner case, i.e. checks whether d is a scalar; if it did, it could/should simply call the .* routine. And maybe that is exactly what happens, and we just pay some overhead at the call-stack level. Unofficially, of course.
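
A small sketch of the dimension rules described above (my own illustration, using a smaller matrix for brevity):

 c = rand(100) < rand(100);   % a 100x100 logical matrix
 d = rand(3, 100);            % 3x100: valid for d*c, but not for d.*c
 size(d*c)                    % matrix product: ans = [3 100]
 % d.*c would throw a size-mismatch error, since d is neither a scalar
 % nor the same size as c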

0

Yes, ". *" Is faster by 2015b between the scalar and the logical array:

 a = rand(10000);
 b = rand(10000);
 timeit(@() 2*a)
 timeit(@() 2.*a)
 timeit(@() 2.*(a>b))
 timeit(@() 2*double(a>b))
 timeit(@() 2*(a>b))
-1
