I suggest a more rigorous performance test. Put the test in a named function so that MATLAB can optimize both code fragments, and run each of them several times, keeping the fastest execution time. My guess is that they should take roughly the same time, although I can't check right now with reasonably sized matrices. Here is what I would do:
function product_timing(N)
a = rand(N);
b = rand(N);

% time 2*(a<b), keeping the fastest of 10 runs
tmin = inf;
for k = 1:10
    tic;
    res1 = 2*(a<b);
    t = toc;
    if t < tmin
        tmin = t;
    end
end
disp(tmin);

% time 2.*(a<b), keeping the fastest of 10 runs
tmin = inf;
for k = 1:10
    tic;
    res2 = 2.*(a<b);
    t = toc;
    if t < tmin
        tmin = t;
    end
end
disp(tmin);
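For example, you would call it with some matrix size (the size below is my arbitrary choice):

>> product_timing(5000)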
Update
On my R2012b there is no noticeable difference between the two methods. However, as others have pointed out, R2015b with its new execution engine does show one.
While I'm not sure about the answer, let me summarize the feedback from @x1hgg1x (comments on both this answer and the question) and @LuisMendo (in chat), if only to remedy my ignorance:
c*3.56 is an integer factor (the number of threads?) slower than c.*3.56 (with any scalar) if c is logical, but not if c is uint8 or double; the same holds for vectors, not just square matrices.
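A minimal sketch for checking this claim yourself (variable names and sizes are my own choice; the relative slowdown will vary by machine and release):

c = rand(2000) < 0.5;                        % logical matrix
tic; for k = 1:20, r1 = c*3.56;  end; toc    % mtimes with a scalar: reportedly slow for logical
tic; for k = 1:20, r2 = c.*3.56; end; toc    % times with a scalar: fast
d = uint8(c);                                % the same data stored as uint8
tic; for k = 1:20, r3 = d*3.56;  end; toc    % no comparable gap reported for uint8
tic; for k = 1:20, r4 = d.*3.56; end; toc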
As stated on the MATLAB product page:
Run programs faster with the advanced MATLAB® execution engine.
The improved architecture uses just-in-time (JIT) compilation of all MATLAB code with a single execution pathway. The engine offers improved language quality and provides a platform for future enhancements.
Specific performance improvements include those made for:
...
Element-wise mathematical operations
The performance of many mathematical operations has been optimized. These operations are element-wise arithmetic operations on arrays, such as the one shown below:
>> b = ((a+1).*a)./(5-a);
However, looking at the documentation for .* and *, I do not see much information related to the issue. The note from Array vs. Matrix Operations regarding array operations such as .* says:
If one operand is a scalar and the other is not, then MATLAB applies the scalar to every element of the other operand. This property is known as scalar expansion because the scalar expands into an array of the same size as the other input, and then the operation executes as it normally does with two arrays.
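To make the scalar expansion concrete, here is a small illustration of my own (not taken from the docs):

a = magic(3);
isequal(2.*a, repmat(2, size(a)).*a)   % true: the scalar 2 is expanded to the size of a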
And the documentation for the matrix product * says:
If at least one input is scalar, then A*B is equivalent to A.*B and is commutative.
As we can see, the equivalence of A*B and A.*B is debatable: they are mathematically equivalent, of course, but performance-wise something strange is going on.
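A quick sanity check of the documented equivalence (my own example):

A = rand(4);
isequal(3*A, 3.*A)   % true: same numerical result
isequal(A*3, 3*A)    % true: commutative when one operand is scalar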
Given the notes above and the fact that the performance difference only occurs for logical arrays, I would consider this undocumented behaviour. My first thought was that it is due to logical occupying only 1 byte, but the speed-up does not show up with uint8. I believe that since logical carries only one bit of information, some internal optimization is possible. That still does not explain why mtimes does not benefit, and it surely comes down to the internal workings of times vs. mtimes.
One thing is certain: times does not actually fall back to mtimes for scalar operands (maybe it should?). Since R2012b does not show the effect, I believe that the optimized array operations of the new execution engine mentioned above treat logical arrays separately, enabling the special case scalar.*logical_array, while the same optimization is missing from mtimes.
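One way to probe the two code paths directly is to time the functional forms times and mtimes themselves; this is only a sketch (array size is my choice, timeit requires R2013b or newer, and exact numbers depend on machine and release):

L = rand(3000) < 0.5;      % logical array
f = @() times(2, L);       % element-wise path: scalar .* logical_array
g = @() mtimes(2, L);      % matrix-product path with a scalar operand
timeit(f)                  % fast on releases that show the effect
timeit(g)                  % noticeably slower on those releases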