I noticed this by accident once, and now decided to thoroughly test it.
So when I call the function:
#define Type int
#define Prm const Type &

Type testfunc1(Prm v1, Prm v2, Prm v3, Prm v4, Prm v5,
               Prm v6, Prm v7, Prm v8, Prm v9, Prm v10)
{
    return (v1 | v2 | v3 | v4 | v5 | v6 | v7 | v8 | v9 | v10);
}
100 million times:
for(Type y = 0; y < 10000; y++){
    for(Type x = 0; x < 10000; x++){
        out |= testfunc1(x, y, x, x, y, y, x, y, x, y);
    }
}
With the types int , const int and const int & I noticed that const int is faster than const int & . (Note: I use the return value to ensure that the function calls will not be optimized away.)
Why is that? I always thought that adding & would make it faster, but the tests say otherwise. I know the result would probably differ for larger data types; I have not tested them yet, since I am quite confident in these results.
My tests:
const int:    7.95 s
const int &: 10.2 s
Edit: I think this really is due to my architecture; I tested with a Sint64 type and the results were:
const Sint64:   17.5 s
const Sint64 &: 16.2 s
Edit2: Or is it? I tested with type double (which is also 64-bit), and the results puzzle me:
const double:   11.28 s
const double &: 12.34 s
Edit3: updated the loop code to match my latest tests with 64-bit types.