I have a project where I deal with large numbers (ns-timestamps) that do not fit into an int. So I want to use, for example, int64_t, and I'm currently writing a test case (yes!).
To test the behavior for a large number, I started with something like
int64_t val = 2 * std::numeric_limits<int>::max();
qDebug() << "long val" << val;
which prints
long val -2
(just as if I were defining val as int).
But if I write
int64_t val = std::numeric_limits<int>::max();
val *= 2;
qDebug() << "long val" << val;
I get
long val 4294967294
which looks right.
So, to me, it looks as if 2*max() is first stored in an int (and truncated at that step) and only then copied into the int64_t. Why is this happening? The compiler knows that the result is of type int64_t, so it should evaluate 2*max() in 64 bits directly.