Reading Stanley Lippman's "C++ Primer", I learned that decimal integer literals are signed by default (the type is the smallest of int, long, or long long in which the value fits), while octal and hexadecimal literals can be either signed or unsigned (the type is the smallest of int, unsigned int, long, unsigned long, long long, or unsigned long long in which the value fits).
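For example, here is a minimal sketch of how that rule plays out, assuming a typical LP64 platform (32-bit int, 64-bit long); on other platforms the deduced types may differ:

#include <type_traits>

int main()
{
    // 3000000000 does not fit in a 32-bit int, so as a decimal literal
    // it becomes the next signed type that fits: long (on LP64).
    static_assert(std::is_same<decltype(3000000000), long>::value,
                  "decimal literal -> long");

    // The same value written in hex fits in unsigned int, which is
    // allowed for hex/octal literals, so that is the deduced type.
    static_assert(std::is_same<decltype(0xB2D05E00), unsigned int>::value,
                  "hex literal -> unsigned int");
    return 0;
}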
What is the reason for handling these literals in different ways?
Edit: to provide some context:
int main()
{
    auto dec = 4294967295;
    auto hex = 0xFFFFFFFF;
    return 0;
}
Debugging this code in Visual Studio shows that dec has type unsigned long and hex has type unsigned int.
This contradicts what I read, but still: both variables represent the same value, yet they have different types. That bothers me.
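For anyone who wants to reproduce the observation without a debugger, a small sketch like the following prints the deduced types (the names typeid produces are implementation-specific, e.g. "j" for unsigned int with GCC/Clang, so this only shows that the two types differ):

#include <iostream>
#include <typeinfo>

int main()
{
    auto dec = 4294967295;   // decimal literal
    auto hex = 0xFFFFFFFF;   // hexadecimal literal, same value
    std::cout << typeid(dec).name() << '\n'
              << typeid(hex).name() << '\n';
    return 0;
}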