Many languages (starting at least with C; I am not familiar with older ones) seem to agree that integer literals can be written in three bases with the following notation:
- 0x or 0X prefix means a base-16 (hex) number
- 0 prefix means a base-8 (octal) number
- No prefix means a base-10 (decimal) number
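For concreteness, here is a minimal C sketch (C being the earliest language named above) in which all three notations spell the same value:

```c
#include <stdio.h>

int main(void) {
    int hex = 0x20;  /* base 16: 2*16 + 0 = 32 */
    int oct = 040;   /* base 8:  4*8  + 0 = 32 */
    int dec = 32;    /* base 10 */
    printf("%d %d %d\n", hex, oct, dec);  /* prints: 32 32 32 */
    return 0;
}
```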
I understand the use of hexadecimal and decimal literals. They are used universally, they have use cases where they make code more readable (a bitwise AND with 0x20000 is much clearer than with 131072), and they cannot be confused with each other (a number with an 'x' or 'X' in it makes no sense as a decimal number).
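A small sketch of that bitmask case, assuming a hypothetical flags variable, to show why the hex spelling reads better:

```c
#include <stdio.h>

int main(void) {
    unsigned flags = 0xFFFFFFFFu;
    /* 0x20000 is visibly a single-bit mask (1 << 17);
       131072 is the same value written in decimal. */
    if (flags & 0x20000) {
        printf("bit 17 is set\n");
    }
    return 0;
}
```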
However, octal literals seem like a mistake to me. For one thing, I have never seen them used, and I cannot imagine a single use case where octal would make the code more readable than hex or decimal. For another, they can be confused with decimal numbers (010 would be a perfectly good decimal number in a language without this notation).
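A minimal C sketch of that pitfall, where the leading zero silently changes the base:

```c
#include <stdio.h>

int main(void) {
    int n = 010;                 /* octal, not ten */
    printf("%d\n", n);           /* prints 8 */
    printf("%d\n", 010 == 10);   /* prints 0: the two literals differ */
    return 0;
}
```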
So, do you use octal literals for anything? Are they historical baggage from a time when they were useful, or were they simply a mistake?