Floating-point numbers are stored in x86/x64 processors in base 2, not in base 10: https://en.wikipedia.org/wiki/Double-precision_floating-point_format . Because of this, many decimal fractions cannot be represented exactly: the decimal value 0.1, for example, is stored as a nearby binary value, which may show up as something like 0.1000000000000003 or 0.0999999999999997, even though the base-2 representation is very close to decimal 0.1. Due to this inaccuracy, printing a floating-point number in decimal format and then parsing it back can yield a value slightly different from the one that was stored in binary memory before printing.
For some applications, such errors are unacceptable: they need to recover exactly the same binary floating-point number that existed before printing (for example, one application exports floating-point data and another imports it). To achieve this, you can export and import doubles in hexadecimal format. Since 16 is a power of 2, binary floating-point numbers can be represented exactly in hexadecimal.
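As a quick illustration (a minimal sketch in plain C; the exact digits printed may vary slightly between C runtimes), printing 0.1 with many decimal digits shows that the stored value is not exactly 0.1:

#include <stdio.h>

int main(void)
{
    double x = 0.1;
    /* The stored value is the double nearest to 0.1, not 0.1 itself;
       with enough decimal digits the difference becomes visible,
       typically something like 0.1000000000000000055511151 */
    printf("%.25f\n", x);
    return 0;
}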
printf and scanf have been extended with the %a format specifier, which allows you to print and parse hexadecimal floating-point numbers. However, MSVC++ does not support the %a format specifier for scanf:
Qualifiers a and A (see printf type field characters) are not available with scanf.
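If you are stuck with a scanf that lacks %a, one possible workaround is to parse the string with strtod instead - a sketch, assuming your C runtime's strtod implements the C99 hexadecimal syntax (older MSVC runtimes may not):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hexadecimal text such as printf("%.13a", ...) would produce */
    const char *text = "0x1.999999999999ap-4";
    char *end = NULL;

    /* On C99-conforming runtimes strtod accepts hexadecimal floating-point
       input; 'end' tells you how much of the string was consumed. */
    double value = strtod(text, &end);
    printf("parsed %s as %.13a\n", text, value);
    return 0;
}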
To print a double with full precision in hexadecimal format, you should request 13 hexadecimal digits after the point, which correspond to 13 * 4 = 52 bits of the significand:
double x = 0.1;
printf("%.13a", x);
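To check that this really reproduces the original bits, here is a sketch of a full round trip through text (assuming a runtime where both printf and scanf support %a, i.e. not the MSVC scanf mentioned above):

#include <stdio.h>
#include <string.h>

int main(void)
{
    double original = 0.1;
    char buffer[64];
    double parsed = 0.0;

    /* 13 hex digits after the point cover all 52 bits of the significand. */
    snprintf(buffer, sizeof buffer, "%.13a", original);
    printf("text form: %s\n", buffer);   /* typically 0x1.999999999999ap-4 */

    /* Parse it back and compare the raw bytes of the two doubles. */
    sscanf(buffer, "%la", &parsed);
    printf("bit-exact round trip: %s\n",
           memcmp(&original, &parsed, sizeof original) == 0 ? "yes" : "no");
    return 0;
}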
Read more about hexadecimal floating point, with code and examples (note that, at least for MSVC++ 2013, the plain %a specifier in printf prints 6 hexadecimal digits after the point, not 13 - this is pointed out at the end of the article).
In particular, for constants, as asked in the question, hexadecimal constants can be convenient for testing the application on exact hard-coded floating-point inputs. For instance, your error may be reproducible for 0.1000000000000003 but not for 0.0999999999999997, so you need a hard-coded hexadecimal value to specify exactly which binary representation of the decimal number 0.1 you mean.
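If your compiler accepts hexadecimal floating-point literals (standard in C99, and in C++ since C++17 - an assumption about your toolchain), such values can be hard-coded directly; the two literals below are hypothetical test values:

/* Two adjacent doubles near 0.1, written as exact hexadecimal literals. */
double nearest_to_0_1 = 0x1.999999999999ap-4;  /* the double closest to 0.1 */
double one_ulp_below  = 0x1.9999999999999p-4;  /* its immediate neighbor    */

This pins down exactly which of the nearby binary values the test receives, independently of how the decimal text would be parsed.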