Floating point numbers on most architectures (which use the IEEE 754 representation) can only represent numbers that have a finite binary expansion, i.e. numbers that can be written exactly as a binary string like 11.00100100001 (whose length is limited by the size of the floating point type, e.g. 53 significand bits for double).
Any number not of this form, i.e. not a finite sum of powers of two, such as 1/3, 1/5 or 1/10, can never be represented exactly in such a floating point variable.
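A minimal C sketch (not from the original answer) illustrating the difference: 0.125 = 2^-3 has a finite binary expansion and is stored exactly, while 0.1 is stored as the nearest representable double.

```c
#include <stdio.h>

int main(void) {
    /* 0.125 = 2^-3 has a finite binary expansion, so it is stored exactly. */
    printf("%.20f\n", 0.125);  /* prints 0.12500000000000000000 */

    /* 0.1 has no finite binary expansion; what is stored is the nearest
       representable double, which is slightly larger than 0.1. */
    printf("%.20f\n", 0.1);    /* prints 0.10000000000000000555 */
    return 0;
}
```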
Since users typically enter values like 0.1 rather than, say, 0.125, this loss of accuracy shows up quite early in settings like yours. Multiplying by this constant is one way the author found, on his platform, to get closer to what he believed the user intended. All of this is subjective. If you simply print with limited precision, printf("%0.5f", x), you should not notice any lack of precision.
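To show why limited-precision printing hides the error, here is a small C sketch (my own example, not from the answer) using the classic 0.1 + 0.2 case:

```c
#include <stdio.h>

int main(void) {
    double x = 0.1 + 0.2;   /* actually 0.30000000000000004440... in double */
    printf("%0.5f\n", x);   /* prints 0.30000 -- the error is rounded away */
    printf("%.17g\n", x);   /* prints 0.30000000000000004 -- error visible */
    return 0;
}
```

With 5 decimal places the rounding in the output absorbs the representation error, which is why short-precision printing usually looks "exact".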