Of course, it's just a matter of understanding that none of 0.6, 0.6f, 0.7, and 0.7f is an exact value. Each is the closest representable approximation in its type. The exact values that are stored for these four literals are:
0.6f => 0.60000002384185791015625
0.6  => 0.59999999999999997779553950749686919152736663818359375
0.7f => 0.699999988079071044921875
0.7  => 0.6999999999999999555910790149937383830547332763671875
With these values in mind, it should be clear why you get the results you see.
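You can verify these exact stored values yourself. A minimal Python sketch (Python's `Decimal(float)` constructor shows the exact binary value rather than a rounded display; the `struct` round-trip simulates a 32-bit `float`, which Python lacks natively):

```python
from decimal import Decimal
import struct

def as_float32(x):
    # Round-trip through IEEE-754 single precision,
    # i.e. the value a `float` literal like 0.6f actually stores.
    return struct.unpack('f', struct.pack('f', x))[0]

# Decimal(float) prints the exact binary value, not a rounded display.
print(Decimal(as_float32(0.6)))  # 0.60000002384185791015625
print(Decimal(0.6))              # 0.59999999999999997779553950749686919152736663818359375
print(Decimal(as_float32(0.7)))  # 0.699999988079071044921875
print(Decimal(0.7))              # 0.6999999999999999555910790149937383830547332763671875
```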
To think of it differently, imagine that you have two decimal floating point types, one with 4 digits of precision and one with 8 digits of precision. Now let's see how 1/3 and 2/3 will be represented:
1/3, 4 digits => 0.3333
1/3, 8 digits => 0.33333333
2/3, 4 digits => 0.6667
2/3, 8 digits => 0.66666667
Thus, for 1/3 the lower-precision value is less than the higher-precision value, but for 2/3 it is the other way around, because the last digit rounds up. The same thing happens with float and double, only in binary instead of decimal.
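The decimal analogy above can be reproduced with Python's `decimal` module, which lets you set the number of significant digits explicitly (the 4- and 8-digit contexts here stand in for the two hypothetical types):

```python
from decimal import Decimal, localcontext

# Divide 1/3 and 2/3 under 4-digit and 8-digit decimal precision.
for digits in (4, 8):
    with localcontext() as ctx:
        ctx.prec = digits  # significant decimal digits for this context
        print(f"1/3 at {digits} digits:", Decimal(1) / 3)  # 0.3333 / 0.33333333
        print(f"2/3 at {digits} digits:", Decimal(2) / 3)  # 0.6667 / 0.66666667
```

Note that 2/3 rounds up at the last digit in both contexts, which is exactly why the lower-precision value ends up greater than the higher-precision one.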