Computers represent floating-point numbers in binary. The decimal numbers 0.6 and 0.1 have no exact binary representation, and the number of bits used to store them is, of course, finite. The stored values are therefore rounded approximations, and the effect shows up when you divide: the result of this division is not exactly 6.00000000 but something like 5.99999999, which then truncates to 5.
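A minimal sketch in C illustrating this, assuming IEEE 754 double precision (the exact printed digits may vary slightly by platform):

```c
#include <stdio.h>

int main(void) {
    /* Neither 0.6 nor 0.1 is exactly representable in binary,
       so the quotient comes out just below 6. */
    double q = 0.6 / 0.1;

    printf("%.17g\n", q);      /* typically prints 5.9999999999999991 */
    printf("%d\n", (int)q);    /* conversion truncates toward zero: 5 */
    return 0;
}
```

If you need the nearest integer rather than truncation, round before converting, e.g. with `lround(q)` from `<math.h>`.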