It can be shown mathematically that 0.999… (0.9 recurring) is equal to 1. However, this question is not about infinity, convergence, or the mathematics behind that result.
The statement above can be represented using doubles in C# as follows:
var oneOverNine = 1d / 9d;
var resultTimesNine = oneOverNine * 9d;
With the code above, (resultTimesNine == 1d) evaluates to true.
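For reference, here is the same snippet in a directly runnable form; the Console.WriteLine call is my addition, purely to print the result of the comparison:

using System;

var oneOverNine = 1d / 9d;
var resultTimesNine = oneOverNine * 9d;

// Prints True: the double result of (1d / 9d) * 9d rounds back to exactly 1.0.
Console.WriteLine(resultTimesNine == 1d);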
When using decimal instead, the same comparison evaluates to false, but my question is not about the differing precision of double and decimal.
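For comparison, a minimal sketch of the decimal version I am referring to (the variable names are mine, chosen to mirror the double snippet):

using System;

// decimal carries 28-29 significant digits, so 1m / 9m is stored as
// 0.1111111111111111111111111111, and multiplying it back by 9 gives
// 0.9999999999999999999999999999, which is not equal to 1m.
var oneOverNineDecimal = 1m / 9m;
var resultTimesNineDecimal = oneOverNineDecimal * 9m;
Console.WriteLine(resultTimesNineDecimal == 1m); // prints False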
Since neither type has infinite precision, how and why does double preserve this equality when decimal does not? What literally happens "between the lines" of the code above, in terms of how the oneOverNine variable is stored in memory?
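To make the "stored in memory" part of the question concrete, here is one way to inspect what the double actually holds, using standard .NET APIs (ToString("G17") for a round-trippable decimal form and BitConverter.DoubleToInt64Bits for the raw 64-bit pattern):

using System;

var oneOverNine = 1d / 9d;

// 17 significant digits, enough to round-trip the exact stored double.
Console.WriteLine(oneOverNine.ToString("G17"));

// The raw IEEE 754 bit pattern of the value as it sits in memory, in hex.
Console.WriteLine(BitConverter.DoubleToInt64Bits(oneOverNine).ToString("X16"));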