Why does 0.9 recurring always equal 1?

It can be shown mathematically that 0.9 recurring is equal to 1. My question, however, is not about infinity, convergence, or the maths behind this.

The above can be represented using doubles in C# as follows.

var oneOverNine = 1d / 9d;
var resultTimesNine = oneOverNine * 9d;

Given the code above, (resultTimesNine == 1d) evaluates to true.

When using decimals instead, the same evaluation yields false; my question, however, is not about the differing precision of double and decimal.
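
To make the contrast concrete, here is a minimal self-contained sketch; the class name and the decimal variant are illustrative additions, and only the double expressions come from the question itself:

using System;

class DoubleVersusDecimal
{
    static void Main()
    {
        // double: 1/9 is rounded to the nearest representable binary value,
        // and multiplying back by 9 happens to round to exactly 1.0.
        var oneOverNine = 1d / 9d;
        Console.WriteLine(oneOverNine * 9d == 1d);        // True (as observed in the question)

        // decimal: 1/9 is rounded to 28 significant decimal digits
        // (0.1111111111111111111111111111), so multiplying by 9 gives
        // 0.9999999999999999999999999999, which is not equal to 1.
        var oneOverNineDecimal = 1m / 9m;
        Console.WriteLine(oneOverNineDecimal * 9m == 1m); // False (as observed in the question)
    }
}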

Given that no type has infinite precision, how and why does double maintain such an equality where decimal does not? What is happening literally “between the lines” of the code above with respect to how the oneOverNine variable is stored in memory?

+6
2 answers

It depends on the rounding used to get the closest representable value to 1/9. It could go either way. You can investigate representability on Rob Kennedy's useful page: http://pages.cs.wisc.edu/~rkennedy/exact-float

But do not be tricked into thinking that double somehow achieves exactness. It does not. If you try 2/9, 3/9, and so on, you will find cases where the rounding goes the other way. The bottom line is that 1/9 is not exactly representable in binary floating point, so rounding occurs and your calculations are subject to rounding errors.
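
A quick way to see this for yourself is to round-trip every fraction n/9. The following sketch is illustrative rather than part of the original answer; which values of n survive the round trip depends on how each intermediate result is rounded:

using System;

class NinthsRoundTrip
{
    static void Main()
    {
        // Round-trip each fraction n/9: divide, multiply back, and compare.
        // Whether the result equals n exactly depends on how the two
        // intermediate results were rounded to the nearest double.
        for (int n = 1; n <= 9; n++)
        {
            double roundTrip = n / 9d * 9d;
            Console.WriteLine($"({n}/9) * 9 == {n}: {roundTrip == n}");
        }
    }
}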

+11

What is happening literally “between the lines” of the code above with respect to how oneOverNine is stored in memory?

What you are asking about is called IEEE 754. It is the specification that C#, its underlying .NET runtime, and most other software platforms use for storing and manipulating floating-point values. This is because IEEE 754 support is typically implemented directly at the CPU/chipset level, which makes it far more performant than an alternative implemented purely in software, and far easier when building compilers, since the operations map almost directly onto specific CPU instructions.
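
One way to inspect the stored IEEE 754 representation of oneOverNine is to reinterpret its 64 bits as an integer. BitConverter.DoubleToInt64Bits is a standard .NET method, though the program itself is an illustrative sketch rather than part of the answer:

using System;

class OneOverNineBits
{
    static void Main()
    {
        double oneOverNine = 1d / 9d;

        // Reinterpret the 8 bytes of the double as a 64-bit integer and
        // print them: 1 sign bit, 11 exponent bits, 52 significand bits
        // (IEEE 754 binary64).
        long bits = BitConverter.DoubleToInt64Bits(oneOverNine);
        Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, '0'));

        // The "R" (round-trip) format prints a decimal string that maps
        // back to exactly this double: a finite approximation of 1/9,
        // not an infinitely repeating 0.111...
        Console.WriteLine(oneOverNine.ToString("R"));
    }
}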

+2

Source: https://habr.com/ru/post/956197/

