Whether you believe it or not, this is intended behavior, and it conforms to the IEEE 754 floating-point standard.
It is not possible to represent every decimal value, whether a massive number or a small fraction, with full precision in a fixed-size binary representation. Floating-point types in .NET, such as float and double, minimize the error when a value is stored, so when you assigned 0.2 to a variable, the runtime picked the closest representable binary value.
The number hasn't somehow degraded in memory; this rounding is deliberate. When you compare floating-point numbers, you should always allow a tolerance on either side of the comparison. Your stored representation of 0.2 is accurate to a great many decimal places; how many is enough depends on your application. The discrepancy looks glaring when printed, but it is actually a very small error. When comparing doubles and floats (with integers or with each other), always decide on an acceptable precision and accept any result within that range.
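For example, a tolerance-based comparison in C# might look like this (a minimal sketch; the `epsilon` of 1e-9 is an illustrative assumption, not a universal constant, and should be chosen to match the precision your application needs):

```csharp
using System;

class FloatComparison
{
    static void Main()
    {
        double a = 0.1 + 0.2;   // stored as roughly 0.30000000000000004
        double b = 0.3;         // stored as roughly 0.29999999999999998

        Console.WriteLine(a == b);  // False: exact equality fails

        // Compare within a tolerance instead of testing exact equality.
        // 1e-9 is only an example epsilon.
        const double epsilon = 1e-9;
        Console.WriteLine(Math.Abs(a - b) < epsilon);  // True
    }
}
```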
You can also use other types, such as decimal, which stores values in base 10 and so represents decimal fractions exactly. The trade-off is that it is much larger (128 bits) and slower than float and double.
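A quick sketch of the difference (the literals here are illustrative):

```csharp
using System;

class DecimalExample
{
    static void Main()
    {
        // decimal stores values in base 10, so decimal fractions
        // like 0.1 and 0.2 are represented exactly.
        decimal x = 0.1m + 0.2m;
        Console.WriteLine(x == 0.3m);   // True, unlike the double case

        // The trade-off: decimal is 128 bits wide, and arithmetic on it
        // is noticeably slower than on the hardware float/double types.
    }
}
```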