Why are some floating point numbers accurately represented in C#?

Inspired by this question, the following does not do what I expect from it:

float myFloat = 0.6f;
Console.WriteLine(myFloat);
// Output: 0.6

I would expect the above value to be printed as 0.60000002384185791 (the floating-point value actually stored for 0.6) - obviously, some mechanism is at work here that rounds the output when, strictly speaking, it shouldn't (although, as you can see from the related question, it sometimes doesn't work).

What is this mechanism and how does it work?

+3
3 answers

Console.WriteLine calls ToString on the float, using the default FormatProvider. By default, a float is formatted with only 7 significant digits, so the stored value 0.60000002384185791 rounds back to 0.6 and the imprecision is hidden.

If you want to see the value actually stored, pass an explicit format string to ToString or Console.WriteLine.
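A minimal sketch of the point above: the default formatting hides the error, while a higher-precision format string (here "G9", 9 significant digits) reveals it. The exact output strings assume current .NET formatting behavior.

```csharp
using System;

class Program
{
    static void Main()
    {
        float myFloat = 0.6f;

        // Default formatting: only 7 significant digits, so the error is invisible.
        Console.WriteLine(myFloat);                  // 0.6

        // "G9" asks for 9 significant digits, exposing the stored value.
        Console.WriteLine(myFloat.ToString("G9"));   // 0.600000024
    }
}
```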

+4

That's right: when WriteLine formats a float, it rounds the output to 7 significant digits, which is why you see 0.6 rather than the stored value.

+3

0.6 cannot be represented exactly in IEEE 754 single precision. The neighboring representable values are 0.599999964237213134765625 (0x3f199999) and 0.600000083446502685546875 (0x3f19999b). The value actually stored is the one in between, 0.60000002384185791015625 (0x3f19999a), which WriteLine then rounds back to 0.6.
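The bit pattern named above can be inspected directly. This is a small sketch using BitConverter.SingleToInt32Bits to show the raw IEEE 754 encoding, and a cast to double (with "G17") to print the stored value at full precision:

```csharp
using System;

class Program
{
    static void Main()
    {
        float f = 0.6f;

        // Raw IEEE 754 bits of the stored single-precision value.
        int bits = BitConverter.SingleToInt32Bits(f);
        Console.WriteLine($"0x{bits:x8}");              // 0x3f19999a

        // Widening to double does not change the value, only the precision
        // with which we can print it.
        Console.WriteLine(((double)f).ToString("G17")); // 0.60000002384185791
    }
}
```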

You need to either use a floating-point representation with higher precision (double) or limit the number of decimal places that WriteLine prints:

float f = 0.6f;
Console.WriteLine("{0:N6}", f);
+1

Source: https://habr.com/ru/post/1765563/
