Casting a float to decimal loses precision in C#

In C# 4.0, the following cast behaves very unexpectedly:

    (decimal)1056964.63f // 1056965

Yet it works fine with two casts:

    (double)1056964.63f          // 1056964.625
    (decimal)(double)1056964.63f // 1056964.625

Is this by design?

+4
2 answers

The problem is your initial value: a float only has about 7 significant decimal digits of precision:

    float f = 1056964.63f;
    Console.WriteLine(f); // Prints 1056965

So if anything, it's the second example that is surprising.
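The 7-significant-digit rounding can be reproduced outside C#. Here is a minimal Python sketch (an illustration, not the CLR code) that emulates a 32-bit float with `struct`, since Python's own `float` is a double:

```python
import struct

def to_float32(x):
    """Round-trip x through IEEE 754 single precision; returns the
    nearest float32 value, represented as a Python float (double)."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

f = to_float32(1056964.63)
print(f)                 # 1056964.625 -> the exact stored value
print(format(f, '.7g'))  # 1056965     -> rounded to 7 significant digits,
                         #                like C#'s default float formatting
```

Formatting the exact stored value to 7 significant digits yields 1056965, matching what `Console.WriteLine` prints.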

Now, the exact value stored in f is 1056964.625, but that is the value chosen for everything from roughly 1056964.563 to 1056964.687, so not even the ".6" part is guaranteed to be correct. This is why the docs for System.Single state:

By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
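To see why the ".6" itself can't be trusted, one can step the raw bit pattern to find the neighbouring representable floats. A hedged Python sketch (helper names are my own):

```python
import struct

def float32_bits(x):
    # Interpret x as an IEEE 754 single and return its raw 32-bit pattern.
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits_to_float32(b):
    # Inverse: raw 32-bit pattern back to the float value it encodes.
    return struct.unpack('<f', struct.pack('<I', b))[0]

b = float32_bits(1056964.63)
print(bits_to_float32(b - 1))  # 1056964.5   (previous representable float)
print(bits_to_float32(b))      # 1056964.625 (the value actually stored)
print(bits_to_float32(b + 1))  # 1056964.75  (next representable float)
```

Every real number between the midpoints 1056964.5625 and 1056964.6875 collapses to the same stored value, which matches the range quoted in the answer.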

That extra information survives when you convert to double, because the conversion just widens the value without "interpreting" it at all, whereas converting it to decimal form (whether for printing or for the decimal type) goes through code that knows it can't "trust" those last two digits.
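The two conversion paths can be mimicked with Python's decimal module. This is only an analogy for the behaviour described above, not the actual CLR conversion code:

```python
import struct
from decimal import Decimal

# Exact value a C# float stores for 1056964.63f:
f = struct.unpack('<f', struct.pack('<f', 1056964.63))[0]

# float -> decimal: the conversion trusts only 7 significant digits,
# analogous to C#'s (decimal)1056964.63f
print(Decimal(format(f, '.7g')))  # 1056965

# float -> double -> decimal: widening preserves the value exactly,
# analogous to C#'s (decimal)(double)1056964.63f
print(Decimal(f))                 # 1056964.625
```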

+10

This is by design. A float may hold your [edit]number fairly accurately[/edit], but for conversion purposes it rounds it to the nearest integer, because only a few floating-point values are representable between your number and that integer (1056964.75 and 1056964.88 among them). See COMNumber::FormatSingle and COMDecimal::InitSingle in the SSCLI sources.

-1

Source: https://habr.com/ru/post/1383276/

