How does the decimal type work?

I looked at decimal in C#, but I was not 100% sure what it does. Is it lossy? In C#, writing 1.0000000000001f + 1.0000000000001f results in 2 when using float (double gets you 2.0000000000002, which is right). Is it possible to add two things with a decimal point and not get the correct answer?

How many decimal places can I use? I see that MaxValue is 79228162514264337593543950335, but for anything smaller than that, how many decimal places can I use?
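For illustration, a small console sketch (not from the original post) of what those limits look like; the chosen literals are just examples:

    // decimal stores a 96-bit integer mantissa plus a scale of 0-28, which
    // works out to 28-29 significant digits in total, shared between the
    // integer part and the fractional part.
    using System;

    Console.WriteLine(decimal.MaxValue);        // 79228162514264337593543950335 (29 digits, scale 0)
    Console.WriteLine(decimal.MaxValue - 1m);   // 79228162514264337593543950334
    Console.WriteLine(0.1234567890123456789012345678m); // 28 fractional digits, stored exactly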

Are there any quirks I should know about? In C# it is 128 bits; in other languages, how many bits is it, and will it work the same way as C#'s decimal (when adding, dividing, multiplying)?
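One way to see the 128-bit layout from C# itself is decimal.GetBits, which returns the four 32-bit words that make up a decimal (a sketch of my own, not from the original post):

    // Three ints hold the 96-bit mantissa (lo, mid, hi); the fourth holds
    // the sign bit and the scale. 0.1m is stored exactly as 1 * 10^-1.
    using System;

    int[] bits = decimal.GetBits(0.1m);
    Console.WriteLine(bits[0]);                // 1        (low 32 bits of the mantissa)
    Console.WriteLine(bits[1]);                // 0        (middle 32 bits)
    Console.WriteLine(bits[2]);                // 0        (high 32 bits)
    Console.WriteLine(bits[3].ToString("X8")); // 00010000 (scale = 1, sign bit clear)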

+3
3 answers

To start with, decimal is not a float. The f suffix denotes a float, aka System.Single; the m suffix denotes a decimal, aka System.Decimal. If you run your test with decimal instead of float, you will get the correct result.

1.0000000000001m + 1.0000000000001m gives the exact answer; note that double also happens to get this particular case right.
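A minimal sketch (mine, not from either post) reproducing the sums from the question with all three suffixes:

    // f -> float (System.Single, ~7 significant digits): the extra digit is
    // lost and the sum prints 2. m -> decimal (System.Decimal): the value is
    // stored and added exactly. double also gets this particular case right.
    using System;

    float   f = 1.0000000000001f + 1.0000000000001f;
    double  d = 1.0000000000001  + 1.0000000000001;
    decimal m = 1.0000000000001m + 1.0000000000001m;

    Console.WriteLine(f); // 2
    Console.WriteLine(d); // 2.0000000000002
    Console.WriteLine(m); // 2.0000000000002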

In .NET there are two kinds of floating-point types:

  • binary floating point (float/double)
  • decimal floating point (decimal)

Decimal floating point can exactly represent any value you can write out as a decimal, such as 0.1, which binary floating point cannot. Decimal gives you 28/29 significant digits, so a number like 1/3 still cannot be represented exactly, but anything you can write as a decimal within that precision is stored exactly.
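A sketch of that difference (mine, not from the answer): 0.1 is exact as a decimal but not as a double, while 1/3 is exact in neither.

    // 0.1 is a finite decimal fraction, so decimal stores it exactly, while
    // double must round it to the nearest binary fraction. 1/3 is not a
    // finite decimal either, so even decimal has to truncate it.
    using System;

    Console.WriteLine(0.1m);                  // 0.1 (exact)
    Console.WriteLine(0.1.ToString("G17"));   // 0.10000000000000001 (nearest double)
    Console.WriteLine(1m / 3m);               // 0.3333333333333333333333333333 (rounded to 28 digits)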

On the other hand, decimal has a much smaller range than double. Even though it carries 28-29 significant digits, it cannot represent very large numbers (say, 10^200) or very small numbers (say, 10^-200).
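For example (a sketch of mine, not from the answer), the range difference shows up like this:

    // double covers roughly 5e-324 .. 1.8e308, so 1e200 is no problem;
    // decimal tops out near 7.9e28, so the same value does not fit.
    using System;

    double big = 1e200;
    Console.WriteLine(big);                    // 1E+200
    try
    {
        decimal d = (decimal)big;              // conversion overflows decimal's range
        Console.WriteLine(d);
    }
    catch (OverflowException)
    {
        Console.WriteLine("10^200 does not fit in a decimal");
    }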

+12

Decimal (being a floating-point type) is not 100% accurate either. If you need exact values, use integers and keep track of the scale yourself, for example storing values from 0 to 100 (a fixed-point approach).

-3

Decimal is not 100% accurate. It is still a floating-point representation. MSDN says it gives "28-29 significant digits". The 128-bit layout is specific to .NET; in another language it will only behave the same if that language also uses the .NET decimal.


edit (in response to Jon Skeet): if you initialize a Decimal with a number that has fewer than 28 digits after the decimal point, the number will be stored correctly as long as the binary representation is exact. Since this works in a 64-bit format, I assume 128 bits will handle it perfectly. Some numbers, such as 0.1, will never be represented exactly, because they are a repeating sequence in binary.
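The repeating-binary point can be seen by printing the double nearest to 0.1 at full precision (a sketch of mine, not from the answer):

    // 0.1 has no finite binary expansion, so the double nearest to it is
    // slightly off; "G17" prints enough digits to show the difference.
    using System;

    double d = 0.1;
    Console.WriteLine(d.ToString("G17"));   // 0.10000000000000001
    Console.WriteLine(0.1 + 0.2 == 0.3);    // False, because of that rounding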

-3

Source: https://habr.com/ru/post/1793137/

