Why do two decimals with the same value format differently?

I have two small pieces of code. In my opinion, they should produce the same string, but they do not:

(1.23M * 100M).ToString() 

leads to:

 123,00 

and

 (123M).ToString() 

leads to:

 123 

My very simple question is: can someone explain to me why this (strange?) behavior is happening?

2 answers

These are two different values at the bit level. Unlike double , decimal is not automatically normalized: it effectively remembers that at some point you had two decimal places. You can see exactly the same difference without any multiplication:

 Console.WriteLine(123m);
 Console.WriteLine(123.00m); 

The documentation is somewhat unclear (from what I can see) about exactly how the scale of the result of operations on decimal values is determined, i.e. how many decimal places are kept. (I would not be surprised to learn that it is standardized somewhere...)
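For what it's worth, the behavior observed on .NET (and described in the C# language specification, ECMA-334) is that addition and subtraction keep the larger of the two operand scales, while multiplication adds the scales. A small sketch, using the invariant culture so the output uses `.` rather than the `,` shown in the question:

```csharp
using System;
using System.Globalization;

class DecimalScaleDemo
{
    static void Main()
    {
        CultureInfo inv = CultureInfo.InvariantCulture;

        // Addition: the result keeps the larger operand scale.
        Console.WriteLine((1.0m + 2.00m).ToString(inv));  // "3.00"

        // Multiplication: the operand scales are added.
        Console.WriteLine((1.23m * 100m).ToString(inv));  // "123.00" (scale 2 + 0)
        Console.WriteLine((2.0m * 3.00m).ToString(inv));  // "6.000"  (scale 1 + 2)
    }
}
```

This is why `1.23M * 100M` in the question carries a scale of 2 and prints two trailing zeros.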


The decimal type is represented as an integer coefficient scaled by a power of 10. From the documentation for decimal :

The scaling factor also preserves any trailing zeros in a Decimal number. Trailing zeros do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeros might be revealed by the ToString method if an appropriate format string is applied.

Using GetBits , you can see that 123.00M is represented as 12300 / 10^2, while 123M is 123 / 10^0.
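This can be checked directly: `decimal.GetBits` returns four ints holding the 96-bit coefficient (low/mid/high words), and bits 16-23 of the last element hold the power-of-ten scale. A minimal sketch:

```csharp
using System;

class DecimalBitsDemo
{
    static void Main()
    {
        foreach (decimal d in new[] { 123M, 123.00M, 1.23M * 100M })
        {
            int[] bits = decimal.GetBits(d);
            // bits[0..2] = 96-bit integer coefficient (only bits[0] is
            // nonzero for these small values); bits[3] bits 16-23 = scale.
            int scale = (bits[3] >> 16) & 0xFF;
            Console.WriteLine($"coefficient={bits[0]}, scale={scale}");
        }
        // Prints:
        // coefficient=123, scale=0
        // coefficient=12300, scale=2
        // coefficient=12300, scale=2
    }
}
```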

Edit

Here is a simple program that demonstrates the problem:

 class Program
 {
     static void Main(string[] args)
     {
         Console.WriteLine((1.23M * 100M).ToString());
         Console.WriteLine((123M).ToString());
     }
 } 

I looked at the generated IL:

 .method private hidebysig static void Main(string[] args) cil managed
 {
   .entrypoint
   // Code size 51 (0x33)
   .maxstack 6
   .locals init ([0] valuetype [mscorlib]System.Decimal CS$0$0000)
   IL_0000: nop
   IL_0001: ldc.i4 0x300c
   IL_0006: ldc.i4.0
   IL_0007: ldc.i4.0
   IL_0008: ldc.i4.0
   IL_0009: ldc.i4.2
   IL_000a: newobj instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, uint8)
   IL_000f: stloc.0
   IL_0010: ldloca.s CS$0$0000
   IL_0012: call instance string [mscorlib]System.Decimal::ToString()
   IL_0017: call void [mscorlib]System.Console::WriteLine(string)
   IL_001c: nop
   IL_001d: ldc.i4.s 123
   IL_001f: newobj instance void [mscorlib]System.Decimal::.ctor(int32)
   IL_0024: stloc.0
   IL_0025: ldloca.s CS$0$0000
   IL_0027: call instance string [mscorlib]System.Decimal::ToString()
   IL_002c: call void [mscorlib]System.Console::WriteLine(string)
   IL_0031: nop
   IL_0032: ret
 } // end of method Program::Main 

We can see that the compiler constant-folded the multiplication and, for the case with a decimal point, emitted a call to the Decimal constructor that takes an explicit scale: it loads the coefficient 0x300c (12300) and the scale 2. The two instances use different representations, which is exactly what I described above.
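As an aside (not part of the answer above, just a commonly used trick): if the trailing zeros are unwanted, formatting with "G29" drops them, since 29 is the maximum precision of decimal and a G format with an explicit precision trims trailing zeros:

```csharp
using System;
using System.Globalization;

class TrimZerosDemo
{
    static void Main()
    {
        decimal d = 1.23M * 100M;  // stored internally as 12300 with scale 2
        Console.WriteLine(d.ToString(CultureInfo.InvariantCulture));         // "123.00"
        Console.WriteLine(d.ToString("G29", CultureInfo.InvariantCulture));  // "123"
    }
}
```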


Source: https://habr.com/ru/post/1490172/

