The decimal type is represented as an integer scaled by a power of ten. From the documentation for decimal:
The scaling factor also preserves any trailing zeros in a Decimal number. Trailing zeros do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeros might be revealed by the ToString method if an appropriate format string is applied.
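In other words, two decimals can compare equal while printing differently: `==` ignores the scale, but `ToString` does not. A minimal illustration:

```csharp
using System;

class TrailingZeros
{
    static void Main()
    {
        decimal a = 1.23M * 100M; // internally 12300 with scale 2
        decimal b = 123M;         // internally 123 with scale 0

        Console.WriteLine(a == b);        // True: scale is ignored in comparison
        Console.WriteLine(a.ToString());  // "123.00"
        Console.WriteLine(b.ToString());  // "123"
    }
}
```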
Using GetBits, you can see that 123.00M is represented as 12300 / 10^2 (coefficient 12300, scale 2), while 123M is 123 / 10^0 (coefficient 123, scale 0).
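You can verify this directly with decimal.GetBits, which returns the four 32-bit words of the internal representation; the scale occupies bits 16–23 of the fourth word. A small sketch (the ScaleOf helper is my own):

```csharp
using System;

class GetBitsDemo
{
    // The scale factor sits in bits 16-23 of the fourth element
    // returned by decimal.GetBits.
    static int ScaleOf(decimal d) => (decimal.GetBits(d)[3] >> 16) & 0xFF;

    static void Main()
    {
        Console.WriteLine(decimal.GetBits(123.00M)[0]); // low word: 12300
        Console.WriteLine(ScaleOf(123.00M));            // scale: 2
        Console.WriteLine(decimal.GetBits(123M)[0]);    // low word: 123
        Console.WriteLine(ScaleOf(123M));               // scale: 0
    }
}
```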
Edit
I wrote a simple program that demonstrates the behavior:
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine((1.23M * 100M).ToString());
            Console.WriteLine((123M).ToString());
        }
    }
I looked at the generated IL:
    .method private hidebysig static void Main(string[] args) cil managed
    {
      .entrypoint
      // Code size       51 (0x33)
      .maxstack  6
      .locals init ([0] valuetype [mscorlib]System.Decimal CS$0$0000)
      IL_0000:  nop
      IL_0001:  ldc.i4     0x300c
      IL_0006:  ldc.i4.0
      IL_0007:  ldc.i4.0
      IL_0008:  ldc.i4.0
      IL_0009:  ldc.i4.2
      IL_000a:  newobj     instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, uint8)
      IL_000f:  stloc.0
      IL_0010:  ldloca.s   CS$0$0000
      IL_0012:  call       instance string [mscorlib]System.Decimal::ToString()
      IL_0017:  call       void [mscorlib]System.Console::WriteLine(string)
      IL_001c:  nop
      IL_001d:  ldc.i4.s   123
      IL_001f:  newobj     instance void [mscorlib]System.Decimal::.ctor(int32)
      IL_0024:  stloc.0
      IL_0025:  ldloca.s   CS$0$0000
      IL_0027:  call       instance string [mscorlib]System.Decimal::ToString()
      IL_002c:  call       void [mscorlib]System.Console::WriteLine(string)
      IL_0031:  nop
      IL_0032:  ret
    } // end of method Program::Main
We can see that the compiler folded the multiplication at compile time: for the first case it emits a constructor call with coefficient 0x300C (12300) and scale 2, while for the second it constructs the decimal from the plain integer 123, giving scale 0. The two instances therefore carry different internal representations of the same value, which is exactly what was described above.
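If the trailing zeros are unwanted, a commonly cited trick is to divide by 1 written with maximum scale: decimal division rescales its result, which drops trailing zeros. The Normalize helper name below is my own invention; treat this as a sketch of the trick rather than documented, guaranteed behavior:

```csharp
using System;

static class DecimalExtensions
{
    // Hypothetical helper: dividing by 1.000...0M (maximum scale) forces
    // the runtime to rescale the quotient, dropping trailing zeros.
    public static decimal Normalize(this decimal value) =>
        value / 1.000000000000000000000000000000000M;
}

class Program
{
    static void Main()
    {
        decimal a = 1.23M * 100M;
        Console.WriteLine(a);             // "123.00"
        Console.WriteLine(a.Normalize()); // "123"
    }
}
```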