Recently, during a profiling session, I came across a method that looked like this in the profiler's decompiled view:
public static double dec2f(Decimal value)
{
    if (value == new Decimal(-1, -1, -1, true, (byte) 0))
        return double.MinValue;
    try
    {
        return (double) value;
    }
    catch
    {
        return double.MinValue;
    }
}
This method is part of legacy code written many years ago, and according to the profiler (which was running in sampling mode) it took too much time. My first thought was that the try-catch block prevents inlining, and I spent some time refreshing my memory on the tricks of converting a decimal value to a double. Since this conversion can never throw, I simply deleted the try-catch block.
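A quick sanity check confirms that removing it is safe: the entire range of decimal fits comfortably inside double, so the cast can never overflow (this snippet is my illustration, not part of the original code):

// decimal's full range is tiny compared to double's,
// so casting decimal to double can never throw.
Console.WriteLine((double)decimal.MaxValue); // ≈ 7.9228163E+28
Console.WriteLine((double)decimal.MinValue); // ≈ -7.9228163E+28
Console.WriteLine(double.MaxValue);          // ≈ 1.7977E+308, vastly larger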
But when I looked again at the simplified version of the source code, I wondered why the decompiled version shows that weird Decimal constructor while the source simply references Decimal.MinValue:
public static double dec2f(decimal value)
{
    if (value == Decimal.MinValue)
    {
        return Double.MinValue;
    }
    return (double)value;
}
Strange. To find out what is going on, let's look at the IL code of this method:
// public static double dec2f(Decimal value)
// {
//     if (value == new Decimal(-1, -1, -1, true, (byte) 0))
.maxstack 6
.locals init (
    [0] float64 V_0
)
// Decimal(int lo, int mid, int hi, bool isNegative, byte scale)
IL_0000: ldarg.0   // load 'value'
IL_0001: ldc.i4.m1 // lo  = -1
IL_0002: ldc.i4.m1 // mid = -1
IL_0003: ldc.i4.m1 // hi  = -1
IL_0004: ldc.i4.1  // isNegative = true
IL_0005: ldc.i4.0  // scale = 0
IL_0006: newobj instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, unsigned int8)
IL_000b: call bool [mscorlib]System.Decimal::op_Equality(valuetype [mscorlib]System.Decimal, valuetype [mscorlib]System.Decimal)
IL_0010: brfalse.s IL_001c
//     return double.MinValue;
So every time this method is called, a brand-new Decimal instance is constructed just to be compared with the argument, even though the source code refers to Decimal.MinValue! It turns out there is no such thing as a Decimal constant at the CLR level. The workaround is simple: cache Decimal.MinValue in a static field of our own and compare against that:
static readonly decimal DecimalMinValue = Decimal.MinValue;
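With that field in place, the method body stays the same except for the comparison; here is a sketch of the patched version (my reconstruction, based on the simplified source shown above):

public static double dec2f(decimal value)
{
    // Reads the cached static field instead of constructing
    // a new Decimal instance on every call.
    if (value == DecimalMinValue)
    {
        return Double.MinValue;
    }
    return (double)value;
}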
Did it help? Let's measure both variants with a benchmark:
                     Method |        Mean |    StdDev |
--------------------------- |------------ |---------- |
 CompareWithDecimalMinValue | 178.4235 ns | 0.4395 ns |
   CompareWithLocalMinValue |  98.0991 ns | 2.2803 ns |
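For reference, a benchmark producing a table like this could be set up with BenchmarkDotNet roughly as follows (a sketch of mine; only the two benchmark method names come from the results above):

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class DecimalComparisonBench
{
    static readonly decimal DecimalMinValue = decimal.MinValue;
    decimal value = 42m;

    // Comparison against the "constant": a new Decimal instance
    // is constructed on every invocation before op_Equality runs.
    [Benchmark]
    public bool CompareWithDecimalMinValue() => value == decimal.MinValue;

    // Comparison against the cached static field: just a field read.
    [Benchmark]
    public bool CompareWithLocalMinValue() => value == DecimalMinValue;
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<DecimalComparisonBench>();
}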
The root cause is the DecimalConstantAttribute. The CLR has no notion of a decimal literal, so when you declare a const decimal, the C# compiler emits a static readonly field marked with this attribute, and every read of that "constant" in the source is compiled into a call to the Decimal constructor with the stored bits.
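To see this in action, here is roughly how Decimal.MinValue looks once decompiled (a sketch; the attribute arguments are scale, sign, and the hi/mid/lo 32-bit parts of the value):

using System.Runtime.CompilerServices;

class DecimalSketch
{
    // What the C# compiler emits for a "const decimal" field:
    // a readonly field plus a DecimalConstantAttribute carrying the raw bits,
    // which callers use to reconstruct the value via the constructor.
    [DecimalConstant(0, 255, 4294967295, 4294967295, 4294967295)]
    public static readonly decimal MinValue = new decimal(-1, -1, -1, true, 0);
}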
So whenever you touch one of these Decimal "constants" on a hot path, remember that it is not really a constant at all... Curious, isn't it? And who said that C# doesn't have gotchas of its own, just like C++?