I want to efficiently ensure that a decimal value has at least N decimal places (N = 3 in the examples below) before performing arithmetic operations.
Obviously I could format with "0.000######....#"
and then parse, but that is relatively inefficient, and I'm looking for a solution that avoids converting to/from strings.
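For reference, the string round trip I want to avoid looks roughly like this (I'm using InvariantCulture here just to keep the sketch deterministic):

```csharp
using System;
using System.Globalization;

decimal d = 1.23M;
// Format with at least 3 decimal places, then parse back: works, but
// allocates a string and performs two conversions per value.
decimal padded = decimal.Parse(
    d.ToString("0.000", CultureInfo.InvariantCulture),
    CultureInfo.InvariantCulture);
Console.WriteLine(padded.ToString(CultureInfo.InvariantCulture)); // 1.230
```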
I tried the following solution:
decimal d = 1.23M;
d = d + 1.000M - 1;
Console.WriteLine("Result = " + d.ToString());
which seems to work for all values <= Decimal.MaxValue - 1
when compiling using Visual Studio 2015 in both Debug and Release builds.
But I suspect that compilers are allowed to optimize away (1.000 - 1). Is there anything in the C# specification that guarantees this will always work?
Or is there a better solution, for example using Decimal.GetBits?
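For what it's worth, one GetBits-based sketch I have in mind (the helper name EnsureMinScale is mine, not a framework method) would read the scale from the flags word and, when it is too small, add a zero constructed at run time with the desired scale, so no compile-time constant representation is involved:

```csharp
using System;
using System.Globalization;

class Program
{
    // Sketch: pad a decimal to at least minScale decimal places without
    // going through a string. minScale must be <= 28 (the decimal limit).
    static decimal EnsureMinScale(decimal value, byte minScale)
    {
        // The scale is stored in bits 16-23 of the fourth element
        // returned by decimal.GetBits.
        int scale = (decimal.GetBits(value)[3] >> 16) & 0xFF;
        if (scale >= minScale)
            return value;
        // Adding a zero whose scale is minScale yields a result with
        // scale max(scale, minScale). Because this zero is built via the
        // decimal(lo, mid, hi, isNegative, scale) constructor at run
        // time, the compiler cannot normalize it away.
        return value + new decimal(0, 0, 0, false, minScale);
    }

    static void Main()
    {
        Console.WriteLine(EnsureMinScale(1.23M, 3)
            .ToString(CultureInfo.InvariantCulture));   // 1.230
        Console.WriteLine(EnsureMinScale(1.2345M, 3)
            .ToString(CultureInfo.InvariantCulture));   // 1.2345 (already enough places)
    }
}
```

This relies on the same runtime addition behavior as decimal.Add below, but sidesteps the question of how the compiler encodes the 0.000M literal.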
UPDATE
Following Jon Skeet's answer: I had already tried adding 0.000M
, but this did not work on dotnetfiddle. I was therefore surprised to see that Decimal.Add(d, 0.000M)
does work. This dotnetfiddle compares d + 0.000M
and decimal.Add(d, 0.000M)
: the results differ on dotnetfiddle, but they are identical when the same code is compiled using Visual Studio 2015:
decimal d = 1.23M;
decimal r1 = decimal.Add(d, 0.000M);
decimal r2 = d + 0.000M;
Console.WriteLine("Result1 = " + r1.ToString());
Console.WriteLine("Result2 = " + r2.ToString());
So at least some of this behavior seems to depend on the compiler, which is not reassuring.