Mathematically determine the precision and scale of a decimal value

I was looking for a way to determine the scale and precision of a decimal in C#, which led me to several SO questions, but none of them seem to have correct answers, or they have misleading titles (they are really about SQL Server or some other DB, not C#), or no answers at all. The following post, I think, is closest to what I need, but even it seems wrong:

Determine the decimal precision of the input number

First, there seems to be some confusion about the difference between scale and precision. Per Google (via MSDN):

"Accuracy is the number of digits in a number. Scaling is the number of digits to the right of the decimal point in a number."

With that said, the number 12345.67890M should have a scale of 5 and a precision of 10. I have not found a single code example that accurately calculates this in C#.

I want to write two helper methods, decimal.Scale() and decimal.Precision(), so that the following unit test passes:

    [TestMethod]
    public void ScaleAndPrecisionTest()
    {
        // arrange
        var number = 12345.67890M;

        // act
        var scale = number.Scale();
        var precision = number.Precision();

        // assert
        Assert.IsTrue(precision == 10);
        Assert.IsTrue(scale == 5);
    }

...but I have yet to find a snippet that will do this, although several people suggested using decimal.GetBits(), while others said to convert it to a string and parse it.

Converting it to a string and parsing it is, in my opinion, a terrible idea, even ignoring the localization problem with the decimal point. The math behind the GetBits() method, however, is Greek to me.

Can someone describe what the calculations would look like to determine the scale and precision of a decimal value in C#?

4 answers

Here's how you get the scale using the GetBits() function:

    decimal x = 12345.67890M;
    int[] bits = decimal.GetBits(x);
    byte scale = (byte) ((bits[3] >> 16) & 0x7F);

And the best way I could find to get the precision is to remove the decimal point (i.e. use the Decimal constructor to reconstruct the number without the scale mentioned above) and then use the logarithm:

    decimal x = 12345.67890M;
    int[] bits = decimal.GetBits(x);
    // We use false for the sign (false = positive), because we don't care about it.
    // We use 0 for the last argument instead of bits[3] to remove the decimal point.
    decimal xx = new Decimal(bits[0], bits[1], bits[2], false, 0);
    int precision = (int)Math.Floor(Math.Log10((double)xx)) + 1;

Now we can put these into extension methods:

    public static class Extensions
    {
        public static int GetScale(this decimal value)
        {
            if (value == 0)
                return 0;
            int[] bits = decimal.GetBits(value);
            return (int) ((bits[3] >> 16) & 0x7F);
        }

        public static int GetPrecision(this decimal value)
        {
            if (value == 0)
                return 0;
            int[] bits = decimal.GetBits(value);
            // We use false for the sign (false = positive), because we don't care about it.
            // We use 0 for the last argument instead of bits[3] to remove the decimal point.
            decimal d = new Decimal(bits[0], bits[1], bits[2], false, 0);
            return (int)Math.Floor(Math.Log10((double)d)) + 1;
        }
    }
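A quick usage sketch (my own addition, assuming the Extensions class above is in scope):

    decimal number = 12345.67890M;
    Console.WriteLine(number.GetScale());     // 5
    Console.WriteLine(number.GetPrecision()); // 10

One caveat worth noting: the cast to double before Math.Log10 only carries about 15-17 significant digits, so for decimals with more digits than that the result can be off by one near powers of ten; a plain digit-counting loop over the scaled integer avoids this (see the sketch under the next answer).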

And here is the fiddle.


First of all, solve the "physical" problem: how do you decide which digits are significant? The thing is, "precision" has no physical meaning unless you know the absolute error.


Now there are two main ways to determine each digit (and therefore how many there are):

  • get and interpret the significant parts
  • mathematically calculate

The second way cannot detect trailing zeros in the fractional part (which may or may not be significant, depending on your answer to the "physical" problem), so I won't cover it unless requested.

For the first, looking at the Decimal interface, I see two main methods for getting the details: ToString() (a few overloads) and GetBits().

  • ToString(String, IFormatProvider) is actually a reliable way, since you can specify the format exactly.

  • The semantics of the GetBits() result are clearly documented in its MSDN article (so laments like "this is Greek to me" won't do ;) ). Decompiling with ILSpy shows that it is in fact a tuple of the object's raw data fields:

      public static int[] GetBits(decimal d)
      {
          return new int[] { d.lo, d.mid, d.hi, d.flags };
      }

    And their semantics:

    • |high|mid|low| - binary digits (96 bits) interpreted as an integer (= right justified)
    • flags :
      • bits 16 to 23 - the "power of 10 to divide the integer by" (= the number of fractional decimal digits)
        • (so (flags>>16)&0xFF is the raw value of this field)
      • bit 31 - sign (does not concern us)

    As you can see, this is very similar to IEEE 754 floats.

    Thus, the number of fractional digits is the exponent, and the total number of digits is the number of digits in the decimal representation of the 96-bit integer.
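A minimal sketch of that recipe (my own illustration, not code from the answer): read the exponent out of the flags word for the scale, then reassemble the 96-bit integer with System.Numerics.BigInteger and count its decimal digits for the precision.

    using System.Numerics;

    public static class DecimalDigits
    {
        // Scale = the power-of-ten exponent stored in bits 16-23 of the flags word.
        public static int Scale(decimal value)
        {
            int[] bits = decimal.GetBits(value);
            return (bits[3] >> 16) & 0xFF;
        }

        // Precision = number of decimal digits of the 96-bit integer |high|mid|low|.
        public static int Precision(decimal value)
        {
            int[] bits = decimal.GetBits(value);
            BigInteger mantissa = ((BigInteger)(uint)bits[2] << 64)
                                | ((BigInteger)(uint)bits[1] << 32)
                                | (uint)bits[0];
            int digits = 0;
            do { digits++; mantissa /= 10; } while (mantissa != 0);
            return digits; // note: this yields 1 for 0M; special-case zero if you prefer 0
        }
    }

For 12345.67890M this gives Scale == 5 and Precision == 10, and because the digit count is done in integer arithmetic it stays exact over the full 28-29 digit range of decimal.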


Racil's answer gives you the decimal's internal scale value, which is correct, although if the internal representation ever changes things will get interesting.

In the current format, the precision portion of decimal is fixed at 96 bits, which is 28 to 29 decimal digits depending on the number (2^96 is about 7.9 × 10^28). All .NET decimal values share this precision. Since it is constant, there is no internal value you can use to determine it.

What you're apparently after, though, is the number of digits, which we can easily determine from the string representation. We can also get the scale at the same time, or at least by the same method.

    public struct DecimalInfo
    {
        public int Scale;
        public int Length;

        public override string ToString()
        {
            return string.Format("Scale={0}, Length={1}", Scale, Length);
        }
    }

    public static class Extensions
    {
        public static DecimalInfo GetInfo(this decimal value)
        {
            string decStr = value.ToString().Replace("-", "");
            int decpos = decStr.IndexOf(".");
            int length = decStr.Length - (decpos < 0 ? 0 : 1);
            int scale = decpos < 0 ? 0 : length - decpos;
            return new DecimalInfo { Scale = scale, Length = length };
        }
    }
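A quick usage sketch (my own addition). One caveat: value.ToString() without a format provider uses the current culture's decimal separator, so on a culture that writes "," the IndexOf(".") lookup would miss it; passing CultureInfo.InvariantCulture (not part of the original answer) sidesteps the localization issue raised in the question.

    using System;
    using System.Globalization;

    var info = 12345.67890M.GetInfo();
    Console.WriteLine(info); // Scale=5, Length=10

    // Culture-safe variant of the first line inside GetInfo:
    decimal value = 12345.67890M;
    string decStr = value.ToString(CultureInfo.InvariantCulture).Replace("-", "");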

You can create two small methods (if you really want to split it into methods) using conversion to a string:

If you convert the decimal to a string ( var d = 4.555M; d.ToString(); ), you can use Split('.'): index 0 of the string array is your precision and index 1 is the scale. But decimal also has a Truncate method that gives you the precision anyway.
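A minimal sketch of that string-splitting idea (my own reading of it: the combined digit count of both parts gives the precision, and the fractional part alone gives the scale):

    using System;
    using System.Globalization;

    decimal d = 4.555M;
    string[] parts = d.ToString(CultureInfo.InvariantCulture).TrimStart('-').Split('.');
    int fractionalDigits = parts.Length > 1 ? parts[1].Length : 0;  // "555" -> 3
    int precision = parts[0].Length + fractionalDigits;             // "4" + "555" -> 4
    int scale = fractionalDigits;                                   // 3
    Console.WriteLine($"precision={precision}, scale={scale}");     // precision=4, scale=3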


Source: https://habr.com/ru/post/1235077/

