What is an efficient way to verify the precision and scale of a numeric value?

I am writing a procedure that validates data before inserting it into the database. One of the steps is to check that numeric values fit the precision and scale of the target SQL Server type (numeric(x, y)).

I have the precision and scale from SQL Server, but what is the most efficient way in C# to get the precision and scale of the CLR value, or at least to check whether the value fits those constraints?

Currently I convert the CLR value to a string and then look for the position of the decimal point with .IndexOf(). Is there a faster way?
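The string-based check described above might look something like the following sketch (the method name `FitsViaString` and its exact digit-counting rules are illustrative, not from the question):

```csharp
using System;
using System.Globalization;

public static class StringCheckDemo
{
    // Sketch of the string/IndexOf approach: format the value invariantly,
    // then count digits on each side of the decimal point.
    public static bool FitsViaString(decimal value, int precision, int scale)
    {
        string s = Math.Abs(value).ToString(CultureInfo.InvariantCulture);
        int dot = s.IndexOf('.');
        int integralDigits = dot < 0 ? s.Length : dot;
        int fractionalDigits = dot < 0 ? 0 : s.Length - dot - 1;

        // numeric(precision, scale) allows (precision - scale) integral digits.
        return integralDigits <= precision - scale && fractionalDigits <= scale;
    }

    public static void Main()
    {
        Console.WriteLine(FitsViaString(123.45m, 5, 2));  // True
        Console.WriteLine(FitsViaString(1234.5m, 5, 2));  // False: 4 integral digits
    }
}
```

Note that `decimal.ToString` preserves trailing zeros (e.g. `1.10m` formats as "1.10"), so a value stored with a larger scale than it needs can be rejected even though SQL Server would accept it.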

+4
4 answers
 System.Data.SqlTypes.SqlDecimal.ConvertToPrecScale(new SqlDecimal(1234.56789), 8, 2)

gives 1234.57. It rounds extra digits after the decimal point to fit the target scale, but throws an error rather than dropping digits to the left of the decimal point (e.g. ConvertToPrecScale(new SqlDecimal(12344234), 5, 2)).
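A short sketch of both behaviors; I believe the overflow case surfaces as a `SqlTruncateException`, though the exact exception type is worth confirming against your runtime:

```csharp
using System;
using System.Data.SqlTypes;

public static class PrecScaleDemo
{
    public static void Main()
    {
        // Excess fractional digits are rounded to fit the target scale.
        SqlDecimal rounded = SqlDecimal.ConvertToPrecScale(new SqlDecimal(1234.56789), 8, 2);
        Console.WriteLine(rounded); // 1234.57

        // Digits to the left of the decimal point are never dropped:
        // if the integral part does not fit, the call throws instead.
        try
        {
            SqlDecimal.ConvertToPrecScale(new SqlDecimal(12344234), 5, 2);
            Console.WriteLine("no error");
        }
        catch (SqlTruncateException)
        {
            Console.WriteLine("overflow error");
        }
    }
}
```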

+6

Without throwing an exception, you can use the following method to determine whether a value fits the precision and scale constraints:

 private static bool IsValid(decimal value, byte precision, byte scale)
 {
     var sqlDecimal = new SqlDecimal(value);
     var actualDigitsToLeftOfDecimal = sqlDecimal.Precision - sqlDecimal.Scale;
     var allowedDigitsToLeftOfDecimal = precision - scale;
     return actualDigitsToLeftOfDecimal <= allowedDigitsToLeftOfDecimal
         && sqlDecimal.Scale <= scale;
 }
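A quick usage sketch of the method above (wrapped in a hypothetical `Validator` class, with the method made public so it can be called from outside):

```csharp
using System;
using System.Data.SqlTypes;

public static class Validator
{
    // Same logic as the answer's IsValid, made public for the demo.
    public static bool IsValid(decimal value, byte precision, byte scale)
    {
        var sqlDecimal = new SqlDecimal(value);
        var actualDigitsToLeftOfDecimal = sqlDecimal.Precision - sqlDecimal.Scale;
        var allowedDigitsToLeftOfDecimal = precision - scale;
        return actualDigitsToLeftOfDecimal <= allowedDigitsToLeftOfDecimal
            && sqlDecimal.Scale <= scale;
    }

    public static void Main()
    {
        // numeric(5,2): at most 3 integral digits and 2 fractional digits.
        Console.WriteLine(Validator.IsValid(123.45m, 5, 2)); // True
        Console.WriteLine(Validator.IsValid(1234.5m, 5, 2)); // False: 4 integral digits
        Console.WriteLine(Validator.IsValid(1.234m, 5, 2));  // False: scale 3 > 2
    }
}
```

One caveat: this is a strict fit check, so a value like 1.234 is rejected for numeric(5,2) even though SQL Server itself would round it on insert.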
+4

You can use decimal.Truncate(val) to get the integral part of the value and decimal.Remainder(val, 1) to get the part after the decimal point, and then check that each part fits your limits (presumably a simple > or < comparison).
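The answer only hints at the comparisons, so here is one way the idea might be fleshed out (the method name `Fits` and the digit-count checks are my own interpretation, not code from the answer):

```csharp
using System;

public static class TruncateRemainderDemo
{
    // Split the value with Truncate/Remainder, then bound each part.
    public static bool Fits(decimal value, int precision, int scale)
    {
        decimal integral = decimal.Truncate(value);
        decimal fraction = decimal.Remainder(value, 1m);

        // The integral part must fit in (precision - scale) digits.
        decimal maxIntegral = (decimal)Math.Pow(10, precision - scale);
        if (Math.Abs(integral) >= maxIntegral)
            return false;

        // The fractional part must need at most `scale` digits:
        // after scaling by 10^scale, nothing may remain after the point.
        decimal scaled = fraction * (decimal)Math.Pow(10, scale);
        return decimal.Remainder(scaled, 1m) == 0m;
    }

    public static void Main()
    {
        Console.WriteLine(Fits(123.45m, 5, 2)); // True
        Console.WriteLine(Fits(1234m, 5, 2));   // False: integral part too large
        Console.WriteLine(Fits(1.234m, 5, 2));  // False: needs scale 3
    }
}
```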

0

This uses a mathematical approach:

 private static bool IsValidSqlDecimal(decimal value, int precision, int scale)
 {
     var minOverflowValue = (decimal)Math.Pow(10, precision - scale)
                          - (decimal)Math.Pow(10, -scale) / 2;
     return Math.Abs(value) < minOverflowValue;
 }

This takes into account how SQL Server rounds values and prevents overflow errors even when the stated precision is exceeded. For example:

 DECLARE @value decimal(10,2)
 SET @value = 99999999.99499 -- Works
 SET @value = 99999999.995   -- Error
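A sketch mirroring the decimal(10,2) example above (the method is restated public inside a hypothetical `OverflowDemo` class so it can be called directly): for numeric(10,2) the threshold works out to 99999999.995, since anything below it rounds to at most 99999999.99.

```csharp
using System;

public static class OverflowDemo
{
    // Same logic as the answer's IsValidSqlDecimal, made public for the demo.
    public static bool IsValidSqlDecimal(decimal value, int precision, int scale)
    {
        var minOverflowValue = (decimal)Math.Pow(10, precision - scale)
                             - (decimal)Math.Pow(10, -scale) / 2;
        return Math.Abs(value) < minOverflowValue;
    }

    public static void Main()
    {
        // Mirrors the SQL example: the first value survives SQL Server's
        // rounding into decimal(10,2); the second would overflow.
        Console.WriteLine(IsValidSqlDecimal(99999999.99499m, 10, 2)); // True
        Console.WriteLine(IsValidSqlDecimal(99999999.995m, 10, 2));   // False
    }
}
```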
0

Source: https://habr.com/ru/post/1277658/
