SQL Server 2008 - Understanding the DECIMAL data type

I need to insert decimal numbers into a SQL Server 2008 database. It seems that decimal() is the right data type to use, but I am having trouble understanding exactly how it works.

I found this script (scroll down for decimal):

http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=95322

It lets me test different decimal settings against numbers, and there are some results I don't understand. The way I understand it, with decimal(precision, scale) the precision is the number of digits to the left of the decimal point and the scale is the number of digits to the right of it. Using this function, I cannot see why some values pass and others fail.

SELECT dbo.udfIsValidDECIMAL('2511.1', 6, 3) 

I have 4 digits on the left and 1 on the right, but that fails.

 SELECT dbo.udfIsValidDECIMAL('10.123456789123456789', 18, 17)
 SELECT dbo.udfIsValidDECIMAL('10.123456789123456789', 18, 16)

The first fails and the second passes. There are 18 digits after the decimal point in both, so it seems that both should fail (or both pass, with SQL truncating the number).

Perhaps I have a fundamental misunderstanding of how decimal() is supposed to work?
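For what it's worth, the same checks can be reproduced with plain CASTs, assuming the linked UDF essentially just attempts a conversion to the target DECIMAL type (I have not verified its implementation):

    -- Plain-CAST equivalents of the UDF calls above (assumes the UDF simply tries a CAST)
    SELECT CAST(2511.1 AS DECIMAL(6,3));                   -- fails with an arithmetic overflow error
    SELECT CAST(10.123456789123456789 AS DECIMAL(18,17));  -- fails with an arithmetic overflow error
    SELECT CAST(10.123456789123456789 AS DECIMAL(18,16));  -- succeeds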

3 answers

Precision is the total number of digits that can be stored.

So the number of digits allowed to the left of the decimal point is the precision minus the scale.

For example, your first example fails because it only allows three digits to the left of the decimal point:

 SELECT dbo.udfIsValidDECIMAL('2511.1',6,3) 
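A quick way to see that rule, sketched with plain CASTs (the boundary values here are just illustrations):

    -- DECIMAL(6,3): 6 digits in total, 3 after the decimal point, so 6 - 3 = 3 before it
    SELECT CAST(999.999 AS DECIMAL(6,3));   -- fits: 3 digits left of the point, 3 right
    SELECT CAST(2511.1  AS DECIMAL(6,3));   -- arithmetic overflow: 4 digits left of the point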

DECIMAL(6,3) means: only 6 digits, 3 of which are to the right of the decimal point.

So you have 3 digits before and 3 digits after the decimal point, and of course it cannot hold 2511.1, which has four digits to the left of the decimal point. For that you need DECIMAL(7,3).

See the MSDN documentation for DECIMAL:

decimal [(p [, s])] and numeric [(p [, s])]

p (precision)

The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision of 38. The default precision is 18.

s (scale)

The maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value from 0 through p. Scale can be specified only if precision is specified. The default scale is 0; therefore, 0 <= s <= p. Maximum storage sizes vary, based on the precision.
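A small sketch applying that to the original value (the temporary table name is just for illustration):

    -- DECIMAL(7,3) leaves 7 - 3 = 4 digits before the decimal point
    SELECT CAST(2511.1 AS DECIMAL(7,3));               -- returns 2511.100

    -- The same rule applies when defining a column to insert into (#Amounts is a hypothetical name)
    CREATE TABLE #Amounts (Amount DECIMAL(7,3) NOT NULL);
    INSERT INTO #Amounts (Amount) VALUES (2511.1);     -- succeeds
    -- INSERT INTO #Amounts (Amount) VALUES (25111.1); -- would overflow: 5 digits before the point
    DROP TABLE #Amounts;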

 cast(10.123456789123456789 as decimal(18,17)) 

Precision 18 with scale 17 allows only 1 digit to the left of the decimal point, but this example has 2.

 cast(10.123456789123456789 as decimal(18,16)) 

This leaves room for two digits to the left of the decimal point, so it succeeds.
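To see what the scale does to the stored value, note that by default SQL Server rounds (rather than truncates) when it has to reduce the number of fractional digits:

    SELECT CAST(10.123456789123456789 AS DECIMAL(18,16));
    -- returns 10.1234567891234568: the 18-digit fraction is rounded to 16 places,
    -- leaving 18 - 16 = 2 digits for the integer part

    -- SELECT CAST(10.123456789123456789 AS DECIMAL(18,17));
    -- would fail: only 18 - 17 = 1 digit remains for the integer part, but 10 needs 2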


Source: https://habr.com/ru/post/1400255/