I need to insert decimal numbers into a SQL Server 2008 database. decimal() seems to be the right data type to use, but I'm having trouble understanding exactly how it works.
I found this script (scroll down for decimal):
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=95322
which lets me test different decimal settings against numbers, and there are some results I don't understand. The way I understand it, with decimal(precision, scale), precision is the number of digits to the left of the decimal point and scale is the number of digits to the right. Using this function, I can't see why some values pass and others fail.
SELECT dbo.udfIsValidDECIMAL('2511.1', 6, 3)
I have 4 digits on the left and 1 on the right, but that fails.
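In case the udf is obscuring something, I also tried what I assume is the equivalent direct cast (I'm guessing the udf just wraps a conversion like this), and it fails the same way:

-- Assumed equivalent of the udf call above; raises an arithmetic overflow error
SELECT CAST('2511.1' AS decimal(6, 3));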
SELECT dbo.udfIsValidDECIMAL('10.123456789123456789', 18, 17)
SELECT dbo.udfIsValidDECIMAL('10.123456789123456789', 18, 16)
The first fails, the second passes. There are 18 digits after the decimal point, so it seems like both should fail (or both should pass, with SQL truncating the number).
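Again, the direct casts (assuming they really are equivalent to the udf calls) show the same split behaviour:

-- decimal(18, 17): fails with an arithmetic overflow error
SELECT CAST('10.123456789123456789' AS decimal(18, 17));

-- decimal(18, 16): succeeds; the extra fractional digits get dropped
SELECT CAST('10.123456789123456789' AS decimal(18, 16));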
Perhaps I have a fundamental misunderstanding of how decimal() is supposed to work?