This is a follow-up to the question TSQL Number Rounding Question. Here is the same code:
IF OBJECT_ID(N'dbo.rounding_testing') IS NOT NULL
    DROP FUNCTION dbo.rounding_testing;
GO
CREATE FUNCTION dbo.rounding_testing
(
    @value FLOAT,
    @digit INT
)
RETURNS FLOAT
BEGIN
    DECLARE @factor FLOAT,
            @result FLOAT;
    SELECT @factor = POWER(10, @digit);
    SELECT @result = FLOOR(@value * @factor + 0.4);
    RETURN @result;
END;
GO
SELECT dbo.rounding_testing(5.7456, 3);
SELECT FLOOR(5.7456 * 1000 + 0.4);
When you execute this code, you get:
5745
5746
However, when I change the data type from float to real in the function, like this:
IF OBJECT_ID(N'dbo.rounding_testing') IS NOT NULL
    DROP FUNCTION dbo.rounding_testing;
GO
CREATE FUNCTION dbo.rounding_testing
(
    @value REAL,
    @digit INT
)
RETURNS REAL
BEGIN
    DECLARE @factor REAL,
            @result REAL;
    SELECT @factor = POWER(10, @digit);
    SELECT @result = FLOOR(@value * @factor + 0.4);
    RETURN @result;
END;
GO
SELECT dbo.rounding_testing(5.7456, 3);
SELECT FLOOR(5.7456 * 1000 + 0.4);
This time the output is:
5746
5746
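Incidentally, the divergence also reproduces without a function. Here is a minimal sketch using local variables (the variable names are mine), so the only difference between the two branches is the declared type:

-- Minimal repro: the intermediate expression is computed in double vs. single precision.
DECLARE @f FLOAT = 5.7456,
        @r REAL  = 5.7456;

SELECT FLOOR(@f * 1000 + 0.4) AS float_result,  -- 5745, matching the FLOAT version above
       FLOOR(@r * 1000 + 0.4) AS real_result;   -- 5746, matching the REAL version above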
Regarding the two answers to that question: I ran a few more tests, and it is still unclear to me. First, I have read the MSDN documentation on the float and real types and on the decimal and numeric types, and I know how SQL Server stores them internally. For float and real, the IEEE 754 standard is used; for decimal and numeric, see How does SQL Server store decimal values internally?. I want to know EXACTLY which step causes the loss of precision in the float case. Therefore, I created the following table:
USE tempdb;
GO
IF OBJECT_ID('dbo.mytable') IS NOT NULL
    DROP TABLE dbo.mytable;
CREATE TABLE dbo.mytable
(
    a NUMERIC(5, 4),
    b FLOAT,
    c FLOAT,
    d FLOAT,
    e FLOAT,
    f REAL,
    g REAL,
    h REAL,
    i REAL
);
GO
Then I manually inserted the intermediate results into this table:
INSERT INTO dbo.mytable
VALUES
(
    5.7456,
    CAST(5.7456 AS FLOAT),
    CAST(POWER(10, 3) AS FLOAT),
    CAST(CAST(5.7456 AS FLOAT) * CAST(POWER(10, 3) AS FLOAT) AS FLOAT),
    CAST(CAST(5.7456 AS FLOAT) * CAST(POWER(10, 3) AS FLOAT) + 0.4 AS FLOAT),
    CAST(5.7456 AS REAL),
    CAST(POWER(10, 3) AS REAL),
    CAST(CAST(5.7456 AS REAL) * CAST(POWER(10, 3) AS REAL) AS REAL),
    CAST(CAST(5.7456 AS REAL) * CAST(POWER(10, 3) AS REAL) + 0.4 AS REAL)
);
After that, I used DBCC PAGE to examine the stored record. This is the raw record data:
0000000000000000:   10003900 0170e000 002497ff 907efb16 40000000   ..9..pà..$ÿ.~û .@...
0000000000000014:   0000408f 40999999 999971b6 40ffffff ffff71b6   ..@. @.....q¶@ÿÿÿÿÿq¶
0000000000000028:   40f5dbb7 4000007a 44cd8cb3 450090b3 45090000   @õÛ·@..zDͳE..³E ..
000000000000003C:   00                                             .
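For completeness, this is roughly how such a dump can be produced. The file and page numbers below are placeholders (they differ on every system), and %%physloc%% / sys.fn_PhysLocFormatter are undocumented helpers:

-- Find the (file:page:slot) that holds the row; both helpers are undocumented.
SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS phys_loc, *
FROM dbo.mytable;

-- Send DBCC output to the client, then dump the page with print option 3.
-- Replace 1 (file id) and 312 (page id) with the values from phys_loc.
DBCC TRACEON(3604);
DBCC PAGE('tempdb', 1, 312, 3);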
Here is my interpretation of the raw data:
Column  Stuff inserted                                                             Hex (little-endian)       Interpretation
------  -------------------------------------------------------------------------  ------------------------  -------------------------------------------------------------------------
a       5.7456                                                                     01 70 E0 00 00            decimal 57456; the decimal point position is stored in the catalog views
b       CAST(5.7456 AS FLOAT)                                                      24 97 FF 90 7E FB 16 40   IEEE 754 double precision, 5.7456
c       CAST(POWER(10, 3) AS FLOAT)                                                00 00 00 00 00 40 8F 40   IEEE 754 double precision, 1000
d       CAST(CAST(5.7456 AS FLOAT) * CAST(POWER(10, 3) AS FLOAT) AS FLOAT)         99 99 99 99 99 71 B6 40   IEEE 754 double precision, 5745.6
e       CAST(CAST(5.7456 AS FLOAT) * CAST(POWER(10, 3) AS FLOAT) + 0.4 AS FLOAT)   FF FF FF FF FF 71 B6 40   IEEE 754 double precision, 5746
f       CAST(5.7456 AS REAL)                                                       F5 DB B7 40               IEEE 754 single precision, 5.7456
g       CAST(POWER(10, 3) AS REAL)                                                 00 00 7A 44               IEEE 754 single precision, 1000
h       CAST(CAST(5.7456 AS REAL) * CAST(POWER(10, 3) AS REAL) AS REAL)            CD 8C B3 45               IEEE 754 single precision, 5745.6
i       CAST(CAST(5.7456 AS REAL) * CAST(POWER(10, 3) AS REAL) + 0.4 AS REAL)      00 90 B3 45               IEEE 754 single precision, 5746
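As a sanity check on the hex column, the same bit patterns can be read back by casting the stored values to binary. Note that this prints the bytes in big-endian order, i.e. reversed relative to the little-endian page dump above:

-- Bit patterns of the stored values; bytes are reversed vs. the page dump.
SELECT CONVERT(BINARY(8), b) AS b_bits,  -- expect 0x4016FB7E90FF9724
       CONVERT(BINARY(8), d) AS d_bits,  -- expect 0x40B6719999999999
       CONVERT(BINARY(4), f) AS f_bits,  -- expect 0x40B7DBF5
       CONVERT(BINARY(4), h) AS h_bits   -- expect 0x45B38CCD
FROM dbo.mytable;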
From the hexadecimal interpretation, it seems to me that no precision is lost at any of the steps, whether float or real is used. So where exactly does the precision loss come from?
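P.S. One more check that may be relevant: printing the values with all 17 significant digits rules out rounding in the display itself. Conversion style 3 for float requires a newer SQL Server version; FORMAT(d, 'G17') would be an alternative:

-- Style 3 prints a float with enough digits for lossless round-tripping.
SELECT CONVERT(VARCHAR(30), d, 3) AS d_full_precision,
       CONVERT(VARCHAR(30), e, 3) AS e_full_precision
FROM dbo.mytable;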