I need to convert Float to Decimal(28,10) in SQL Server. My problem is that, because of how float is stored and how it gets converted, a straight cast can produce a value that looks wrong to my users.
For instance:
Float: 280712929.22
Cast as Decimal: 280712929.2200000300
What I think I want: 280712929.2200000000
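For reference, the cast I am doing is essentially just this (a minimal repro with a local variable standing in for my actual column):

    DECLARE @f float = 280712929.22;
    -- direct cast to the target type; this is what produces the stray digits above
    SELECT CAST(@f AS decimal(28,10));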
I am somewhat versed in how float works (that it is an approximate data type, etc.), but admittedly not enough to understand why it adds the 300 at the end. Is that just garbage produced as a side effect of the conversion, or is it somehow a more accurate picture of what the float actually stores? It looks to me like it is pulling precision out of thin air.
Ultimately, I need it to be accurate, but also to look "right." I think I need to get to that lower-precision number first, so that it then looks as if I just appended trailing zeros (there is a rough sketch of what I mean after the examples below). Is that possible? Is it a good or bad idea, and why? Other suggestions are welcome.
Some other examples:
Float: 364322379.5731
Cast as Decimal: 364322379.5730999700
What I want: 364322379.5731000000

Float: 10482308902
Cast as Decimal: 10482308901.9999640000
What I want: 10482308902.0000000000
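The "lower-precision number first" idea I mentioned above would look roughly like this. The intermediate scale of 4 is just a guess on my part; it assumes the source data never carries more than about 4 meaningful decimal places:

    DECLARE @f float = 280712929.22;
    -- round at a coarser scale first, then widen to (28,10) so the extra
    -- places come back as trailing zeros instead of float noise
    SELECT CAST(CAST(@f AS decimal(28,4)) AS decimal(28,10));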
Side note: the new database table I am putting these values into will be read by my users. They really only need two decimal places right now, but that may change in the future, which is why we decided on Decimal(28,10). The long-term goal is to convert the float columns I pull this data from over to decimal as well.
EDIT: Sometimes the floats have more decimal places than I will ever need, for example -0.628475064730907. In that situation, casting it to -0.6284750647 is perfectly fine. Essentially, I need the result to pad the float with zeros on the end until it has 10 decimal places.
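For that case, the plain cast already seems to do the rounding I want (same kind of sketch as above, with a placeholder variable):

    DECLARE @f float = -0.628475064730907;
    -- decimal(28,10) keeps 10 places, so the extra digits are simply rounded off
    SELECT CAST(@f AS decimal(28,10));  -- should come back as -0.6284750647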