In C#, the Decimal type is actually a struct with overloaded operators for all math and comparison operations that work in base 10, so its rounding errors are far less significant. A float (and a double), on the other hand, is akin to scientific notation in binary. As a result, decimal is the more accurate choice when you know the precision you need.
Run this to see the difference in accuracy:
using System;
using System.Collections.Generic;
using System.Text;

namespace FloatVsDecimal
{
    class Program
    {
        static void Main(string[] args)
        {
            Decimal _decimal = 1.0m;
            float _float = 1.0f;

            // Add 0.1 to each value five times; the float accumulates
            // binary rounding error, the decimal does not.
            for (int _i = 0; _i < 5; _i++)
            {
                Console.WriteLine("float: {0}, decimal: {1}",
                    _float.ToString("e10"),
                    _decimal.ToString("e10"));
                _decimal += 0.1m;
                _float += 0.1f;
            }
            Console.ReadKey();
        }
    }
}
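For a more minimal sketch of the same point, you can compare a single sum in double and decimal as a separate little program (the class name DoubleVsDecimalSum is just a placeholder, and the exact digits printed for the double depend on the runtime's default formatting):

using System;

class DoubleVsDecimalSum
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation, so the double
        // sum picks up a tiny error; the decimal literals are stored in
        // base 10 and the sum comes out exact.
        double doubleSum = 0.1 + 0.2;
        decimal decimalSum = 0.1m + 0.2m;

        Console.WriteLine(doubleSum == 0.3);    // False
        Console.WriteLine(decimalSum == 0.3m);  // True

        // Prints something like 0.30000000000000004 on newer runtimes;
        // older runtimes may round the displayed value to 0.3.
        Console.WriteLine(doubleSum);
    }
}

The double comparison fails because 0.1 and 0.2 are rounded when converted to binary, while the decimal literals keep their base-10 digits exactly.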
cabgef Jun 17 '09 at 18:54