There are two different problems here. First, as mentioned in the comments, a binary floating point number cannot represent the number 8.7 exactly. Swift uses the IEEE 754 standard to represent single and double precision floating point numbers, and if you assign
let x = 8.7
then the closest representable number is stored in x, which is
8.699999999999999289457264239899814128875732421875
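You can make this visible yourself. A minimal sketch, assuming a platform whose printf implementation prints the full decimal expansion (macOS and Linux do); String(format:) goes through the C printf machinery rather than Double's own description:

import Foundation

let x = 8.7
// Ask for 48 fraction digits to reveal the exact stored value:
print(String(format: "%.48f", x))
// 8.699999999999999289457264239899814128875732421875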
Much more information about this can be found in the excellent Q&A Is floating point math broken?.
The second problem: why is the number sometimes printed as "8.7" and sometimes as "8.6999999999999993"?
let str = "8.7" print(Double(str)) // Optional(8.6999999999999993) let x = 8.7 print(x) // 8.7
Is Double("8.7") different from 8.7 ? Is more accurate than another?
To answer these questions, we need to know how the print() function works:
- If the argument conforms to CustomStringConvertible, the print function calls its description property and prints the result to standard output.
- Otherwise, if the argument conforms to CustomDebugStringConvertible, the print function calls its debugDescription property and prints the result to standard output (both cases are illustrated in the sketch after this list).
- Otherwise, some other mechanism is used. (Not important for our purpose here.)
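To illustrate this dispatch, here is a minimal sketch with a hypothetical Temperature type (not part of the question) that conforms to both protocols:

struct Temperature: CustomStringConvertible, CustomDebugStringConvertible {
    let celsius: Double
    var description: String { return "\(celsius) °C" }
    var debugDescription: String { return "Temperature(celsius: \(celsius))" }
}

let t = Temperature(celsius: 8.7)
print(t)       // uses description:      8.7 °C
debugPrint(t)  // uses debugDescription: Temperature(celsius: 8.7)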
Double conforms to CustomStringConvertible, therefore
let x = 8.7
print(x) // 8.7
produces the same output as
let x = 8.7
print(x.description) // 8.7
But what happens in
let str = "8.7" print(Double(str)) // Optional(8.6999999999999993)
Double(str) is an optional, and struct Optional does not conform to CustomStringConvertible, but to CustomDebugStringConvertible. Therefore the print function calls the debugDescription property of Optional, which in turn calls debugDescription of the underlying Double. So, apart from being wrapped in an optional, the number is printed the same way as in
let x = 8.7
print(x.debugDescription) // 8.6999999999999993
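As a quick check, here is a minimal sketch (the digits shown are those of the Swift version discussed in this answer; newer Swift versions print a shorter representation such as Optional(8.7)):

let d = Double("8.7")       // Double?
print(d as Any)             // Optional(8.6999999999999993)  (Optional's debugDescription)
print(d!)                   // 8.7  (the unwrapped Double's description)
print(d!.debugDescription)  // 8.6999999999999993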
But what is the difference between description and debugDescription for floating point values? From the Swift source code you can see that both ultimately call swift_floatingPointToString in Stubs.cpp, with the Debug parameter set to false and true, respectively. This controls the precision of the conversion of the number to a string:
int Precision = std::numeric_limits<T>::digits10;
if (Debug) {
    Precision = std::numeric_limits<T>::max_digits10;
}
For the meaning of these constants, see http://en.cppreference.com/w/cpp/types/numeric_limits :
- digits10 - the number of decimal digits that can be represented without change.
- max_digits10 - the number of decimal digits necessary to differentiate all values of this type.
So description creates a string with fewer decimal digits, such that the string can be converted to a Double and back to a string giving the same result. debugDescription creates a string with more decimal digits, so that any two different floating point values produce different strings.
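A minimal sketch of both properties (the exact digit strings depend on the Swift version, but the round-trip behavior is the same):

let x = 8.7

// description round-trips String -> Double -> String:
let s = x.description                    // "8.7"
print(Double(s)!.description == s)       // true

// debugDescription distinguishes different Double values:
let y = x.nextDown                       // the largest Double that is smaller than x
print(x.debugDescription == y.debugDescription)   // false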
Summary:
- Most decimal numbers cannot be represented exactly as binary floating point values.
- The description and debugDescription properties of floating point types use different precisions for the conversion to a string.
- Consequently, printing an optional floating point value uses a different precision for the conversion than printing a non-optional value.
Therefore, in your case, you probably want to unwrap the optional before printing it:
let str = "8.7" if let d = Double(str) { print(d) // 8.7 }
For better control, use NSNumberFormatter or formatted output with %.<precision>f.
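For example, a minimal sketch (NSNumberFormatter is called NumberFormatter in Swift 3 and later; the formatter output depends on the current locale):

import Foundation

let x = 8.7

// Fixed number of fraction digits with a format string:
print(String(format: "%.2f", x))    // 8.70

// Or with a number formatter:
let formatter = NumberFormatter()
formatter.minimumFractionDigits = 0
formatter.maximumFractionDigits = 3
print(formatter.string(from: NSNumber(value: x)) ?? "")    // 8.7 (decimal separator depends on the locale)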
Another option would be to use (NS)DecimalNumber instead of Double (for example, for currency amounts); see, for example, the question Round Issue in Swift.
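A minimal sketch with Decimal, the Swift value type that bridges to NSDecimalNumber (Swift 3 and later syntax):

import Foundation

// Decimal stores the value in base 10, so "8.7" is represented exactly:
if let d = Decimal(string: "8.7") {
    print(d)        // 8.7
    print(d + d)    // 17.4
    print(d * 3)    // 26.1
}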