To answer your first question, see the following (slightly compressed) code from the CPython source:

#define PREC_REPR       17
#define PREC_STR        12

void
PyFloat_AsString(char *buf, PyFloatObject *v)
{
    format_float(buf, 100, v, PREC_STR);
}

void
PyFloat_AsReprString(char *buf, PyFloatObject *v)
{
    format_float(buf, 100, v, PREC_REPR);
}
Basically, repr(float) returns a string formatted to 17 significant digits, while str(float) returns a string formatted to 12 significant digits. As you might have guessed, print uses str(), and echoing a variable name in the interactive interpreter uses repr(). With only 12 digits of precision it looks like you are getting the "correct" answer, but that is only because the value you expect and the value actually stored agree to 12 digits.
Here is a brief example of the difference:
>>> str(.1234567890123)
'0.123456789012'
>>> repr(.1234567890123)
'0.12345678901230001'
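Under the hood, format_float essentially hands the precision to a C "%.*g" conversion, so you can mimic both precisions with printf-style formatting (a rough sketch, not the exact CPython code path):

>>> x = .1234567890123
>>> '%.12g' % x    # 12 significant digits, like PREC_STR / the old str()
'0.123456789012'
>>> '%.17g' % x    # 17 significant digits, like PREC_REPR / the old repr()
'0.12345678901230001'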
As for your second question, I suggest you read the following section of the Python tutorial: Floating Point Arithmetic: Issues and Limitations.
It boils down to efficiency: storing decimal (base 10) numbers in base 2 uses less memory and allows faster floating-point operations than any other representation, but you have to live with the resulting inaccuracies.
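To see that inaccuracy directly, here is a small interpreter session (Python 2.7+/3.x) showing what is actually stored when you write 0.1:

>>> from decimal import Decimal
>>> Decimal(0.1)          # the exact binary value that 0.1 is rounded to
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> (0.1).hex()           # the same value written in base-2 (hex) notation
'0x1.999999999999ap-4'
>>> 0.1 + 0.2 == 0.3      # the usual consequence of those tiny rounding errors
False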
As JBernardo noted in the comments, this behavior is different in Python 2.7 and later; the following quote from the tutorial linked above describes the difference (using 0.1 as an example):

In versions prior to Python 2.7 and Python 3.1, Python rounded this value to 17 significant digits, giving '0.10000000000000001'. In current versions, Python displays a value based on the shortest decimal fraction that rounds correctly back to the true binary value, resulting simply in '0.1'.
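On a current interpreter you can see both renderings side by side: the shortest-repr output, and the old 17-digit form if you ask for it explicitly (assuming Python 2.7+/3.1+):

>>> repr(0.1)              # shortest decimal that round-trips to the same float
'0.1'
>>> format(0.1, '.17g')    # the old 17-significant-digit rendering, on demand
'0.10000000000000001'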