Are Python floats and PostgreSQL double-precision types based on the same C implementation? That may not be the actual root of my problem, but in any case, here is what I get when I try to manipulate very small numbers in both environments:
In Python (2.7.2, compiled with GCC 4.2.1, in case it matters):
>>> float('1e-310')
1e-310
In PostgreSQL (9.1.1):
postgres# select 1e-310::double precision;
ERROR: "0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001" is out of range for type double precision
I understand that the Python float type "handles" 1e-310 while the PostgreSQL double-precision type does not. The Python and PostgreSQL docs on "float" and "double precision", respectively, both refer to the IEEE 754 standard, which is supposed to be implemented on "most platforms" (I'm on OS X Lion 10.7.3).
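Digging a bit further: 1e-310 is smaller than sys.float_info.min (the smallest normal double, about 2.225e-308), so I suspect subnormal (denormal) values are what Python accepts and PostgreSQL rejects. A quick check (just my own poking around, not authoritative):

>>> import sys
>>> x = float('1e-310')
>>> x > 0
True
>>> x < sys.float_info.min
True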
Can anyone explain what is going on here, and suggest a workaround? I would like, for example, to "reduce" the precision of the Python floats so that I can insert them into my database through a Django FloatField. (The full use case is that I read data from a file and then insert it; a sketch of what I have in mind follows.)
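Something along these lines is what I'm imagining (a minimal sketch, assuming that flushing subnormals to zero is acceptable for my data; MyModel and load_values are placeholders for my actual Django model and file-reading code):

import sys

def flush_subnormals(value):
    # Values below the smallest *normal* double are flushed to 0.0,
    # so they stay inside the range PostgreSQL accepts.
    if abs(value) < sys.float_info.min:
        return 0.0
    return value

# hypothetical usage with a Django model exposing a FloatField:
# for raw in load_values('data.txt'):
#     MyModel.objects.create(value=flush_subnormals(float(raw)))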
Some (maybe interesting) additional information in Python:
>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
>>> 1e-320.__sizeof__()
24
I really don't get the second one.
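For what it's worth, an ordinary float reports the same size, so the 24 bytes don't seem specific to subnormal values (again, just me poking around):

>>> (1.0).__sizeof__()
24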