I have code where one part of the calculation is performed with NumPy functions on longdoubles, and another part with SymPy symbolic differentiation and numerical evaluation, after which the results are combined (as SymPy Floats). SymPy evaluation can be carried out with arbitrary precision, but what precision is good enough, i.e. won't pollute the longdouble results? As far as I understand, NumPy's longdouble is actually only 80-bit extended precision, despite being called float128 on my system. Wikipedia says this about 80-bit precision:
The 80-bit format's decimal-to-binary conversion boundaries can be stated as follows: if a decimal string with at most 18 significant digits is correctly rounded to an 80-bit IEEE 754 binary floating-point value (as on input) and then converted back to the same number of significant decimal digits (as for output), the final string will exactly match the original; conversely, if an 80-bit IEEE 754 binary floating-point value is correctly converted and (nearest-)rounded to a decimal string with at least 21 significant decimal digits and then converted back to binary format, it will exactly match the original.
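To check those two boundaries empirically, here is a quick sketch. It assumes an x86 80-bit longdouble and a NumPy new enough to provide numpy.format_float_scientific (which can print longdoubles exactly), and the random test values are arbitrary, so take it as illustration rather than a definitive test:

import numpy as np

rng = np.random.default_rng(0)
for digits in (18, 21):
    failures = 0
    for _ in range(1000):
        # build a longdouble that exercises the full 64-bit significand
        x = np.longdouble(rng.random()) + np.longdouble(rng.random()) * np.longdouble(2.0**-60)
        # render 'digits' significant decimal digits, then parse back
        s = np.format_float_scientific(x, precision=digits - 1)
        if np.longdouble(s) != x:
            failures += 1
    print(digits, "digits:", failures, "round-trip failures out of 1000")

With 21 significant digits the binary -> decimal -> binary round trip should never fail; with 18 it fails for most values.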
I also dug around in an interactive prompt:
>>> numpy.finfo(numpy.double).precision
15
>>> numpy.dtype(numpy.double).itemsize
8
>>> numpy.finfo(numpy.longdouble).precision
18
>>> numpy.dtype(numpy.longdouble).itemsize
16
>>>
So the wiki gives two different boundaries depending on the direction of the round trip (18 and 21 digits), while NumPy reports a precision of 18. For plain double the wiki's decimal-to-binary boundary and NumPy's reported precision agree (15 vs. 15).
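Incidentally, both wiki numbers follow from the usual round-trip digit formulas applied to the 64-bit significand; a sketch of the arithmetic (x86 longdouble assumed):

import math
import numpy as np

fi = np.finfo(np.longdouble)
p = fi.nmant + 1  # 63 stored fraction bits + explicit integer bit = 64-bit significand

print(math.floor((p - 1) * math.log10(2)))  # 18: decimal -> binary -> decimal digits
print(math.ceil(1 + p * math.log10(2)))     # 21: binary -> decimal -> binary digits
print(fi.precision)                         # 18: the number NumPy reports

For double the same formulas give 15 and 17 (p = 53), and NumPy likewise reports the smaller one, 15.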
So, when I convert a longdouble to a SymPy Float (to do part of the calculation in SymPy), how many significant digits should I request from SymPy so that nothing is lost? 18? 21? More?
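For concreteness, this is the kind of conversion I mean (a sketch: the value 1/3, the 21-digit choice, and the use of numpy.format_float_scientific are just for illustration, not what my production code does):

import numpy as np
import sympy

x = np.longdouble(1) / np.longdouble(3)

s = np.format_float_scientific(x, precision=20)  # 21 significant digits
f = sympy.Float(s, 21)                           # SymPy Float meant to carry all of x
print(f)
print(np.longdouble(str(f)) == x)                # check nothing was lost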
Python 2.7 on 64-bit Linux (Sandy Bridge), NumPy 1.6.2, SymPy 0.7.1.rc1.