I am working on an application that does a great deal of floating-point computation. We use VC++ on Intel x86 with double precision. We claim that our calculations are accurate to n decimal digits (currently 7, but we are trying to claim 15).
We go to great efforts to validate our results against other sources whenever our results change slightly (due to code refactoring, cleanup, etc.). I know that many factors affect the overall precision: the FPU control state, the compiler/optimizer, the floating-point model, and the overall order of operations (i.e., the algorithm itself). But given the inherent uncertainty in FP calculations (e.g., 0.1 cannot be represented exactly), it seems invalid to claim any specific degree of precision for all calculations.
My question is this: is it even valid to claim accuracy for FP calculations without doing any sort of analysis (e.g., interval analysis)? If so, what claims can be made, and why?
EDIT:
So, given that the input data is accurate to, say, 10 decimal places, can anything be guaranteed about the result of arbitrary calculations, given that double precision is used? For example, if the input has 8 significant decimal digits, will the output have at least 5 significant decimal digits?
We use math libraries and are unaware of any guarantees they may or may not make. The algorithms we use have not necessarily been analyzed for accuracy in any way. And even for a given algorithm, the implementation affects the results (e.g., simply changing the order of two addition operations). Is there any inherent guarantee simply from using, say, double precision?
ANOTHER EDIT:
We validate our results empirically against other sources. So, are we just getting lucky when we achieve, say, 10-digit accuracy?