Which is better if I want to keep as much accuracy as possible when calculating with IEEE-754 floating-point values:
a = b * c / d
or
a = b / d * c
Is there any difference? If so, does it depend on the input values? And if the order matters, how is the best order determined when the rough magnitudes of the inputs are known?
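For example (values chosen arbitrarily just to illustrate the question), the two orderings can round differently, and one can even overflow where the other does not:

```python
b, c, d = 1.0, 10.0, 3.0

# Rounding difference: each operation rounds its result to the nearest
# double, so the order of * and / changes which intermediate is rounded.
print(b * c / d)   # 3.3333333333333335
print(b / d * c)   # 3.333333333333333

# Overflow difference: here b * c overflows to inf even though the
# mathematical result of b * c / d is representable.
b = c = d = 1e300
print(b * c / d)   # inf  (b * c overflows first)
print(b / d * c)   # 1e300
```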