The following expression evaluates to false in C#:
(1 + 1 + 0.85) / 3 <= 0.95
I believe the same holds in most other programming languages that implement IEEE 754, since (1 + 1 + 0.85) / 3 evaluates to 0.95000000000000007, which is greater than 0.95.
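For reference, here is a minimal C# console sketch that reproduces this (the "R" round-trip format is used to show the full precision of the double):

    using System;

    class Program
    {
        static void Main()
        {
            double result = (1 + 1 + 0.85) / 3;
            Console.WriteLine(result.ToString("R"));  // 0.95000000000000007
            Console.WriteLine(result <= 0.95);        // False
        }
    }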
However, although Excel is also supposed to implement most of IEEE 754, the following formula evaluates to TRUE in Excel 2013:
= ((1 + 1 + 0.85) / 3 <= 0.95)
Is there a specific reason for this? The article mentioned above does not describe any custom Excel behavior that could lead to this result. Can Excel be made to calculate strictly according to IEEE 754?
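For what it's worth, the observed result would be consistent with Excel rounding the intermediate value to 15 significant digits before the comparison. This is purely an assumption on my part, illustrated by the following C# sketch:

    using System;

    class ExcelGuess
    {
        static void Main()
        {
            double raw = (1 + 1 + 0.85) / 3;
            // Hypothetical: round to 15 decimal places, which for a value
            // between 0.1 and 1 is the same as 15 significant digits.
            double rounded = Math.Round(raw, 15);
            Console.WriteLine(raw <= 0.95);      // False
            Console.WriteLine(rounded <= 0.95);  // True
        }
    }

Whether Excel actually does something like this internally is exactly what I am asking.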
Note that while most Excel questions should be asked on superuser.com, this one is about floating point arithmetic, which is a common issue across programming languages. From that point of view, Excel is a programming language just like C# or Java.