Google Calculator glitch: could float vs. double be the cause?

I did this just for kicks (so it's not really a question; I can see an answer already emerging), but rather than ridicule Google's newfound inability to do math correctly (try it! according to Google, 500 000 000 000 000 000 - 500 000 000 000 000 001 = 0), I thought I would try the following in C to test a little theory.

  #include <stdio.h>
  #include <stdlib.h>

  int main(void) {
      char* a = "399999999999999";
      char* b = "399999999999998";
      float da = atof(a);
      float db = atof(b);
      printf("%s - %s = %f\n", a, b, da - db);

      a = "500000000000002";
      b = "500000000000001";
      da = atof(a);
      db = atof(b);
      printf("%s - %s = %f\n", a, b, da - db);
      return 0;
  }

When you run this program, you get the following:

  399999999999999 - 399999999999998 = 0.000000
  500000000000002 - 500000000000001 = 0.000000

It would seem that Google uses plain 32-bit floating-point precision (hence the error), and if you switch float to double in the code above, the problem goes away! Could that be it?


+4
7 answers

In C#, try (double.MaxValue == (double.MaxValue - 100)); you will get true...

but this is how it is supposed to work:

http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems

Thinking about this: you have 64 bits representing a number greater than 2^64 (double.MaxValue), so some inaccuracy is to be expected.

+2

For more of this kind of nonsense, see this nice article about the Windows calculator:

When you change the insides, nobody notices

The innards of Calc - the arithmetic engine - have been completely thrown away and rewritten from scratch. The standard IEEE floating-point library was replaced with an arbitrary-precision arithmetic library. This was done after people kept writing ha-ha articles about how Calc couldn't do decimal arithmetic correctly, that for example computing 10.21 - 10.2 resulted in 0.0100000000000016.

+4

It would seem that Google uses plain 32-bit floating-point precision (hence the error), and if you switch float to double in the code above, the problem goes away! Could that be it?

No, you have just postponed the problem. Doubles exhibit the same issue, only with larger numbers.

+2

@ebel

Thinking about this: you have 64 bits representing a number greater than 2^64 (double.MaxValue), so some inaccuracy is to be expected.

2^64 is not the maximum value of a double. 2^64 is the number of unique values a double (or any other 64-bit type) can hold. Double.MaxValue is 1.79769313486232e308.

Inaccuracy with floating-point numbers doesn't come from representing values greater than Double.MaxValue (which is impossible, except for Double.PositiveInfinity). It comes from the desired range of values being simply too large to fit into the data type, so we give up precision in exchange for a greater range. In essence, we discard significant digits in return for a larger exponent range.

@DrPizza

Not quite; IEEE encodings use multiple encodings for the same values. In particular, a NaN is represented as an exponent of all 1-bits together with any non-zero value in the mantissa. Thus there are 2^52 NaNs for doubles and 2^23 NaNs for singles.

True, I did not take the repeated encodings into account. Actually there are 2^52 - 1 NaNs for doubles and 2^23 - 1 for singles. :P

+1

2^64 is not the maximum value of a double. 2^64 is the number of unique values a double (or any other 64-bit type) can hold. Double.MaxValue is 1.79769313486232e308.

Not quite; IEEE encodings use multiple encodings for the same values. In particular, a NaN is represented as an exponent of all 1-bits together with any non-zero value in the mantissa. Thus there are 2^52 NaNs for doubles and 2^23 NaNs for singles.

0

True, I did not take the repeated encodings into account. Actually there are 2^52 - 1 NaNs for doubles and 2^23 - 1 for singles. :P

Doh, forgot to subtract infinity.

0

The rough rule of thumb I learned is that 32-bit floats give you about 5 digits of precision and 64-bit floats give you about 15 digits. This will of course vary with how the floats are encoded, but it's a pretty good starting point.

0

Source: https://habr.com/ru/post/1276432/

