I did this just for kicks (so it's not quite a question; I can see the descent is already happening), but rather than dwell on Google's newfound inability to do the math correctly (note: according to Google, 500 000 000 000 000 000 - 500 000 000 000 000 001 = 0), I thought I would try the following in C to test a little theory.
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char* a = "399999999999999";
    char* b = "399999999999998";
    float da = atof(a);
    float db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);

    a = "500000000000002";
    b = "500000000000001";
    da = atof(a);
    db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);
    return 0;
}
When you run this program, you get the following output:
399999999999999 - 399999999999998 = 0.000000
500000000000002 - 500000000000001 = 0.000000
It would seem that Google uses simple 32-bit floating-point precision (hence the error): if you switch float to double in the code above, the problem goes away! Could that be it?
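For what it's worth, here is a minimal sketch of that double version (same inputs, only the type changed). The reasoning behind it: a float's 24-bit significand can only represent integers exactly up to 2^24 (about 16.7 million), so these 15-digit values get rounded before the subtraction, while a double's 53-bit significand covers integers exactly up to about 9 x 10^15, which is enough here.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Same inputs as before, parsed into doubles instead of floats. */
    const char* a = "500000000000002";
    const char* b = "500000000000001";
    double da = atof(a);  /* atof returns double; no truncation to float this time */
    double db = atof(b);
    /* With 53 bits of significand both values are exact,
       so the difference should print as 1.000000 rather than 0.000000. */
    printf("%s - %s = %f\n", a, b, da - db);
    return 0;
}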