Why does JavaScript think that 354224848179262000000 and 354224848179261915075 are equal?

So, I started out trying to find the 100th Fibonacci number using a recursive function plus a memoizing helper, with the following code.

    // Attach a generic memoizer to every function: results are cached
    // by the stringified argument list, so repeated calls are O(1).
    Function.prototype.memoize = function () {
        var originalFunction = this,
            slice = Array.prototype.slice,
            cache = {};
        return function () {
            var key = slice.call(arguments);
            if (key in cache) {
                return cache[key];
            } else {
                return cache[key] = originalFunction.apply(this, key);
            }
        };
    };

    var fibonacci = function (n) {
        return n === 0 || n === 1 ? n : fibonacci(n - 1) + fibonacci(n - 2);
    }.memoize();

    console.log(fibonacci(100));

Now, as you can see, running this script logs 354224848179262000000, while the hundredth Fibonacci number is actually 354224848179261915075 according to WolframAlpha.

Now, my question is this: why is the number calculated incorrectly, even though the algorithm is perfectly sound? My suspicion points toward JavaScript itself, because according to the Google calculator [1] the two numbers are equal.
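JavaScript agrees with the calculator; a quick console check (my illustration, not part of the original code) shows that both literals parse to the very same double:

    console.log(354224848179262000000 === 354224848179261915075); // true
    console.log(354224848179261915075); // logs 354224848179262000000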

What is it about JavaScript that causes this error? The number is safely within the maximum value of an IEEE 754 double, which is 1.7976931348623157e+308.

[1] In case this was a bug on my platform, I tested it on both Chromium and Firefox on Ubuntu.

+5
2 answers

As your numbers get bigger, you lose precision. The maximum safe integer in JavaScript is actually Number.MAX_SAFE_INTEGER === 9007199254740991.

For each additional bit the integer requires beyond that, you lose one bit of precision, because the trailing bit can no longer be stored and is assumed to be zero.
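To make that cliff concrete, here is a minimal check (my addition, assuming an engine that exposes Number.MAX_SAFE_INTEGER and Number.isSafeInteger):

    // 2^53 - 1: the largest integer whose neighbours are all distinct doubles.
    console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
    // One step past 2^53, adjacent integers start to collide:
    console.log(9007199254740992 === 9007199254740993); // true
    // fib(100) is far beyond that threshold:
    console.log(Number.isSafeInteger(354224848179262000000)); // false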

According to IEEE 754, 354224848179262000000 has the following binary representation:

    0 10001000011 0011001100111101101101110110101001111100010110010110

The exponent, 10001000011, is 1091 in decimal, which gives 68 once you subtract the bias of 1023. This means 68 bits are needed to represent the significand, but since only 52 of them can actually be stored, the last 16 bits are assumed to be zero. Any calculation that falls into those 16 bits has no effect.
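If you want to see that bit pattern from JavaScript itself, here is one way to do it (a sketch of my own, not part of the answer; doubleToBits is a name made up for illustration), reinterpreting the Number's eight bytes through a DataView:

    // Print the sign, exponent and significand fields of a double.
    function doubleToBits(x) {
        var view = new DataView(new ArrayBuffer(8));
        view.setFloat64(0, x); // big-endian by default
        var bits = "";
        for (var i = 0; i < 8; i++) {
            bits += view.getUint8(i).toString(2).padStart(8, "0");
        }
        // 1 sign bit | 11 exponent bits | 52 significand bits
        return bits[0] + " " + bits.slice(1, 12) + " " + bits.slice(12);
    }

    console.log(doubleToBits(354224848179262000000));
    // 0 10001000011 0011001100111101101101110110101001111100010110010110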

+12

Because IEEE 754 double-precision floating point provides only 15 to 17 significant decimal digits of precision.
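If you need the exact value, one workaround (my addition, not from the answer; it assumes BigInt support, available since ES2020) is to do the arithmetic with BigInt values, which are not limited to 53 bits:

    // Iterative Fibonacci over arbitrary-precision integers;
    // fibonacciBig is a hypothetical name introduced for illustration.
    function fibonacciBig(n) {
        var a = 0n, b = 1n;
        for (var i = 0; i < n; i++) {
            var next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    console.log(fibonacciBig(100).toString()); // 354224848179261915075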

+4

Source: https://habr.com/ru/post/1200864/

