As far as the language definition is concerned, JavaScript numbers are 64-bit floating point.
(Except for bitwise operations, which use 32-bit integers. I believe the latter holds even on a 64-bit CPU; for example, 1 << 33 should be 2, for backward compatibility, even if the CPU could do better.)
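That behaviour is easy to confirm in a console; the snippet below only uses standard operators:

// Shift counts are taken modulo 32 and operands are truncated to 32 bits.
console.log(1 << 33);                   // 2, because 33 % 32 === 1
console.log((Math.pow(2, 32) + 5) | 0); // 5, the value is wrapped to 32 bits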
However, if the compiler can prove that a number is only ever used as an integer, it may prefer to implement it as one for efficiency, for example:
for (var i = 0; i < Math.pow(2, 40); i++)
console.log(i)
It is clearly desirable to implement this with integers, and in this case correctness requires 64-bit integers, since the loop bound exceeds the 32-bit range.
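As a quick sanity check that 32 bits would not suffice for this bound (| 0 truncates its operand to a 32-bit integer):

console.log(Math.pow(2, 40));     // 1099511627776
console.log(Math.pow(2, 40) | 0); // 0, the low 32 bits are all zero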
Now consider this case:
for (var i = 0; i < Math.pow(2, 60); i++)
console.log(i)
If implemented with floating-point numbers, the above will not behave correctly, since floating point cannot exactly represent integers beyond fifty-three bits: once i reaches 2^53, incrementing it no longer changes its value.
If implemented with 64-bit integers, it works fine (apart from taking an inconveniently long time).
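The 53-bit limit itself is easy to demonstrate:

// Above 2^53, consecutive integers are no longer distinguishable as doubles,
// so the increment silently stops making progress.
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53));         // true
console.log(Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1); // true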
Is a JavaScript compiler allowed (by both the letter of the standard and compatibility with existing code) to use 64-bit integers in cases where they give different, but more accurate, results than floating point?
Similarly, if a JavaScript implementation wanted to provide arrays with more than four billion elements, would it be allowed to implement array lengths and indices as 64-bit integers?
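For reference, the current standard caps array length below 2^32, so something like the following should throw a RangeError in today's engines:

try {
  var a = new Array(Math.pow(2, 32)); // length must be an integer below 2^32
} catch (e) {
  console.log(e.name); // "RangeError" (the message varies by engine)
}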