Midpoint "rounding" when working with large numbers?

So I was trying to understand JavaScript's behavior when working with large numbers. Consider the following (tested in both Firefox and Chrome):

 console.log(9007199254740993) // 9007199254740992
 console.log(9007199254740994) // 9007199254740994
 console.log(9007199254740995) // 9007199254740996
 console.log(9007199254740996) // 9007199254740996
 console.log(9007199254740997) // 9007199254740996
 console.log(9007199254740998) // 9007199254740998
 console.log(9007199254740999) // 9007199254741000

Now, I know why it prints the "wrong" numbers: it converts them to floating-point representations and rounds them to the nearest representable value. But I'm not quite sure why it picks these particular numbers. My guess is that it tries to round to the nearest "even" number, and since 9007199254740996 is divisible by 4 while 9007199254740994 is not, it considers 9007199254740996 to be "more even".
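
For reference, here is a quick check (plain JavaScript, nothing engine-specific) suggesting that the representable values in this range are spaced 2 apart:

 var limit = Math.pow(2, 53)                  // 9007199254740992
 console.log(9007199254740993 === limit)      // true:  ...993 is stored as ...992
 console.log(9007199254740995 === limit + 2)  // false: ...995 is stored as ...996, not ...994
 console.log(limit + 2)                       // 9007199254740994 - adjacent values here are 2 apart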

  • What algorithm is used to determine the internal representation? I assume it is an extension of ordinary midpoint rounding (round half to even, the default rounding mode in IEEE 754 operations).
  • Is this behavior specified as part of the ECMAScript standard, or is it implementation-dependent?
javascript floating-point floating-point-precision floating-point-conversion
Jun 24 '14 at 23:35
3 answers

As Mark Dickinson noted in a comment on the question, the ECMA-262 ECMAScript Language Specification requires the use of IEEE 754 64-bit binary floating point to represent the Number type. The relevant rounding rule is: "Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen ...".

These rules are general: they apply to the results of arithmetic as well as to the values of literals.
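
As a minimal illustration of that point, a literal and the corresponding arithmetic expression land on the same double:

 console.log(9007199254740993)      // 9007199254740992 - the literal is rounded
 console.log(9007199254740992 + 1)  // 9007199254740992 - the arithmetic result is rounded by the same rule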

Listed below are all the numbers in the range relevant to the question that are exactly representable in IEEE 754 64-bit binary floating point. Each is shown as its decimal value as well as a hexadecimal representation of its bit pattern. A number with an even significand has an even hexadecimal digit in the lowest-order position of its bit pattern.

 9007199254740992 bit pattern 0x4340000000000000
 9007199254740994 bit pattern 0x4340000000000001
 9007199254740996 bit pattern 0x4340000000000002
 9007199254740998 bit pattern 0x4340000000000003
 9007199254741000 bit pattern 0x4340000000000004

Each of the even inputs is one of these numbers and rounds to that number. Each of the odd inputs is exactly halfway between two of them and rounds to the one with the even significand. As a result, the odd inputs round to 9007199254740992, 9007199254740996, and 9007199254741000.
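
In case you want to reproduce the list above, a Number's bit pattern can be dumped with a DataView; the toBitPattern helper below is only an illustrative sketch, not something from the spec:

 function toBitPattern(x) {
   var view = new DataView(new ArrayBuffer(8));
   view.setFloat64(0, x);                       // big-endian by default
   var hi = ("00000000" + view.getUint32(0).toString(16)).slice(-8);
   var lo = ("00000000" + view.getUint32(4).toString(16)).slice(-8);
   return "0x" + hi + lo;
 }
 console.log(toBitPattern(9007199254740994))  // 0x4340000000000001
 console.log(toBitPattern(9007199254740995))  // 0x4340000000000002 (the odd literal rounded to even)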

Jun 25 '14 at 3:25

Patricia Shanahan's answer helped a lot and explained my primary question. As for the second part of the question, whether this behavior is implementation-dependent: it turns out that it is, but in a slightly different way than I originally thought. Quoting ECMA-262 5.1 § 7.8.3:

… the rounded value must be the Number value for the MV (as specified in 8.5), unless the literal is a DecimalLiteral and the literal has more than 20 significant digits, in which case the Number value may be either the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit or the Number value for the MV of a literal produced by replacing each significant digit after the 20th with a 0 digit and then incrementing the literal at the 20th significant digit position.

In other words, an implementation is allowed to ignore everything after the 20th significant digit. Consider this:

 console.log(9007199254740993.00001) 

Both Chrome and Firefox output 9007199254740994, but Internet Explorer outputs 9007199254740992 because it chooses to ignore everything after the 20th digit. Interestingly, this does not appear to be standards-compliant behavior (at least as I read the standard): it should interpret the literal the same way as 9007199254740993.0001, but it does not.
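
To make the two permitted substitutions from the quote concrete, here is what each corresponding literal evaluates to (both have at most 20 significant digits, so the exact rounding rule from the first answer applies):

 console.log(9007199254740993.0000)  // 9007199254740992 - digits after the 20th replaced with 0
 console.log(9007199254740993.0001)  // 9007199254740994 - ...and then the 20th digit incremented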

Jun 26 '14 at 18:28

JavaScript represents numbers as 64-bit floating-point values. This is defined in the standard.

http://en.wikipedia.org/wiki/Double-precision_floating-point_format

So this has nothing to do with midpoint rounding.

As a side note, every 32-bit integer has an exact representation in the double-precision floating-point format.
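
A couple of quick checks of that side note (these values are far below 2^53, so no rounding occurs):

 console.log(2147483647 + 1)   // 2147483648 - exact, unlike the 2^53 examples in the question
 console.log(-2147483648 - 1)  // -2147483649 - also exact; doubles cover integers well past 32 bits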

Ok, since you're asking for the exact algorithm, I checked how the Chrome V8 engine works. V8 defines a StringToDouble function that calls InternalStringToDouble in the following file:

https://github.com/v8/v8/blob/master/src/conversions-inl.h#L415

And this, in turn, calls the Strtod function defined here:

https://github.com/v8/v8/blob/master/src/strtod.cc
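
This is not V8-specific, but as a quick sanity check: converting a string to a Number at runtime is rounded the same way as the numeric literals above, so (assuming an IEEE 754-conforming engine) the results match the question's output:

 console.log(Number("9007199254740993"))      // 9007199254740992
 console.log(parseFloat("9007199254740995"))  // 9007199254740996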

Jun 24 '14 at 23:38