JavaScript: efficient integer arithmetic

I am currently writing a compiler for a small language that compiles to JavaScript. In this language I would really like to have integers, but JavaScript only has Number, which is a double-precision floating-point value. So what is the most efficient way to implement integers in JavaScript? And how efficient is that compared to just using numbers?

In particular, overflow behavior should be compatible with other languages: for example, adding one to INT_MAX should give INT_MIN. Integers must be either 32-bit or 64-bit.

+6
7 answers

I found this implementation of BigIntegers in Javascript: http://www-cs-students.stanford.edu/~tjw/jsbn/

Perhaps this will help?

Edit: In addition, the Google Closure library implements 64-bit integers: http://code.google.com/p/closure-library/source/browse/trunk/closure/goog/math/long.js

These are essentially just convenience objects, though; they won't do anything to improve the efficiency of the underlying data type.
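
For a sense of what such wrapper objects do internally, here is a minimal sketch (makeInt64 and addInt64 are illustrative names, not the API of either library): a 64-bit value is stored as two 32-bit halves, and every operation has to propagate carries by hand, which is why it cannot match a primitive number for speed.

    // Store a 64-bit integer as two signed 32-bit halves.
    function makeInt64(high, low) {
      return { high: high | 0, low: low | 0 };
    }

    // Two's-complement 64-bit addition: add the low halves as
    // unsigned 32-bit values, then carry into the high halves.
    function addInt64(a, b) {
      var lowSum = (a.low >>> 0) + (b.low >>> 0);
      var carry = lowSum > 0xFFFFFFFF ? 1 : 0;
      return makeInt64(a.high + b.high + carry, lowSum);
    }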

+3

So what is the most efficient way to implement integers in JavaScript?

The primitive number type is as efficient as it gets. Many modern JS engines use JIT compilation, so it should be nearly as fast as native floating-point arithmetic.

In particular, overflow behavior should be compatible with other languages: for example, adding one to INT_MAX should give INT_MIN. Integers must be either 32-bit or 64-bit.

You can get the semantics of standard 32-bit integer arithmetic by noting that JavaScript converts numbers to 32-bit integers for bitwise operations: >>> (unsigned right shift) converts its operand to an unsigned 32-bit integer, while the other operators (all other shifts and bitwise AND/OR) convert their operands to signed 32-bit integers. For instance (a short runnable sketch follows the list):

  • 0xFFFFFFFF | 0 gives -1 (signed reinterpretation)
  • (0xFFFFFFFF + 1) | 0 gives 0 (overflow wraps around)
  • -1 >>> 0 gives 0xFFFFFFFF (unsigned)
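
Building on those coercions, a wrapping 32-bit add with the INT_MAX behavior the question asks for could look like this minimal sketch (the constant and function names are illustrative):

    var INT_MAX = 0x7FFFFFFF;  //  2147483647
    var INT_MIN = -0x80000000; // -2147483648

    // The addition itself happens exactly in doubles;
    // | 0 then wraps the result to a signed 32-bit integer.
    function add32(a, b) {
      return (a + b) | 0;
    }

    console.log(add32(INT_MAX, 1) === INT_MIN); // true: wraps around
    console.log(-1 >>> 0);                      // 4294967295, i.e. 0xFFFFFFFF
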
+8

All numbers are just numbers; there is no separate integer type. JavaScript has no byte or int. Either work within that restriction, or use a lower-level language to write your compiler in.

The only remaining option, if you really want this, is to modify one of the JavaScript engines (e.g. V8) and extend JS to allow access to C's native bytes.

+1

On a modern processor, if you limit your integer values to the range ±2^52, then using a double will be only slightly less efficient than using a long.

The IEEE 754 double type has 53 bits of mantissa, so it can easily represent the full 32-bit integer range, and then some.
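
The 53-bit boundary is easy to demonstrate (an illustration, nothing more):

    // Integers are represented exactly in a double up to 2^53.
    var limit = Math.pow(2, 53); // 9007199254740992

    console.log(limit - 1 === limit - 2); // false: still exact below 2^53
    console.log(limit === limit + 1);     // true: precision runs out here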

In any case, the rest of your JavaScript will be much more of a bottleneck than the individual processor instructions used for the arithmetic.

+1

Well, you can either pick JavaScript's number type, which is most likely computed with your processor's native primitives, or you can pick a full layer of operators, functions and whatnot built on top of an emulated series of bits...?

... If performance and efficiency are your concern, stick with doubles.

0
source

The most efficient approach would be to use plain numbers and wrap the operations so that arithmetic on simulated integers always yields an integer result. For example, division must be truncated, and multiplication must be checked for overflow or masked back into the integer range.
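
For illustration, such wrapped operations might look like this (a sketch only; div32 and mul32 are made-up names, and newer engines provide Math.imul for the multiplication case):

    // Integer division: truncate toward zero, as C-style division does.
    function div32(a, b) {
      return (a / b) | 0;
    }

    // (a * b) | 0 alone can lose low bits once the exact product
    // exceeds 2^53, so split one operand into 16-bit halves and
    // combine the partial products modulo 2^32.
    function mul32(a, b) {
      var hi = (a >>> 16) & 0xFFFF;
      var lo = a & 0xFFFF;
      return ((hi * b << 16) + lo * b) | 0;
    }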

This, of course, means that floating-point operations in your language will be significantly faster than integer operations, largely defeating the purpose of having an integer type in the first place.

0

Note that ECMA-262 Edition 3 added Number.prototype.toFixed, which takes a precision argument saying how many digits after the decimal point to show. Use this method well, and you will not mind the mismatch between finite base-2 precision and the "arbitrary" or "appropriate" base-10 precision we use every day. - Brendan Eich
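
For example (an illustration of the method the quote refers to):

    // toFixed renders a number with a fixed count of decimal digits.
    var third = 1 / 3;
    console.log(third.toFixed(4));       // "0.3333"

    // It also hides the usual base-2 round-off noise:
    console.log((0.1 + 0.2).toFixed(2)); // "0.30", not 0.30000000000000004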

0

Source: https://habr.com/ru/post/886181/

