The fastest way to work with very large numbers

Is there a way to intelligently work with very large integers (millions or billions of digits)? The operations that I need to perform are simply +, -, *, and possibly /.

I am looking for algorithms to perform the above operations in a reasonable amount of time (for example, up to 1 hour on a modern PC). I am not opposed to using any type of representation for numbers, but if I need a different representation for each operation, then the conversion between different representations should also be done in a reasonable amount of time.

All the big-number libraries I have looked at break down completely at these sizes. Is this a sign that no such algorithms exist, or are these libraries' representations/implementations simply not optimized for numbers this large?

EDIT: A 1-hour limit is probably not realistic. I gave this number because a simple loop over a billion iterations should take less than that, and I was hoping for an algorithm that runs in O(n) time. Does a 24-hour limit seem more acceptable?
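For a rough sense of scale (this back-of-the-envelope estimate is my own illustration, not part of the original question): with n ≈ 10^9 digits, a schoolbook O(n^2) multiply needs on the order of 10^18 digit operations, Karatsuba (~O(n^1.585), used by many general-purpose bignum libraries) needs ~10^14, while an FFT-based O(n log n) multiply needs only ~3×10^10 — the difference between years, days, and minutes at roughly 10^9 simple operations per second.

```python
import math

n = 10**9            # number of decimal digits
ops_per_sec = 1e9    # rough guess for simple operations per second on a modern PC

estimates = {
    "schoolbook O(n^2)":     n**2,
    "Karatsuba ~O(n^1.585)": n**1.585,
    "FFT-based O(n log n)":  n * math.log2(n),
}

for name, ops in estimates.items():
    hours = ops / ops_per_sec / 3600
    print(f"{name:25s} ~{ops:.2e} ops ~ {hours:.2e} hours")
```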

1 answer

You can take a look at the DecInt Python class.

It is optimized for very large decimal integers. (The numbers are stored in a representation that allows conversion to a decimal string in O(n) time.)
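To illustrate the general idea (a simplified sketch of my own, not DecInt's actual implementation): if the digits are stored in blocks of k decimal digits, i.e. in base 10^k, then producing the decimal string is just zero-padded formatting of each block, which is linear in the number of digits — unlike a binary representation, where the base conversion itself is expensive.

```python
# Simplified sketch of a base-10**K ("decimal limb") representation.
# This is NOT DecInt's actual code, just the idea behind it.

K = 4            # digits per block; real implementations use larger blocks
BASE = 10**K

def to_blocks(s: str) -> list[int]:
    """Split a decimal string into little-endian base-10**K blocks."""
    blocks = []
    for i in range(len(s), 0, -K):
        blocks.append(int(s[max(0, i - K):i]))
    return blocks

def to_decimal(blocks: list[int]) -> str:
    """Convert back to a decimal string in O(number of digits)."""
    parts = [str(blocks[-1])]                              # most significant block, no padding
    parts += [str(b).zfill(K) for b in reversed(blocks[:-1])]
    return "".join(parts)

print(to_blocks("123456789"))               # [6789, 2345, 1]
print(to_decimal(to_blocks("123456789")))   # 123456789
```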

It can perform the required operations (see the usage sketch after this list), including:

  • Multiplication in O(n ln(n))
  • Division in O(n ln(n)^2)
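As a rough usage sketch — the import path, constructor signature, and the exact set of supported operators below are assumptions, not verified against the real module, so check the DecInt source for its actual API:

```python
# Hypothetical usage sketch -- import path, constructor arguments, and
# operator support are assumptions, not verified behaviour.
from DecInt import DecInt

a = DecInt("1" + "0" * 1_000_000)   # 10**1_000_000, built from a digit string
b = DecInt(12345678901234567890)    # or from an ordinary Python int

c = a * b     # multiplication (reportedly O(n ln(n)))
d = c - b     # subtraction
q = c // a    # division (reportedly O(n ln(n)^2)), if floor division is supported

print(str(c)[:60])   # decimal-string conversion is what the representation is optimized for
```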
