I am the author of the DecInt (Decimal Integer) library, so I will make a few comments.
The DecInt library was specifically designed to work with very large integers that need to be converted to decimal format. The problem with converting to decimal format is that most arbitrary-precision libraries store values in binary format. That is the fastest and most memory-efficient approach, but the conversion from binary to decimal is usually slow. Python's conversion between binary and decimal uses an O(n^2) algorithm and quickly becomes slow.
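To see where the O(n^2) cost comes from, here is a minimal sketch (not DecInt's or CPython's actual code) of the naive binary-to-decimal algorithm: each `divmod` by 10 touches all remaining digits of the number, and it runs once per output digit, giving quadratic total work.

```python
def to_decimal_naive(n):
    # Peel off one decimal digit at a time. Each divmod scans the
    # whole remaining number, so for an n-digit value this does
    # O(n) work O(n) times: O(n^2) overall.
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 10)
        digits.append(str(r))
    return "".join(reversed(digits))
```

Real implementations peel off many digits per division, but the asymptotic behavior of the straightforward approach is the same.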
DecInt uses a large decimal radix (usually 10^250) and stores the very large number in 250-digit blocks. Converting a very large number to decimal format now runs in O(n).
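A minimal sketch of the idea, not DecInt's actual internals (the names `BLOCK_DIGITS`, `to_blocks`, and `blocks_to_str` are my own): once the number is held as base-10^250 blocks, each block renders to text independently, so producing the decimal string is linear in the number of digits.

```python
BLOCK_DIGITS = 250
BASE = 10 ** BLOCK_DIGITS

def to_blocks(n):
    # Split n into little-endian blocks of 250 decimal digits.
    # (Shown only for the demo; DecInt keeps numbers in this form
    # throughout, so this conversion is not on the hot path.)
    blocks = []
    while n:
        n, r = divmod(n, BASE)
        blocks.append(r)
    return blocks or [0]

def blocks_to_str(blocks):
    # The most significant block prints as-is; every other block is
    # zero-padded to exactly 250 digits. Each block is handled once,
    # so the whole conversion is O(n) in the digit count.
    parts = [str(blocks[-1])]
    parts += [str(b).zfill(BLOCK_DIGITS) for b in reversed(blocks[:-1])]
    return "".join(parts)
```

The key point is that no arithmetic on the full-width number is needed at output time; the decimal digits are already there, block by block.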
Naive, or grade-school, multiplication has an O(n^2) running time. Python uses Karatsuba multiplication, which has an O(n^1.585) running time. DecInt uses a combination of Karatsuba, Toom-Cook, and Nussbaumer convolution to get an O(n*ln(n)) running time.
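For reference, here is a minimal sketch of Karatsuba's trick (working on Python ints split at a bit boundary for simplicity; DecInt itself operates on decimal blocks): splitting each operand in half would naively need four half-size products, but Karatsuba gets the cross term from just one extra product, so only three recursions are needed, giving the O(n^1.585) bound.

```python
def karatsuba(x, y):
    # Base case: fall back to the builtin for small operands.
    if x < 1000 or y < 1000:
        return x * y
    # Split both operands at bit position n: x = hi_x*2^n + lo_x.
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)
    a = karatsuba(hi_x, hi_y)                  # high * high
    b = karatsuba(lo_x, lo_y)                  # low * low
    c = karatsuba(hi_x + lo_x, hi_y + lo_y)    # (hi+lo) * (hi+lo)
    # c - a - b equals the cross term hi_x*lo_y + lo_x*hi_y,
    # obtained with three multiplications instead of four.
    return (a << (2 * n)) + ((c - a - b) << n) + b
```

Toom-Cook generalizes the same idea to more than two pieces, and FFT-style convolution methods such as Nussbaumer's push the exponent down to the quasi-linear regime.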
Although DecInt has much higher overhead, the combination of O(n*ln(n)) multiplication and O(n) conversion will eventually be faster than Python's O(n^1.585) multiplication and O(n^2) conversion.
Since most calculations do not require every result to be displayed in decimal format, almost every arbitrary-precision library uses binary internally, as that simplifies the calculations. DecInt targets a very small niche. For sufficiently large numbers, DecInt will be faster at multiplication and division than native Python, but if you are after raw performance, a library such as GMPY will be the fastest.
I'm glad you found DecInt useful.