Newton-Raphson Division with Large Integers

I'm writing a BigInt class as a programming exercise. It stores a vector of two's-complement signed ints as digits in base 65536 (so that 32-bit multiplications don't overflow; I'll increase the base once I have everything working).

All the basic mathematical operations are coded, with one problem: division is painfully slow with the basic algorithm I managed to come up with. (It works somewhat like binary long division, one quotient digit at a time... I won't post it unless someone wants to see it.)

Instead of my slow algorithm, I want to use Newton-Raphson to find the (shifted) reciprocal and then multiply (and shift). I think I have the basics down: you feed the formula (x1 = x0 * (2 - x0 * divisor)) a good initial guess, and after a number of iterations x converges to the reciprocal. That part seems easy enough... but I'm running into problems when I try to apply the formula to large integers:

Problem 1:

Because I'm working with integers... well... I can't use fractions. Apparently this makes x always diverge (x0 * divisor has to be < 2, it seems?). My intuition tells me there must be some change to the equation that lets it work on integers (to some precision), but I'm struggling to figure out what it is. (My lack of math skills is beating me here.) It seems like I need an equivalent equation where instead of d I have d * [base ^ somePower]? Is there some form of (x1 = x0 * (2 - x0 * d)) that works with integers?
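To make the problem concrete, here is a tiny standalone sketch (plain double and long long, not my bigint class) of what I mean: the textbook iteration converges nicely with fractions, but collapses as soon as everything is forced to be an integer:

    #include <cstdio>

    int main() {
        // Floating point: x = x * (2 - x * d) converges to 1/d.
        double d = 4.0, x = 0.3;            // initial guess for 1/4
        for (int i = 0; i < 5; ++i) {
            x = x * (2.0 - x * d);
            printf("float x%d = %.20f\n", i + 1, x);
        }

        // Integers: the same formula breaks down, because 2 - x*d <= 0
        // as soon as x >= 1 and d >= 2, so x goes negative and diverges.
        long long di = 4, xi = 1;           // best whole-number "guess" for 1/4
        for (int i = 0; i < 5; ++i) {
            xi = xi * (2 - xi * di);
            printf("int   x%d = %lld\n", i + 1, xi);
        }
        return 0;
    }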

Problem 2:

When I use Newton's formula to find the reciprocal of certain numbers, the result ends up a tiny fraction below what the answer should be... for example, when trying to find the reciprocal of 4 (in decimal):

    x0 = 0.3
    x1 = 0.24
    x2 = 0.2496
    x3 = 0.24999936
    x4 = 0.2499999999983616
    x5 = 0.24999999999999999999998926258176

If I were representing the numbers in base 10, I'd want the result to be 25 (remembering to shift the product right by 2 digits). With some reciprocals, such as 1/3, you can simply truncate the result once you know you have enough precision. But how do I pull the correct answer out of a result like the one above?

Sorry if this is too vague or if I'm asking too much. I've been through Wikipedia and every article I could find on Google, but I feel like I'm banging my head against a wall. I'd appreciate any help anyone can give me!

...

Edit: I got the algorithm working, although it's much slower than I expected. I actually lose a lot of speed compared to my old algorithm, even on numbers with thousands of digits... I'm still missing something. It isn't a problem with the multiplication, which is very fast (I do use the Karatsuba algorithm).

For anyone interested, here is my current iteration of the Newton-Raphson algorithm:

    bigint operator/(const bigint& lhs, const bigint& rhs) {
        if (rhs == 0)
            throw overflow_error("Divide by zero exception");

        bigint dividend = lhs;
        bigint divisor = rhs;
        bool negative = false;
        if (dividend < 0) { negative = !negative; dividend.invert(); }
        if (divisor < 0)  { negative = !negative; divisor.invert(); }

        // Fixed-point scale: the reciprocal is computed as x ~ 2^k / divisor
        int k = dividend.numBits() + divisor.numBits();
        bigint pow2 = 1;
        pow2 <<= k + 1;

        bigint x = dividend - divisor;   // initial guess
        bigint lastx = 0;
        bigint lastlastx = 0;

        // Newton-Raphson: x <- (x * (2^(k+1) - x * divisor)) >> k
        while (1) {
            x = (x * (pow2 - x * divisor)) >> k;
            if (x == lastx || x == lastlastx)   // converged, or alternating between two values
                break;
            lastlastx = lastx;
            lastx = x;
        }

        bigint quotient = dividend * x >> k;
        if (dividend - (quotient * divisor) >= divisor)   // fix possible off-by-one
            quotient++;

        if (negative)
            quotient.invert();
        return quotient;
    }

And here is my old (really ugly) algorithm, which is faster:

    bigint operator/(const bigint& lhs, const bigint& rhs) {
        if (rhs == 0)
            throw overflow_error("Divide by zero exception");

        bigint dividend = lhs;
        bigint divisor = rhs;
        bool negative = false;
        if (dividend < 0) { negative = !negative; dividend.invert(); }
        if (divisor < 0)  { negative = !negative; divisor.invert(); }

        bigint remainder = 0;
        bigint quotient = 0;

        // Process the dividend one base-65536 digit at a time
        while (dividend.value.size() > 0) {
            remainder.value.insert(remainder.value.begin(), dividend.value.at(dividend.value.size() - 1));
            remainder.value.push_back(0);
            remainder.unPad();
            dividend.value.pop_back();

            if (divisor > remainder) {
                quotient.value.push_back(0);
            } else {
                // Build the largest digit with count * divisor <= remainder, bit by bit
                int count = 0;
                int i = MSB;
                bigint value = 0;
                while (i > 0) {
                    bigint increase = divisor * i;
                    bigint next = value + increase;
                    if (next <= remainder) {
                        value = next;
                        count += i;
                    }
                    i >>= 1;
                }
                quotient.value.push_back(count);
                remainder -= value;
            }
        }

        // Digits were produced most-significant first; reverse them
        for (int i = 0; i < quotient.value.size() / 2; i++) {
            int swap = quotient.value.at(i);
            quotient.value.at(i) = quotient.value.at((quotient.value.size() - 1) - i);
            quotient.value.at(quotient.value.size() - 1 - i) = swap;
        }

        if (negative)
            quotient.invert();
        quotient.unPad();
        return quotient;
    }
3 answers

First of all, you can implement division in O(n^2) time with a reasonable constant, so it's not much slower than naive multiplication. However, if you use a Karatsuba-like algorithm, or even FFT-based multiplication, then you really can speed up your division with Newton-Raphson.

The Newton-Raphson iteration to calculate the reciprocal of x is q[n+1]=q[n]*(2-q[n]*x) .

Suppose we want to compute floor(2^k/B), where B is a positive integer. WLOG assume B ≤ 2^k; otherwise the quotient is 0. The Newton-Raphson iteration for x = B/2^k gives q[n+1] = q[n]*(2 - q[n]*B/2^k). We can rewrite this as

q[n+1]=q[n]*(2^(k+1)-q[n]*B) >> k

Each iteration of this kind requires only integer multiplications and bit shifts. Does it converge to floor(2^k/B)? Not necessarily. However, in the worst case it eventually alternates between floor(2^k/B) and ceiling(2^k/B) (prove it!). So you can use a not-so-clever test to detect when you are in that case and extract floor(2^k/B). (That "not-so-clever test" should be much cheaper than the multiplications in each iteration, but it is still worth optimizing.)

Indeed, computing floor(2^k/B) is enough to compute floor(A/B) for any natural numbers A, B: take k such that A*B ≤ 2^k and check that floor(A/B) = A*ceiling(2^k/B) >> k.
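For concreteness, here is a minimal sketch of this recipe with ordinary 32-bit operands and 128-bit intermediates (so none of the products can overflow); the helper name, the choice of initial guess, and the final fix-up loop are my additions, but a bigint version has exactly the same structure, just with the class's own shifts and multiplies:

    #include <cstdint>
    #include <cstdio>

    typedef unsigned __int128 u128;

    // floor(a / b) via q ~ floor(2^k / b) and the integer iteration
    //   q <- (q * (2^(k+1) - q*b)) >> k
    static uint32_t newton_divide(uint32_t a, uint32_t b) {
        if (b == 0) return 0;                     // divide-by-zero handling omitted
        if (a < b)  return 0;

        int bits_a = 32 - __builtin_clz(a);
        int bits_b = 32 - __builtin_clz(b);
        int k = bits_a + bits_b;                  // guarantees a*b <= 2^k

        u128 pow2 = (u128)1 << (k + 1);
        u128 q = (u128)1 << (k - bits_b);         // initial guess: q*b/2^k is in [1/2, 1),
                                                  // close enough not to stall under truncation
        u128 lastq = 0, lastlastq = 0;
        for (;;) {
            q = (q * (pow2 - q * b)) >> k;
            if (q == lastq || q == lastlastq)     // fixed point, or the floor/ceiling 2-cycle
                break;
            lastlastq = lastq;
            lastq = q;
        }

        // q is now floor(2^k / b) up to a small off-by-one error:
        // multiply, shift, and repair the quotient with the remainder.
        u128 quot = ((u128)a * q) >> k;
        while ((u128)a - quot * b >= b)           // usually runs 0 or 1 times
            quot++;
        return (uint32_t)quot;
    }

    int main() {
        printf("%u\n", newton_divide(1000000000u, 37u));   // 27027027
        printf("%u\n", newton_divide(123456789u, 1000u));  // 123456
    }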

Finally, a simple but important optimization of this approach is to truncate the multiplications (i.e., compute only the higher-order bits of the product) in the early iterations of Newton-Raphson. The reason is that the results of the early iterations are far from the quotient anyway, so it doesn't matter if they are computed inexactly. (Refine this argument and show that, if you do this appropriately, you can divide two ≤n-bit integers in time O(M(2n)), assuming you can multiply two ≤k-bit integers in time M(k) and M(x) is a growing convex function.)


Newton-Raphson is an approximation algorithm and is not well suited to integer arithmetic as-is. You will get rounding errors that cause exactly the problems you're seeing. You could do the work in floating point and then check whether you have a value that is exact to a given number of digits (see the next paragraph).

Regarding the second problem, pick the precision (number of decimal places) you want and round the result to that precision. If you had chosen twenty digits of precision in your example, you would have rounded to 0.25. You just need to iterate until the digits within your required precision stop changing. In general, representing non-terminating numbers on a computer always introduces some inaccuracy.
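As a tiny illustration (plain doubles, with the precision chosen arbitrarily at six decimal places):

    #include <cmath>
    #include <cstdio>

    int main() {
        double d = 4.0, x = 0.3;                     // same starting point as in the question
        for (int i = 0; i < 6; ++i)
            x = x * (2.0 - x * d);                   // Newton-Raphson for 1/d
        double rounded = std::round(x * 1e6) / 1e6;  // round to the chosen precision
        printf("%.6f\n", rounded);                   // prints 0.250000
        return 0;
    }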


If I understand this correctly, the main improvement is a good initial value for x. Knowing how many digits the divisor has tells you where the most significant bit of the reciprocal must be, since

    1/x = pow(2, log2(1/x))
    1/x = pow(2, -log2(x))
    1/x <= pow(2, -floor(log2(x)))

floor(log2(x)) is simply the index of the most significant set bit.
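For example, in the 2^k-scaled integer setting used above, the initial guess could be built directly from that bit index (a minimal sketch with a built-in 64-bit divisor; the helper name is mine):

    #include <cstdint>

    // Initial guess for q ~ 2^k / b:  if m = floor(log2(b)) is the index of the
    // most significant set bit of b, then 2^(k-m-1) < 2^k / b <= 2^(k-m), so
    // starting at 2^(k-m-1) is already within a factor of two of the target
    // (assumes b > 0 and m < k, so the shift is valid and fits in 64 bits).
    uint64_t initial_guess(uint64_t b, int k) {
        int m = 63 - __builtin_clzll(b);   // index of the most significant set bit
        return (uint64_t)1 << (k - m - 1);
    }

Starting there instead of from an arbitrary value typically saves a noticeable number of iterations.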


Source: https://habr.com/ru/post/980593/

