I am trying to divide two numbers, a numerator N by a divisor D. I use the Newton-Raphson method, which uses Newton's method to find the reciprocal of D (1 / D). The division result can then be found by multiplying the numerator N by that reciprocal 1 / D to get N / D.
The Newton-Raphson algorithm can be found here.
So, the first step of the algorithm is the initial guess for 1 / D, which we call X_0.
X_0 is defined as X_0 = 48/17 - 32/17 * D
However, we must first apply a bit-shift to the divisor D to scale it so that 0.5 ≤ D ≤ 1. The same bit-shift must be applied to the numerator N so that the quotient does not change.
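For concreteness, here is a minimal sketch in C of how I understand this scaling step (using frexpf/ldexpf for illustration; this is not my actual implementation):

```c
#include <math.h>

/* Scale D into [0.5, 1) and apply the same power-of-two shift to N,
 * so that N/D == N_scaled/D_scaled. */
void scale_operands(float N, float D, float *N_scaled, float *D_scaled)
{
    int exp;                       /* exponent such that D = d * 2^exp, with 0.5 <= d < 1 */
    *D_scaled = frexpf(D, &exp);   /* D_scaled is now in [0.5, 1) */
    *N_scaled = ldexpf(N, -exp);   /* shift N by the same amount to keep the quotient unchanged */
}
```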
Then we find X_(i+1) using the formula X_(i+1) = X_i * (2 - D * X_i)
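Putting the pieces together, this is a rough sketch of the iteration as I understand it, assuming n and d have already been scaled as above (so d is in [0.5, 1) and d > 0), and ignoring zero, Inf, NaN and sign handling:

```c
/* Newton-Raphson reciprocal iteration on a pre-scaled divisor d in [0.5, 1). */
float nr_divide_scaled(float n, float d)
{
    float x = 48.0f / 17.0f - (32.0f / 17.0f) * d;  /* X_0: linear initial guess for 1/d */

    for (int i = 0; i < 3; ++i)          /* three iterations should reach binary32 precision */
        x = x * (2.0f - d * x);          /* X_(i+1) = X_i * (2 - d * X_i) */

    return n * x;                        /* N/D == n * (1/d) */
}
```

With the scaling helper above, the quotient would then be nr_divide_scaled(N_scaled, D_scaled).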
Since the numerator N, the divisor D, and the result are all in IEEE-754 32-bit floating-point format, I am wondering how to apply this scaling correctly, because my value for 1 / D does not converge; it just runs off to -Inf or +Inf (depending on D).
What I have found does work, though, is that if I make X_0 less than 1 / D, the algorithm always seems to converge. So if I just use a lookup table storing a bunch of precomputed 1 / D values, and I can always guarantee that the stored value I pick up is for a divisor at least as large as the actual D (so the stored estimate never exceeds the true 1 / D), then I should be fine. But is this standard practice?
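To make the lookup-table idea concrete, here is a sketch of what I have in mind (the table size and bit count are arbitrary illustrative choices): index the table by the leading bits of the scaled divisor d in [0.5, 1) and store, for each bucket, the reciprocal of the bucket's upper edge, so the estimate never exceeds 1 / d:

```c
#define LUT_BITS 6
#define LUT_SIZE (1 << LUT_BITS)

static float recip_lut[LUT_SIZE];

static void init_recip_lut(void)
{
    for (int i = 0; i < LUT_SIZE; ++i) {
        /* upper edge of bucket i: d_max = 0.5 + (i + 1) / (2 * LUT_SIZE) */
        float d_max = 0.5f + (float)(i + 1) / (float)(2 * LUT_SIZE);
        recip_lut[i] = 1.0f / d_max;   /* never larger than 1/d for any d in this bucket */
    }
}

static float lut_initial_guess(float d)    /* d must already be scaled into [0.5, 1) */
{
    int i = (int)((d - 0.5f) * 2.0f * LUT_SIZE);   /* bucket index 0 .. LUT_SIZE-1 */
    return recip_lut[i];
}
```

Using lut_initial_guess(d) in place of the 48/17 - 32/17 * d formula would guarantee X_0 < 1 / d by construction, which is the condition I observed to converge.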