How can I write a power function myself?

I was always wondering how I could write a function that calculates a power itself (for example, 2^3). Most languages include one in the standard library, usually as pow(double x, double y) , but how can I write it myself?

I thought about for loops , but then my own brain got caught in a loop (when I wanted to handle powers with a non-integer exponent, such as 5^4.5, or a negative one, such as 2^-21), and I went crazy ;)

So, how can I write a function that calculates the power of a real number? Thanks




Oh, perhaps it's important to note: I cannot use functions that themselves rely on powers (like exp ), which would make this ultimately pointless.

+47
c++ math floating-point
May 21 '10 at 14:01
13 answers

Negative powers are not a problem; they are simply the inverse ( 1/x ) of the corresponding positive power.

Floating-point powers are a little more complicated; as you know, a fractional power is equivalent to a root (for example, x^(1/2) == sqrt(x) ), and you also know that multiplying powers with the same base is equivalent to adding their exponents.

With all of the above, you can:

  • Split the exponent into an integer part and a fractional part.
  • Calculate the integer power with a loop (you can optimize it by squaring and reusing partial results).
  • Compute the root with any algorithm you like (any iterative approximation, such as bisection or Newton's method, will work).
  • Multiply the two results together.
  • If the exponent was negative, take the inverse.

Example:

 2^(-3.5) = (2^3 * 2^(1/2))^(-1) = 1 / (2*2*2 * sqrt(2)) 
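The steps above can be sketched as follows (my illustration, not the answerer's code; it only handles exponents that are multiples of 0.5, with the root found by Newton's method):

```python
def sqrt_newton(x, tol=1e-12):
    # any iterative approximation works; Newton's method converges quadratically
    r = x if x >= 1.0 else 1.0
    while abs(r * r - x) > tol:
        r = 0.5 * (r + x / r)
    return r

def pow_half(x, e):
    # x**e for exponents that are a multiple of 0.5, e.g. 2**(-3.5)
    neg = e < 0
    if neg:
        e = -e
    n = int(e)                  # integer part: plain loop
    result = 1.0
    for _ in range(n):
        result *= x
    if e - n:                   # fractional part 0.5 -> square root
        result *= sqrt_newton(x)
    return 1.0 / result if neg else result
```

For the worked example, pow_half(2.0, -3.5) reproduces 1 / (2*2*2 * sqrt(2)).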
+46
May 21 '10 at 14:15

A^B = Log^-1(Log(A) * B)

Edit: yes, this definition does provide something useful. For example, on x86 it translates almost directly into FYL2X (Y * log2(X)) and F2XM1 (2^X - 1):

 fyl2x
 fld st(0)
 frndint
 fsubr st(1),st
 fxch st(1)
 fchs
 f2xm1
 fld1
 faddp st(1),st
 fscale
 fstp st(1)

The code ends up a little longer than you might expect, primarily because F2XM1 only works with numbers in the range -1.0..1.0. The fld st(0)/frndint/fsubr st(1),st part subtracts off the integer part, so we are left with only the fraction. We apply F2XM1 to that, add 1 back, then use FSCALE to handle the integer part of the exponent.

+22
May 21 '10 at 14:08

Typically, the implementation of the pow(double, double) function in math libraries is based on the identity:

 pow(x,y) = pow(a, y * log_a(x)) 

Using this identity, you only need to know how to raise a single number a to an arbitrary exponent, and how to take a logarithm base a . You have effectively turned a complicated function of two variables into two functions of one variable plus a multiplication, which is fairly easy to implement. The most commonly chosen values of a are e and 2: e because e^x and log_e(1+x) have some very nice mathematical properties, and 2 because it has some nice properties for implementation in floating-point arithmetic.

The tricky part of this method is that (if you want full accuracy) you need to compute the log_a(x) term (and its product with y ) to higher precision than the floating-point representation of x and y . For example, if x and y are doubles and you want a high-precision result, you will need some way to store intermediate results (and do arithmetic) in a higher-precision format. The Intel x87 format is a common choice, as are 64-bit integers (though if you really want a top-quality implementation, you will need to do a couple of 96-bit integer computations, which are a little painful in some languages). It is much easier to deal with if you implement powf(float,float) , because then you can simply use double for the intermediate computations. I would recommend starting with that if you want to use this approach.
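As a rough sketch of the powf(float, float) suggestion (my code, not the answerer's; math.log2 is borrowed as a stand-in for a hand-written logarithm, and to_f32 emulates single-precision rounding):

```python
import math
import struct

def to_f32(v):
    # round a double to the nearest IEEE-754 single-precision value
    return struct.unpack('f', struct.pack('f', v))[0]

def powf_via_double(x, y):
    # pow(x, y) = 2**(y * log2(x)); doing the intermediates in double
    # gives the "extra precision" relative to the float arguments.
    t = y * math.log2(to_f32(x))        # stand-in for a hand-written log2
    n = math.floor(t)
    frac = t - n                        # 2**t = 2**n * 2**frac, frac in [0, 1)
    arg = frac * 0.6931471805599453     # frac * ln(2)
    term, s, k = 1.0, 0.0, 0            # exp(arg) by Taylor series
    while abs(term) > 1e-17:
        s += term
        k += 1
        term *= arg / k
    return to_f32(math.ldexp(s, n))
```

The final to_f32 is the single rounding step back to float; everything before it stays in double.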




The algorithm I have outlined is not the only possible way to compute pow . It is merely the one best suited to delivering a high-speed result that satisfies a fixed a priori accuracy bound. It is less suitable in some other contexts and is certainly much trickier to implement than the repeated-square[root]-ing algorithm proposed by some others.

If you want to try the repeated-square[root]-ing algorithm, start by writing a pow function for unsigned integer exponents that uses only repeated squaring. Once you have a good grasp of the algorithm for that simpler case, you will find it fairly straightforward to extend it to handle fractional exponents.

+19
May 21 '10 at 2:22 pm

There are two different cases: integer exponents and fractional exponents.

For integer exponents, you can use exponentiation by squaring.

def pow(base, exponent):
    if exponent == 0:
        return 1
    elif exponent < 0:
        return 1 / pow(base, -exponent)
    elif exponent % 2 == 0:
        half_pow = pow(base, exponent // 2)
        return half_pow * half_pow
    else:
        return base * pow(base, exponent - 1)

The second "elif" is what distinguishes this from the naive pow function. It lets the function make O(log n) recursive calls instead of O(n).

For fractional exponents, you can use the identity a ^ b = C ^ (b * log_C (a)). It is convenient to take C = 2, therefore a ^ b = 2 ^ (b * log2 (a)). This reduces the problem to writing functions for 2 ^ x and log2 (x).

The reason it is convenient to take C = 2 is that floating-point numbers are stored in base 2: log2(a * 2^b) = log2(a) + b. This makes it easy to write your log2 function: it does not need to be accurate over every positive number, only on the interval [1, 2]. Similarly, to calculate 2^x you can compute 2^(integer part of x) * 2^(fractional part of x). The first part is trivial to store in a floating-point number; for the second part you just need a 2^x function over the interval [0, 1).

The hard part is finding a good approximation of 2^x and log2(x). A simple approach is to use a Taylor series .
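A sketch of that decomposition (my code, assuming math.frexp/math.ldexp are fair game since they only pick apart and reassemble the base-2 representation; the series used are the atanh series for the logarithm and the Taylor series for exp):

```python
import math

LN2 = 0.6931471805599453        # ln(2), hard-coded constant

def log2_approx(x):
    # frexp exploits the base-2 storage: x = m * 2**e with 0.5 <= m < 1
    m, e = math.frexp(x)
    z = (m - 1.0) / (m + 1.0)   # ln(m) = 2 * (z + z^3/3 + z^5/5 + ...)
    term, s, k = z, 0.0, 0
    while abs(term) > 1e-18:
        s += term / (2 * k + 1)
        term *= z * z
        k += 1
    return e + 2.0 * s / LN2

def pow2(x):
    n = math.floor(x)           # 2**x = 2**n * 2**frac
    arg = (x - n) * LN2
    term, s, k = 1.0, 0.0, 0    # exp(arg) by Taylor series
    while abs(term) > 1e-17:
        s += term
        k += 1
        term *= arg / k
    return math.ldexp(s, n)

def my_pow(a, b):
    # a**b = 2**(b * log2(a)), for a > 0
    return pow2(b * log2_approx(a))
```

Because frexp reduces the logarithm's argument to [0.5, 1) and floor reduces pow2's series argument to [0, ln 2), both series converge in a handful of terms.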

+9
May 23 '10 at 18:05

By definition:

a^b = exp(b * ln(a))

where exp(x) = 1 + x + x^2/2! + x^3/3! + x^4/4! + x^5/5! + ...

and n! = 1 * 2 * ... * n.

In practice, you can store an array of the first ten values of 1/n! and then approximate

 exp(x) = 1 + x + x^2/2! + x^3/3! + ... + x^10/10!

because 10! is a huge number, so 1/10! is very small (about 2.7557319224 * 10^-7).
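A quick sketch of that truncation (my illustration; it is only accurate for smallish |x|, since the series is cut at the x^10 term):

```python
# precompute 1/n! for n = 0..10 once, as suggested above
INV_FACT = [1.0]
for n in range(1, 11):
    INV_FACT.append(INV_FACT[-1] / n)

def exp_trunc(x):
    # exp(x) ~= 1 + x + x^2/2! + ... + x^10/10!
    s, p = 0.0, 1.0             # p walks through the powers x^n
    for inv in INV_FACT:
        s += p * inv
        p *= x
    return s
```

At x = 1 the truncation error is roughly 1/11!, i.e. a few parts in 10^8.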

+7
May 21 '10 at 2:54 pm

The Wolfram Functions site provides a wide range of formulas for calculating powers. Some of them would be very simple to implement.

+4
May 21 '10 at 14:13

For positive integer exponents, look at exponentiation by squaring and addition-chain exponentiation .
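To give a taste of the addition-chain idea (my toy example, not from the answer): the chain 1, 2, 3, 6, 12, 15 reaches x^15 in five multiplications, one fewer than binary square-and-multiply needs for the same exponent.

```python
def pow15(x):
    # addition chain 1 -> 2 -> 3 -> 6 -> 12 -> 15: five multiplications
    x2 = x * x          # x^2
    x3 = x2 * x         # x^3
    x6 = x3 * x3        # x^6
    x12 = x6 * x6       # x^12
    return x12 * x3     # x^15
```

Finding an optimal chain for an arbitrary exponent is hard in general, which is why the squaring method is the usual default.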

+4
May 21 '10 at 2:22 pm

Using three user-implemented functions, iPow(x, n) , Ln(x) and Exp(x) , I can calculate fPow(x, a) , where x and a are doubles . None of the functions below use library functions, just iteration.

Some explanation about the implemented functions:

(1) iPow(x, n) : x is a double , n is an int . A simple iteration, since n is an integer.

(2) Ln(x) : This function uses a Taylor-series iteration. The series used is Σ (from i = 0 to n) {(1 / (2*i + 1)) * ((x - 1) / (x + 1))^(2*i + 1)} . The symbol ^ denotes the power function, computed by the first function iPow(x, n) , which uses simple iteration.

(3) Exp(x) : This function again uses a Taylor-series iteration. The series used is Σ (from i = 0 to n) {x^i / i!} . Here ^ again denotes a power, but it is not computed by calling iPow(x, n) ; instead it is evaluated inside this third function, together with the factorial, using d *= x / i . I had to use this trick because the iteration takes more steps in this function than in the others, and the factorial (i!) overflows most of the time. By advancing the power and the factorial simultaneously, the iteration never overflows.

(4) fPow(x, a) : x and a are both double . This function does nothing itself; it simply calls the three functions implemented above. The basic idea comes from calculus: fPow(x, a) = Exp(a * Ln(x)) . And at this point I already have iPow , Ln and Exp , all implemented via iteration.

N.B. I used the constant MAX_DELTA_DOUBLE to decide when to stop the iteration. I set it to 1.0E-15 , which seems reasonable for doubles, so the iteration stops when (delta < MAX_DELTA_DOUBLE) . If you need higher precision, you can use long double and decrease the constant, for example to 1.0E-18 (which would be about the minimum).

Here is the code that works for me.

#include <stdio.h>
#include <stdlib.h>

#define MAX_DELTA_DOUBLE 1.0E-15
#define EULERS_NUMBER 2.718281828459045

double MathAbs_Double(double x) {
    return ((x >= 0) ? x : -x);
}

int MathAbs_Int(int x) {
    return ((x >= 0) ? x : -x);
}

double MathPow_Double_Int(double x, int n) {
    double ret;
    if ((x == 1.0) || (n == 1)) {
        ret = x;
    } else if (n < 0) {
        ret = 1.0 / MathPow_Double_Int(x, -n);
    } else {
        ret = 1.0;
        while (n--) {
            ret *= x;
        }
    }
    return (ret);
}

double MathLn_Double(double x) {
    double ret = 0.0, d;
    if (x > 0) {
        int n = 0;
        do {
            int a = 2 * n + 1;
            d = (1.0 / a) * MathPow_Double_Int((x - 1) / (x + 1), a);
            ret += d;
            n++;
        } while (MathAbs_Double(d) > MAX_DELTA_DOUBLE);
    } else {
        printf("\nerror: x < 0 in ln(x)\n");
        exit(-1);
    }
    return (ret * 2);
}

double MathExp_Double(double x) {
    double ret;
    if (x == 1.0) {
        ret = EULERS_NUMBER;
    } else if (x < 0) {
        ret = 1.0 / MathExp_Double(-x);
    } else {
        int n = 2;
        double d;
        ret = 1.0 + x;
        do {
            d = x;
            for (int i = 2; i <= n; i++) {
                d *= x / i;
            }
            ret += d;
            n++;
        } while (d > MAX_DELTA_DOUBLE);
    }
    return (ret);
}

double MathPow_Double_Double(double x, double a) {
    double ret;
    if ((x == 1.0) || (a == 1.0)) {
        ret = x;
    } else if (a < 0) {
        ret = 1.0 / MathPow_Double_Double(x, -a);
    } else {
        ret = MathExp_Double(a * MathLn_Double(x));
    }
    return (ret);
}
+2
Nov 03 '14 at 8:49

This is an interesting exercise. Here are some suggestions you should try in this order:

  • Use a loop.
  • Use recursion (not better, but more interesting nonetheless)
  • Optimize your recursion using divide-and-conquer techniques
  • Use logarithms
+1
May 21 '10 at 2:03 pm

The best algorithm for efficiently computing positive integer powers is based on repeated squaring while tracking an extra multiplicand. Here is a sample Python solution that should be relatively easy to understand and to translate into your preferred language:

def power(base, exponent):
    remaining_multiplicand = 1
    result = base
    while exponent > 1:
        remainder = exponent % 2
        if remainder > 0:
            remaining_multiplicand = remaining_multiplicand * result
        exponent = (exponent - remainder) // 2
        result = result * result
    return result * remaining_multiplicand

To handle negative exponents, all you have to do is compute the positive version and divide 1 by the result, so that should be a simple modification to the code above. Fractional exponents are considerably harder, since they essentially mean computing the nth root of the base, where n = 1/abs(exponent % 1) , and multiplying the result by the power of the integer part:

 power(base, exponent - (exponent % 1)) 

You can compute the roots to any desired level of accuracy using Newton's method. Check out the Wikipedia article on the algorithm .
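A sketch of the Newton step for the nth root (my code, not from the answer; the starting guess is deliberately crude):

```python
def nth_root(x, n, tol=1e-12):
    # Newton's method on f(r) = r**n - x: r <- r - (r**n - x) / (n * r**(n-1))
    r = x if x >= 1.0 else 1.0          # crude but workable starting guess
    while True:
        p = 1.0                         # r**(n-1) by plain multiplication
        for _ in range(n - 1):
            p *= r
        r_next = r - (p * r - x) / (n * p)
        if abs(r_next - r) < tol:
            return r_next
        r = r_next
```

Convergence is quadratic once the iterate gets close, so even the crude guess settles in a handful of steps for moderate x and n.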

+1
May 21 '10 at 2:45 p.m.

You can write a pow function as follows:

static double pows(double p_nombre, double p_puissance)
{
    double nombre = p_nombre;
    double i = 0;

    for (i = 0; i < (p_puissance - 1); i++) {
        nombre = nombre * p_nombre;
    }
    return (nombre);
}

You can write a floor function as follows:

static double floors(double p_nomber)
{
    double x = p_nomber;
    long partent = (long) x;

    if (x < 0) {
        return (partent - 1);
    } else {
        return (partent);
    }
}

Regards

+1
Dec 16 '15 at 7:51

I use fixed-point long arithmetic, and my pow is based on log2 / exp2 . Numbers consist of:

  • int sig = { -1; +1 } , the signum
  • DWORD a[A+B] , the number itself
  • A is the number of DWORDs for the integer part
  • B is the number of DWORDs for the fractional part

My simplified solution is this:

//---------------------------------------------------------------------------
longnum exp2(const longnum &x)
{
    int i, j;
    longnum c, d;
    c.one();
    if (x.iszero()) return c;
    i = x.bits() - 1;
    for (d = 2, j = _longnum_bits_b; j <= i; j++, d *= d)
        if (x.bitget(j)) c *= d;
    for (i = 0, j = _longnum_bits_b - 1; i < _longnum_bits_b; j--, i++)
        if (x.bitget(j)) c *= _longnum_log2[i];
    if (x.sig < 0) { d.one(); c = d / c; }
    return c;
}
//---------------------------------------------------------------------------
longnum log2(const longnum &x)
{
    int i, j;
    longnum c, d, dd, e, xx;
    c.zero(); d.one(); e.zero(); xx = x;
    if (xx.iszero()) return c; //**** error: log2(0) = infinity
    if (xx.sig < 0) return c;  //**** error: log2(negative x) ... no result possible
    if (d.geq(x, d) == 0) { xx = d / xx; xx.sig = -1; }
    i = xx.bits() - 1;
    e.bitset(i);
    i -= _longnum_bits_b;
    for (; i > 0; i--, e >>= 1) // integer part
    {
        dd = d * e;
        j = dd.geq(dd, xx);
        if (j == 1) continue;   // dd >  xx
        c += i;
        d = dd;
        if (j == 2) break;      // dd == xx
    }
    for (i = 0; i < _longnum_bits_b; i++) // fractional part
    {
        dd = d * _longnum_log2[i];
        j = dd.geq(dd, xx);
        if (j == 1) continue;   // dd >  xx
        c.bitset(_longnum_bits_b - i - 1);
        d = dd;
        if (j == 2) break;      // dd == xx
    }
    c.sig = xx.sig;
    c.iszero();
    return c;
}
//---------------------------------------------------------------------------
longnum pow(const longnum &x, const longnum &y)
{
    //x^y = exp2(y*log2(x))
    int ssig = +1;
    longnum c;
    c = x;
    if (y.iszero()) { c.one(); return c; } // ?^0 = 1
    if (c.iszero()) return c;              // 0^? = 0
    if (c.sig < 0)
    {
        c.overflow();
        c.sig = +1;
        if (y.isreal()) { c.zero(); return c; } //**** error: negative x ^ noninteger y
        if (y.bitget(_longnum_bits_b)) ssig = -1;
    }
    c = exp2(log2(c) * y);
    c.sig = ssig;
    c.iszero();
    return c;
}
//---------------------------------------------------------------------------

Where:

_longnum_bits_a = A*32
_longnum_bits_b = B*32
_longnum_log2[i] = 2^(1/(2^i)) ... precomputed sqrt table:
    _longnum_log2[0] = sqrt(2)
    _longnum_log2[1] = sqrt(_longnum_log2[0])
    _longnum_log2[i] = sqrt(_longnum_log2[i-1])
longnum::zero() sets *this = 0
longnum::one() sets *this = +1
bool longnum::iszero() returns (*this == 0)
bool longnum::isnonzero() returns (*this != 0)
bool longnum::isreal() returns true if the fractional part != 0
bool longnum::isinteger() returns true if the fractional part == 0
int longnum::bits() returns the number of used bits, counted from the LSB
longnum::bitget()/bitset()/bitres()/bitxor() are bit access
longnum::overflow() rounds the number if there was an overflow:
    X.FFFFFFFFFF...FFFFFFFFF??h -> (X+1).0000000000000...000000000h
int longnum::geq(x,y) compares |x|,|y| and returns 0,1,2 for (<,>,==)

All you need to understand this code is that a number in binary form is a sum of powers of 2, so when you need to calculate 2^num, it can be rewritten as

  • 2^(b(-n)*2^(-n) + ... + b(+m)*2^(+m))

where n indexes the fractional bits and m the integer bits. Multiplying/dividing by 2 in binary is a simple bit shift, so if you combine all of this you get code for exp2 similar to mine. log2 is based on binary search: setting the result bits from MSB to LSB until the value matches the searched one (a very similar algorithm to fast sqrt computation). I hope this helps clarify things...

0
Aug 11 '13 at 7:26

Other answers describe many approaches. Here is one that I think may be useful in the case of integer powers.

In the case of an integer power x^n, the simple approach takes n - 1 multiplications. To optimize this, we can use dynamic programming and reuse earlier multiplication results instead of performing all n - 1 multiplications. For example, for 5^9 we can work in batches of 3: compute 5^3 once to get 125, and then cube 125 by the same logic, using only 4 multiplications in the process instead of 8 the simple way.

The question is: what is the ideal batch size, so that the number of multiplications is minimal? Let's write an equation for it. If f(x, b) is the number of multiplications involved in computing the x-th power using batches of size b as above, then

> f(x, b) = (x/b - 1) + (b - 1)

Explanation: the product of a batch of p numbers takes p - 1 multiplications. Computing one batch of b factors takes b - 1 multiplications, and combining the resulting x/b batch values takes (x/b) - 1 more.

Now we can take the first derivative of this function with respect to b and set it equal to 0 to find the b that minimizes the number of multiplications.

f'(x, b) = -x/b^2 + 1 = 0

which gives b = sqrt(x).

Substituting this value of b back into f(x, b) gives the least number of multiplications:

f(x, sqrt(x)) = 2*(sqrt(x) - 1)

For all positive x, this value is less than the x - 1 multiplications of the simple approach.
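The cost model is easy to sanity-check numerically (my sketch, using the formula above):

```python
import math

def batch_cost(x, b):
    # the answer's cost model: f(x, b) = (x/b - 1) + (b - 1)
    return (x / b - 1) + (b - 1)

def best_cost(x):
    # minimum of the model at b = sqrt(x), giving 2*(sqrt(x) - 1)
    return 2.0 * (math.sqrt(x) - 1.0)
```

For the 5^9 example: batch_cost(9, 3) gives the 4 multiplications claimed above, and best_cost(9) agrees, since sqrt(9) = 3.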

0
Aug 6 '16 at 8:46


