Efficient implementation of the natural logarithm (ln) and exponentiation (exp)

Basically, I'm looking for an implementation of the log() and exp() functions provided by the C <math.h> library. I work with 8-bit microcontrollers (OKI 411 and 431), and I need to calculate the mean kinetic temperature (MKT). The requirement is that we must be able to compute MKT as quickly as possible and with as little code memory as possible. The compiler does provide log() and exp() in <math.h>, but calling those functions and linking the library increases the code size by 5 KB, which does not fit on one of the micros we work with (OKI 411): our code has already consumed ~12 KB of the roughly 15 KB of available code memory.

The implementation I'm looking for should not use any other C library functions (such as pow(), sqrt(), etc.). This is because all the library functions are packaged into a single library, and even if only one of them is called, the linker pulls the whole 5 KB library into code memory.

EDIT

The algorithm must be accurate to three decimal places.

+4
5 answers

The Taylor series for e ^ x converges very quickly, and you can adjust your implementation to the accuracy you need. ( http://en.wikipedia.org/wiki/Taylor_series )

The Taylor series for log is not so good, though... Is there a particular part of the domain over which you need to implement the function?

+6

Would a table-based approach, with interpolation between values, work? If the range of values is limited (which is likely in your case; I doubt the temperature readings span a huge range) and high accuracy isn't required, it may. It is easy to prototype on a regular machine first.

Here is one of many threads on table-based function approximation: Calculating vs. lookup tables for sine value performance?

+4

If you don't need floating-point math for anything else, you can easily compute an approximate fractional base-2 log. Start by shifting the value left until it reaches 32768 or higher, saving the number of shifts in count. Then repeat the following several times (depending on the desired scale factor):

 n = (mult(n,n) + 32768u) >> 16; // assumes a 16x16->32 multiply is available
 count <<= 1;
 if (n < 32768) n *= 2; else count += 1;

If the above loop is repeated 8 times, the base-2 log of the number will be count/256. If ten times, count/1024. If eleven, count/2048. The function works, in effect, by computing the integer base-2 logarithm of n**(2^reps), with intermediate values scaled to avoid overflow.

+3

Using the Taylor series is neither the simplest nor the fastest way to do this. Most professional implementations use approximating polynomials. I'll show how to generate one in Maple (a computer algebra system) using the Remez algorithm.

For 3 digits of precision, run the following commands in Maple:

 with(numapprox):
 Digits := 8:
 minimax(ln(x), x = 1 .. 2, 4, 1, 'maxerror');
 maxerror;

Maple answers with the following polynomial:

 -1.7417939 + (2.8212026 + (-1.4699568 + (0.44717955 - 0.056570851 * x) * x) * x) * x 

with a maximum error of 0.000061011436.

Extending the approximation to the full range

We have generated a polynomial approximating ln(x), but only on the interval [1..2]. Widening the interval is not a good idea, because that would increase the maximum error. Instead, use the following decomposition:

 ln(x) = ln(2^n * a) = n * ln(2) + ln(a), where 1 <= a < 2

So, first find the highest power of 2 that is still not greater than the number (see: What is the fastest/most efficient way to find the most significant bit (msb) in an integer in C?). The exponent of that power is the integer part of the base-2 logarithm. Divide by this power of two; the result falls into the interval [1, 2]. At the end, add n * ln(2) to get the final result.

An example implementation for numbers >= 1:

 float ln(float y) {
     int log2;
     float divisor, x, result;

     log2 = msb((int)y); // See: https://stackoverflow.com/a/4970859/6630230
     divisor = (float)(1 << log2);
     x = y / divisor;    // normalized value between [1.0, 2.0]

     result = -1.7417939 + (2.8212026 + (-1.4699568 +
              (0.44717955 - 0.056570851 * x) * x) * x) * x;
     result += ((float)log2) * 0.69314718; // ln(2) = 0.69314718

     return result;
 }

If you plan to use it only on the interval [1.0, 2.0], though, the function reduces to:

 float ln(float x) {
     return -1.7417939 + (2.8212026 + (-1.4699568 +
            (0.44717955 - 0.056570851 * x) * x) * x) * x;
 }
+3

Building on Crouching Kitten's answer, which inspired me, you can build a pseudo-recursive logarithm (at most one recursive call) and avoid polynomials altogether. In pseudocode:

 ln(x) :=
   if (x <= 0)              return NaN
   else if (!(1 <= x < 2))  return LN2 * b + ln(a)
   else                     return taylor_expansion(x - 1)

This is quite efficient and accurate, since the Taylor series converges quickly on [1; 2), and for any positive input outside that range the first call to ln reduces it to a number a with 1 <= a < 2.

You can obtain "b" as the unbiased exponent stored in the float x, and "a" from the mantissa of x (a is exactly the same value as x, but with the exponent field set to the bias for 0 rather than for b). LN2 should, IMO, be stored as a hexadecimal floating-point macro. You can also simply use frexp(): http://man7.org/linux/man-pages/man3/frexp.3.html .

Also the trick

 unsigned long tmp = *(unsigned long*)(&d);

for "memory-casting" a double to unsigned long, rather than value-casting it, is very useful to know when working with floats at the bit level, since applying bitwise operators to floats directly causes warnings or errors, depending on the compiler.

+1

Source: https://habr.com/ru/post/1402585/
