Interpolation in the sense of increasing the sampling rate of a signal
... or "upsampling", as I call it (perhaps the wrong term; disclaimer: I have not read "Lyon"). I just needed to understand what the code was doing and then rewrite it for readability, since it has a couple of problems:
a) it is inefficient: two nested loops, with a multiplication for each individual output element; it also uses an intermediate list (hold) and builds the result with append (a minor point);
b) it interpolates the first interval incorrectly: it generates fake data before the first element. Say factor = 5 and seq = [20, 30]: it produces [0, 4, 8, 12, 16, 20, 22, 24, 28, 30] instead of [20, 22, 24, 26, 28, 30].
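For concreteness, here is a hypothetical sketch of the kind of code being described; the actual original differed in detail, but this shows the same two flaws: an intermediate hold list built with append, and a fake first interval interpolated up from an implicit 0:

```python
def bad_upsample(seq, factor):
    # Hypothetical reconstruction, NOT the actual original code.
    result = []
    prev = 0                 # bug: fake starting point before seq[0]
    for y in seq:
        hold = []            # intermediate list, built with append
        for i in range(factor):
            # one multiplication per output element
            hold.append(prev + (y - prev) * i / factor)
        result += hold
        prev = y
    result.append(prev)      # close with the last input sample
    return result

print(bad_upsample([20, 30], 5))
# → [0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 22.0, 24.0, 26.0, 28.0, 30]
```

Note the spurious ramp 0, 4, 8, 12, 16 before the real data even starts.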
So, here is the algorithm in the form of a generator:
    def upsampler(seq, multiplier):
        if seq:
            step = 1.0 / multiplier
            y0 = seq[0]
            yield y0
            for y in seq[1:]:
                dY = (y - y0) * step
                for i in range(multiplier - 1):
                    y0 += dY
                    yield y0
                y0 = y
                yield y0
Ok, and now for some tests:
    >>> list(upsampler([], 3))
    []
    >>> list(upsampler([20], 3))
    [20]
    >>> list(upsampler([20, 30], 5))
    [20, 22.0, 24.0, 26.0, 28.0, 30]
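As an additional sanity check (not part of the original post), the factor-5 example can be reproduced with NumPy's standard np.interp, assuming NumPy is available:

```python
import numpy as np

# Upsample [20, 30] by a factor of 5: sample the segment at
# (len-1)*factor + 1 = 6 evenly spaced points.
factor = 5
coarse = [20, 30]
x = np.linspace(0, len(coarse) - 1, (len(coarse) - 1) * factor + 1)
fine = np.interp(x, range(len(coarse)), coarse)
print(fine.tolist())  # ≈ [20.0, 22.0, 24.0, 26.0, 28.0, 30.0]
```

Same numbers, endpoints included, and no fake interval before the first sample.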
And here is my translation into C, which fits into the Kratz fn template:
    float* linearInterpolation(float* src, int src_len, int steps, float* dst)
    {
        float step, y0, dY;
        float *src_end, *dst0 = dst;
        if (src_len > 0) {
            step = 1.0f / steps;
            /* the comma expression writes each anchor sample, including the last */
            for (src_end = src + src_len; *dst++ = y0 = *src++, src < src_end; ) {
                dY = (*src - y0) * step;
                /* steps-1 intermediate points; the next anchor is written
                   by the outer loop, so counting to steps would duplicate it */
                for (int i = steps - 1; i > 0; i--) {
                    *dst++ = y0 += dY;
                }
            }
        }
        return dst0;  /* return the destination, for convenient chaining */
    }
Note that the C snippet is "typed, but never compiled or run", so syntax errors, off-by-one errors, etc. may lurk. But the general idea is there.