Pandas: exponentially decreasing sum with variable weights

Similar to this question, Exponential Decay on Python Pandas DataFrame, I would like to quickly calculate exponentially decaying sums for some columns in a data frame. However, the rows in the data frame are not evenly spaced in time. Hence, while exponential_sum[i] = column_to_sum[i] + np.exp(-const*(time[i]-time[i-1])) * exponential_sum[i-1], the weight np.exp(...) does not factor out, and it is not obvious to me how to adapt that question's answer while still taking advantage of pandas/numpy vectorization. Is there a pandas vectorized solution to this problem?

To illustrate the required calculation, here is an example frame with the exponential moving sum of A stored in Sum, using a decay constant of 1:

    time  A       Sum
0   1.00  1  1.000000
1   2.10  3  3.332871
2   2.13 -1  2.234370
3   3.70  7  7.464850
4  10.00  2  2.013708
5  10.20  1  2.648684
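
For reference, the Sum column above can be produced with a plain (non-vectorized) Python loop that applies the recurrence directly; this is only a sketch to make the required calculation concrete, and the function name is made up:

import numpy as np
import pandas as pd

def decayed_sum_loop(time, values, const=1.0):
    # exponential_sum[i] = values[i] + exp(-const*(time[i]-time[i-1])) * exponential_sum[i-1]
    result = []
    total = 0.0
    prev_t = None
    for t, v in zip(time, values):
        if prev_t is not None:
            total *= np.exp(-const * (t - prev_t))
        total += v
        result.append(total)
        prev_t = t
    return pd.Series(result)

df = pd.DataFrame({'time': [1, 2.1, 2.13, 3.7, 10, 10.2],
                   'A': [1, 3, -1, 7, 2, 1]})
df['Sum'] = decayed_sum_loop(df['time'], df['A'])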
2 answers

This question is more complex than it first appears. I ended up using numba's jit to compile a generator function that calculates the exponential sums. My end result calculates the exponential sum of 5 million rows in under a second on my computer, which I hope is fast enough for your needs.

import numpy as np
import pandas as pd

# Initial dataframe.
df = pd.DataFrame({'time': [1, 2.1, 2.13, 3.7, 10, 10.2],
                   'A': [1, 3, -1, 7, 2, 1]})

# Initial decay parameter.
decay_constant = 1

We can define the decay weights as exp(-time_delta * decay_constant) and set the initial weight to one:

df['weight'] = np.exp(-df.time.diff() * decay_constant)
df.weight.iat[0] = 1

>>> df
   A   time    weight
0  1   1.00  1.000000
1  3   2.10  0.332871
2 -1   2.13  0.970446
3  7   3.70  0.208045
4  2  10.00  0.001836
5  1  10.20  0.818731

Now we use jit from numba to optimize a generator function that calculates the exponential sums:

from numba import jit

@jit(nopython=True)
def exponential_sum(A, k):
    total = A[0]
    yield total
    for i in range(1, len(A)):  # xrange in Python 2.
        total = total * k[i] + A[i]
        yield total

We will use the generator to add the values to the dataframe:

df['expSum'] = list(exponential_sum(df.A.values, df.weight.values))

This gives the desired result:

>>> df
   A   time    weight    expSum
0  1   1.00  1.000000  1.000000
1  3   2.10  0.332871  3.332871
2 -1   2.13  0.970446  2.234370
3  7   3.70  0.208045  7.464850
4  2  10.00  0.001836  2.013708
5  1  10.20  0.818731  2.648684

Finally, let's check the timing on a larger dataframe of 5 million rows:

df = pd.DataFrame({'time': np.random.rand(int(5e6)).cumsum(),
                   'A': np.random.randint(1, 10, int(5e6))})
df['weight'] = np.exp(-df.time.diff() * decay_constant)
df.weight.iat[0] = 1

%%timeit -n 10 
df['expSum'] = list(exponential_sum(df.A.values, df.weight.values))
10 loops, best of 3: 726 ms per loop

Here is another approach: precompute all the pairwise weights and use an expanding apply.

-, , :

exponential_sum[i] = column_to_sum[i] + 
    np.exp(-const*(time[i]-time[i-1])) * column_to_sum[i-1] + 
    np.exp(-const*(time[i]-time[i-2])) * column_to_sum[i-2] + ...

The per-step factors telescope, since np.exp(-const*(time[i]-time[i-1])) * np.exp(-const*(time[i-1]-time[i-2])) equals np.exp(-const*(time[i]-time[i-2])), so each row needs a weight for every earlier time. We can precompute these weights in a matrix (here with decay constant 1):

time = pd.Series(np.random.rand(10)).cumsum()
weightspace = np.empty((10, 10))
for i in range(len(time)):
    # Row i holds time[j] - time[i] for every j; exponentiating turns these into decay weights.
    weightspace[i] = time - time[i]
weightspace = np.exp(weightspace)

The weight matrix is computed up front and kept as a global, because the expanding apply passes only the values to the function, not their positions; the function recovers its position from the length of the array it receives.

Then the rolling sum function, using the precomputed weights:

def rollingsum(array):
    # Pick the weight row for the current position (number of values seen so far, minus one).
    weights = weightspace[len(array)-1][:len(array)]
    # Dot product of the values with their decay weights gives the decayed sum.
    a = np.dot(array, weights).sum()
    return a

It works with an expanding apply over a whole dataframe, column by column:

dataset = pd.DataFrame(np.random.rand(10,3), columns=["A", "B","C"])
a = dataset.expanding().apply(rollingsum, raw=True)  # pd.expanding_apply(dataset, rollingsum) in older pandas
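
As a quick sanity check (a sketch added here, not part of the original answer), continuing from the definitions above and rebuilding weightspace from the question's time values reproduces the Sum column from the example:

df = pd.DataFrame({'time': [1, 2.1, 2.13, 3.7, 10, 10.2],
                   'A': [1, 3, -1, 7, 2, 1]})
time = df['time']
weightspace = np.empty((len(df), len(df)))
for i in range(len(time)):
    weightspace[i] = time - time[i]
weightspace = np.exp(weightspace)

# rollingsum picks up the rebuilt global weightspace.
df['Sum'] = df['A'].expanding().apply(rollingsum, raw=True)

The resulting Sum matches the values shown in the question.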
