Pandas - Explanation of the Slow apply Function

The apply function works very slowly with a large DataFrame (about 1-3 million rows).

I checked related questions here, like "Speed up Pandas apply function" and "Calculating within Pandas apply()", and it seems the best way to speed things up is not to use apply at all :)

In my case, I have two kinds of tasks related to the apply function.

First: apply with a lookup-dict query

def f(p_id, p_dict):
    return p_dict[p_dict['ID'] == p_id]['value']

p_dict = DataFrame(...)  # another DataFrame that serves as a lookup table
df = df.apply(f, args=(p_dict,))

Second: apply with groupby

from functools import partial

def f(week_id, min_week_num, p_dict):
    # Mean of the third column over the weeks in (week_id - min_week_num, week_id)
    return p_dict[(week_id - min_week_num < p_dict['WEEK']) & (p_dict['WEEK'] < week_id)].iloc[:, 2].mean()

f_partial = partial(f, min_week_num=min_week_num, p_dict=p_dict)
df = map(f_partial, df['WEEK'])

I assume the first case can be done with a DataFrame join, but I'm not sure about the resource cost of such a join on a large dataset.

My question is:

  • Is there a way to replace apply in the two cases above?
  • Why is apply so slow? For the dict lookup case it should be O(N); it shouldn't cost that much even when N is 1 million.

Concerning your first question, I can't say exactly why this particular instance is slow. But in general, apply does not take advantage of vectorization. Also, apply returns a new Series or DataFrame object, so with a very large DataFrame there is considerable IO overhead (I can't guarantee this is the case 100% of the time, since Pandas has loads of internal implementation optimizations).
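To see the gap in a toy case, here is a minimal sketch (my own illustration, not part of the original answer) contrasting a per-element apply, which makes one Python-level call per row, with the equivalent single vectorized operation:

import numpy as np
import pandas as pd

df_demo = pd.DataFrame({'x': np.random.randint(0, 100, 1000000)})

# Per-element apply: one Python function call per row
%timeit df_demo['x'].apply(lambda v: v * 2)

# Vectorized: one NumPy operation on the underlying array
%timeit df_demo['x'] * 2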

, "" df, p_dict . 1000 pd.merge:

import string

import numpy as np
import pandas as pd

##
# Part 1 - filling a column by a lookup table
##
def f1(col, p_dict):
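    # Linear scan of p_dict for each element - one O(len(p_dict)) lookup per row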
    return [p_dict[p_dict['ID'] == s]['value'].values[0] for s in col]

# Testing
n_size = 1000
np.random.seed(997)
p_dict = pd.DataFrame({'ID': [s for s in string.ascii_uppercase], 'value': np.random.randint(0,n_size, 26)})
df = pd.DataFrame({'p_id': [string.ascii_uppercase[i] for i in np.random.randint(0,26, n_size)]})

# Apply the f1 method as posted
%timeit -n1 -r5 temp = df.apply(f1, args=(p_dict,))
>>> 1 loops, best of 5: 832 ms per loop

# Using merge
np.random.seed(997)
df = pd.DataFrame({'p_id': [string.ascii_uppercase[i] for i in np.random.randint(0,26, n_size)]})
%timeit -n1 -r5 temp = pd.merge(df, p_dict, how='inner', left_on='p_id', right_on='ID', copy=False)

>>> 1000 loops, best of 5: 826 µs per loop
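Note that an inner merge does not guarantee df's original row order. If you need to keep the order and just fill one column, a Series.map lookup is another fast option; a sketch under the same setup as above:

# Alternative: index the lookup table by ID once, then map each p_id to its value
lookup = p_dict.set_index('ID')['value']
df['value'] = df['p_id'].map(lookup)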

Concerning your second task, you can add a column of means to p_dict instead of computing the mean on every call. This makes sense as long as the window always starts at min_week_num and p_dict is sorted by WEEK. Then you can use pd.merge again.

I assume min_week_num is 0 in the example below, but you could easily modify rolling_growing_mean to use a different starting point. rolling_growing_mean runs in O(n) because it performs a fixed number of operations per step.

n_size = 1000
np.random.seed(997)
p_dict = pd.DataFrame({'WEEK': range(52), 'value': np.random.randint(0, 1000, 52)})
df = pd.DataFrame({'WEEK': np.random.randint(0, 52, n_size)})

def rolling_growing_mean(values):
    out = np.empty(len(values))
    out[0] = values[0]
    # Time window for taking mean grows each step
    for i, v in enumerate(values[1:]):
        out[i+1] = np.true_divide(out[i]*(i+1) + v, i+2)
    return out

p_dict['Means'] = rolling_growing_mean(p_dict['value'])

df_merged = pd.merge(df, p_dict, how='inner', left_on='WEEK', right_on='WEEK')
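For reference, the same growing mean can also be written with pandas' built-in expanding window, assuming p_dict is sorted by WEEK (a sketch, not part of the original answer):

# Equivalent cumulative mean via an expanding window
p_dict['Means'] = p_dict['value'].expanding().mean().values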

Source: https://habr.com/ru/post/1649777/
