How to group statistics by words in pandas dataframe

I want to aggregate data by word in a pandas DataFrame.

There are three columns: click counts, impression counts, and the corresponding phrase. I would like to split each phrase into tokens and attribute its clicks and impressions to those tokens, so that I can decide which tokens perform relatively well or badly.

Expected input: a pandas DataFrame like the one below

   click_count  impression_count    text
1   10          100                 pizza
2   20          200                 pizza italian
3   1           1                   italian cheese

Expected Result:

   click_count  impression_count  token
1   30         300                pizza      // 30 = 20 + 10, 300 = 200+100        
2   21         201                italian    // 21 = 20 + 1
3   1           1                 cheese     // cheese only appeared once in italian cheese
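
For reproducibility, the sample input can be built like this (a minimal construction matching the table above, using the row labels 1–3 shown):

import pandas as pd

# Sample data from the question
df = pd.DataFrame(
    {'click_count': [10, 20, 1],
     'impression_count': [100, 200, 1],
     'text': ['pizza', 'pizza italian', 'italian cheese']},
    index=[1, 2, 3])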
3 answers
import pandas as pd

# Split each phrase into one column per token
tokens = df.text.str.split(expand=True)
token_cols = ['token_{}'.format(i) for i in range(tokens.shape[1])]
tokens.columns = token_cols

# Attach the token columns to the counts
df1 = pd.concat([df.drop('text', axis=1), tokens], axis=1)
df1

(df1: the counts with the split tokens as token_0 / token_1 columns)

# Reshape to long format: one row per (counts, token) pair; NaN tokens are dropped
df2 = pd.lreshape(df1, {'tokens': token_cols})
df2

(df2: the counts repeated once per token, in long format)

df2.groupby('tokens').sum()

(the click and impression sums per token)
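
pd.lreshape is a fairly obscure function; an equivalent reshape using the more common melt (my sketch, not part of the original answer) would be:

# Same reshape as pd.lreshape(df1, {'tokens': token_cols})
df2 = (df1.melt(id_vars=['click_count', 'impression_count'],
                value_vars=token_cols, value_name='tokens')
          .drop(columns='variable')
          .dropna(subset=['tokens']))
df2.groupby('tokens').sum()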


This builds a token column like piRSquared's answer, but the tokens are stacked and then merged back with the original DataFrame:

(df['text'].str.split(expand=True).stack().reset_index(level=1, drop=True)
           .to_frame('token').merge(df, left_index=True, right_index=True)
           .groupby('token')[['click_count', 'impression_count']].sum())
Out: 
         click_count  impression_count
token                                 
cheese             1                 1
italian           21               201
pizza             30               300

Breaking it down, this part creates the tokens:

df['text'].str.split(expand=True).stack().reset_index(level=1, drop=True).to_frame('token')
Out: 
     token
1    pizza
2    pizza
2  italian
3  italian
3   cheese

which is then merged with the original DataFrame on their indices, producing:

(df['text'].str.split(expand=True).stack().reset_index(level=1, drop=True)
           .to_frame('token').merge(df, left_index=True, right_index=True))
Out: 
     token  click_count  impression_count            text
1    pizza           10               100           pizza
2    pizza           20               200   pizza italian
2  italian           20               200   pizza italian
3  italian            1                 1  italian cheese
3   cheese            1                 1  italian cheese

The rest is just grouping by the token column and summing.
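
On pandas 0.25 and later, DataFrame.explode gives the same result more directly (a variation of this answer, not from the original post):

(df.assign(token=df['text'].str.split())   # list of tokens per row
   .explode('token')                       # one row per token
   .groupby('token')[['click_count', 'impression_count']]
   .sum())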


In [3091]: s = df.text.str.split(expand=True).stack().reset_index(drop=True, level=-1)

In [3092]: df.loc[s.index].assign(token=s).groupby('token',sort=False,as_index=False).sum()
Out[3092]:
     token  click_count  impression_count
0    pizza           30               300
1  italian           21               201
2   cheese            1                 1

In [3093]: df
Out[3093]:
   click_count  impression_count            text
1           10               100           pizza
2           20               200   pizza italian
3            1                 1  italian cheese

In [3094]: s
Out[3094]:
1      pizza
2      pizza
2    italian
3    italian
3     cheese
dtype: object
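
For completeness, the same pipeline as a plain script (my restatement, not from the transcript above): sort=False keeps the tokens in order of first appearance, as_index=False returns token as a regular column, and the explicit column selection avoids summing the text column on newer pandas versions.

s = df.text.str.split(expand=True).stack().reset_index(drop=True, level=-1)

result = (df.loc[s.index]              # repeat each row once per token
            .assign(token=s)           # attach the matching token
            .groupby('token', sort=False, as_index=False)
            [['click_count', 'impression_count']]
            .sum())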

Source: https://habr.com/ru/post/1653048/

