Let's say I have a user activity log and I want to generate a report of the total duration and the number of unique users per day.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'date': ['2013-04-01', '2013-04-01', '2013-04-01',
             '2013-04-02', '2013-04-02'],
    'user_id': ['0001', '0001', '0002', '0002', '0002'],
    'duration': [30, 15, 20, 15, 30]})
Aggregating the duration is quite simple:
group = df.groupby('date')
agg = group.aggregate({'duration': np.sum})
agg
            duration
date
2013-04-01        65
2013-04-02        45
What I would like to do is sum the duration and count distinct users at the same time, but I cannot find an equivalent of count_distinct:
agg = group.aggregate({'duration': np.sum, 'user_id': count_distinct})
Currently I work around it like this. It works, but surely there is a better way, no?
group = df.groupby('date')
agg = group.aggregate({'duration': np.sum})
agg['uv'] = df.groupby('date').user_id.nunique()
agg
            duration  uv
date
2013-04-01        65   2
2013-04-02        45   1
I think I just need to provide a function that returns the count of distinct elements of a Series object to the aggregate function, but I don't have a lot of exposure to the various libraries at my disposal. Also, it seems that the groupby object already knows this information, so wouldn't I just be duplicating the effort?
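For what it's worth, a sketch of what such a function could look like: `aggregate` accepts any callable that maps a Series to a scalar, so a small wrapper around `Series.nunique` (the name `count_distinct` below is my own, not a pandas API) lets both aggregations run in one call. This assumes a reasonably recent pandas where `Series.nunique` exists.

```python
import pandas as pd

df = pd.DataFrame({
    'date': ['2013-04-01', '2013-04-01', '2013-04-01',
             '2013-04-02', '2013-04-02'],
    'user_id': ['0001', '0001', '0002', '0002', '0002'],
    'duration': [30, 15, 20, 15, 30]})

# Hypothetical helper: number of distinct elements in a Series
def count_distinct(series):
    return series.nunique()

# One groupby, one aggregate call; 'sum' is the built-in string alias
agg = df.groupby('date').aggregate(
    {'duration': 'sum', 'user_id': count_distinct})
print(agg)
```

Passing the bound method `pd.Series.nunique` directly instead of the wrapper should work just as well.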
python pandas
dave 01 Sep '13 at 3:25