Pandas - Sequential Range Group

I have a dataframe with the following structure: start, end and height.

Some properties of the data frame:

  • A row always starts where the previous row ended, i.e. if the end of row n is 100, then the start of row n + 1 is 101.
  • The height of row n + 1 is always different from the height of row n (this is the reason the data is in separate rows).

I would like to group the dataframe so that the heights fall into buckets of width 5, i.e. the buckets 0, 1-5, 6-10, 11-15 and > 15.

See the code example below; what I'm looking for is an implementation of the group_by_bucket function.

I tried to look at other questions, but could not get an exact answer to what I was looking for.

Thanks in advance!

    >>> d = pd.DataFrame([[1, 3, 8], [4, 10, 7], [11, 17, 6], [18, 26, 12],
    ...                   [27, 30, 15], [31, 40, 6], [41, 42, 7]],
    ...                  columns=['start', 'end', 'height'])
    >>> d
       start  end  height
    0      1    3       8
    1      4   10       7
    2     11   17       6
    3     18   26      12
    4     27   30      15
    5     31   40       6
    6     41   42       7
    >>> d_gb = group_by_bucket(d)
    >>> d_gb
       start  end height_grouped
    0      1   17           6_10
    1     18   30          11_15
    2     31   42           6_10
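For reference, here is one possible shape of group_by_bucket, a sketch assembled from the approaches in the answers below. The bin edges and the label strings ('0', '1_5', '6_10', '11_15', '15_plus') are assumptions based on the buckets described above and the expected output:

    import pandas as pd

    def group_by_bucket(d, bins=(-1, 0, 5, 10, 15, float('inf')),
                        labels=('0', '1_5', '6_10', '11_15', '15_plus')):
        # Sketch only: bucket each height, start a new group whenever the
        # bucket changes, then take the first start / last end per group.
        buckets = pd.cut(d['height'], bins=list(bins), labels=list(labels))
        group_id = (buckets != buckets.shift()).cumsum()
        out = (d.assign(height_grouped=buckets)
                .groupby(group_id)
                .agg({'start': 'first', 'end': 'last', 'height_grouped': 'first'})
                .reset_index(drop=True))
        return out[['start', 'end', 'height_grouped']]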
2 answers

The way to do this is as follows. First, the example data:

    import pandas as pd

    df = pd.DataFrame([[1, 3, 10], [4, 10, 7], [11, 17, 6], [18, 26, 12],
                       [27, 30, 15], [31, 40, 6], [41, 42, 6]],
                      columns=['start', 'end', 'height'])

Use cut to create the groups (the bin edges -1, 0, 5, 10, 15, 1000 reproduce the buckets 0, 1-5, 6-10, 11-15 and >15, because cut builds intervals that are closed on the right):

    df['groups'] = pd.cut(df.height, [-1, 0, 5, 10, 15, 1000])

Find the breakpoints, i.e. number the consecutive runs of equal groups:

    df['categories'] = (df.groups != df.groups.shift()).cumsum()

df now looks like this:

 """ start end height groups categories 0 1 3 10 (5, 10] 0 1 4 10 7 (5, 10] 0 2 11 17 6 (5, 10] 0 3 18 26 12 (10, 15] 1 4 27 30 15 (10, 15] 1 5 31 40 6 (5, 10] 2 6 41 42 6 (5, 10] 2 """ 

Define the aggregation (which value to keep from each column):

 f = {'start':['first'],'end':['last'], 'groups':['first']} 

And use the groupby.agg function:

    df.groupby('categories').agg(f)
    """
                  groups end start
                   first last first
    categories
    0            (5, 10]   17     1
    1           (10, 15]   30    18
    2            (5, 10]   42    31
    """
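If the flat layout from the question (a single height_grouped column) is wanted, one possible follow-up, not part of the original answer, is to flatten the aggregated frame; a sketch assuming the df and f defined above:

    # Sketch: flatten the MultiIndex columns produced by agg(f) and rename
    # 'groups' to 'height_grouped' to mirror the question's expected output.
    # Note the bucket labels stay as intervals like (5, 10] unless explicit
    # labels are passed to pd.cut.
    out = df.groupby('categories').agg(f)
    out.columns = out.columns.droplevel(1)      # drop the 'first' / 'last' level
    out = (out.rename(columns={'groups': 'height_grouped'})
              [['start', 'end', 'height_grouped']]
              .reset_index(drop=True))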

You can use cut, then groupby on the cumsum of the shifted cut Series to generate the groups, and aggregate with agg using first and last:

    bins = [-1, 0, 1, 5, 10, 15, 100]
    print bins
    [-1, 0, 1, 5, 10, 15, 100]

    cut_ser = pd.cut(d['height'], bins=bins)
    print cut_ser
    0     (5, 10]
    1     (5, 10]
    2     (5, 10]
    3    (10, 15]
    4    (10, 15]
    5     (5, 10]
    6     (5, 10]
    Name: height, dtype: category
    Categories (6, object): [(-1, 0] < (0, 1] < (1, 5] < (5, 10] < (10, 15] < (15, 100]]

    print (cut_ser.shift() != cut_ser).cumsum()
    0    0
    1    0
    2    0
    3    1
    4    1
    5    2
    6    2
    Name: height, dtype: int32

    print (d.groupby([(cut_ser.shift() != cut_ser).cumsum(), cut_ser])
            .agg({'start': 'first', 'end': 'last'})
            .reset_index(level=1).reset_index(drop=True)
            .rename(columns={'height': 'height_grouped'}))

      height_grouped  start  end
    0        (5, 10]      1   17
    1       (10, 15]     18   30
    2        (5, 10]     31   42
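One caveat, not from the original answer: depending on the pandas version, grouping by a Categorical key such as cut_ser may include unobserved categories by default (the observed default has changed across versions). Passing observed=True explicitly keeps only the buckets that actually occur; a sketch of the same pipeline with that flag:

    # Sketch: same pipeline as above, with observed=True so that empty
    # height buckets do not show up as extra groups on newer pandas.
    grouped = (d.groupby([(cut_ser.shift() != cut_ser).cumsum(), cut_ser], observed=True)
                .agg({'start': 'first', 'end': 'last'})
                .reset_index(level=1).reset_index(drop=True)
                .rename(columns={'height': 'height_grouped'}))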

EDIT:

Timings

    In [307]: %timeit a(df)
    100 loops, best of 3: 5.45 ms per loop

    In [308]: %timeit b(d)
    The slowest run took 4.45 times longer than the fastest.
    This could mean that an intermediate result is being cached.
    100 loops, best of 3: 3.28 ms per loop

Code

    d = pd.DataFrame([[1, 3, 5], [4, 10, 7], [11, 17, 6], [18, 26, 12],
                      [27, 30, 15], [31, 40, 6], [41, 42, 7]],
                     columns=['start', 'end', 'height'])
    print d
    df = d.copy()

    def a(df):
        df['groups'] = pd.cut(df.height, [-1, 0, 5, 10, 15, 1000])
        df['categories'] = (df.groups != df.groups.shift()).cumsum()
        f = {'start': ['first'], 'end': ['last'], 'groups': ['first']}
        return df.groupby('categories').agg(f)

    def b(d):
        bins = [-1, 0, 1, 5, 10, 15, 100]
        cut_ser = pd.cut(d['height'], bins=bins)
        return (d.groupby([(cut_ser.shift() != cut_ser).cumsum(), cut_ser])
                 .agg({'start': 'first', 'end': 'last'})
                 .reset_index(level=1).reset_index(drop=True)
                 .rename(columns={'height': 'height_grouped'}))

    print a(df)
    print b(d)
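A small design note, editorial rather than from the answer: a() mutates the frame passed in (it adds the groups and categories columns), which is why the timing code calls it on a copy of d. A side-effect-free variant might look like this:

    # Sketch: same logic as a(), but works on an internal copy so the
    # caller's DataFrame is left untouched.
    def a_pure(df):
        df = df.copy()
        df['groups'] = pd.cut(df.height, [-1, 0, 5, 10, 15, 1000])
        df['categories'] = (df.groups != df.groups.shift()).cumsum()
        f = {'start': ['first'], 'end': ['last'], 'groups': ['first']}
        return df.groupby('categories').agg(f)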
