Pandas group by multiple columns

I have a DataFrame that I group, and I then want to break the values inside each group out into several columns.

For example, let's say I have the following DataFrame:

    >>> import pandas as pd
    >>> import numpy as np
    >>> df = pd.DataFrame()
    >>> df['Group'] = ['A','C','B','A','C','C']
    >>> df['ID'] = [1,2,3,4,5,6]
    >>> df['Value'] = np.random.randint(1,100,6)
    >>> df
      Group  ID  Value
    0     A   1     66
    1     C   2      2
    2     B   3     98
    3     A   4     90
    4     C   5     85
    5     C   6     38

I want to group by the "Group" field, take the sum of the "Value" field, and get new fields, each holding one of the group's ID values.

Currently I can do this as follows, but I'm looking for a cleaner approach:

First, I create a dataframe with a list of identifiers in each group.

    >>> g = df.groupby('Group')
    >>> result = g.agg({'Value': np.sum, 'ID': lambda x: x.tolist()})
    >>> result
                  ID  Value
    Group
    A         [1, 4]     98
    B            [3]     76
    C      [2, 5, 6]    204

Then I use pd.Series to split the lists into columns, rename those columns, and join the result back:

    >>> id_df = result.ID.apply(lambda x: pd.Series(x))
    >>> id_cols = ['ID' + str(x) for x in range(1, len(id_df.columns) + 1)]
    >>> id_df.columns = id_cols
    >>> result.join(id_df)[id_cols + ['Value']]
           ID1  ID2  ID3  Value
    Group
    A        1    4  NaN     98
    B        3  NaN  NaN     76
    C        2    5    6    204

Is there a way to do this without first creating a list of values?
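For comparison, a sketch of one direct route (using the sample values shown above; the `out` name is my own): number the IDs within each group with `cumcount`, push that numbering into the index, and `unstack` it into columns.

```python
import pandas as pd

df = pd.DataFrame({'Group': ['A', 'C', 'B', 'A', 'C', 'C'],
                   'ID': [1, 2, 3, 4, 5, 6],
                   'Value': [66, 2, 98, 90, 85, 38]})

# Number the IDs 1, 2, ... within each group, then unstack that
# numbering into ID1, ID2, ... columns.
ids = df.set_index(['Group', df.groupby('Group').cumcount() + 1])['ID'].unstack()
ids.columns = ['ID' + str(c) for c in ids.columns]

# Attach the per-group sum of Value (assign aligns on the Group index).
out = ids.assign(Value=df.groupby('Group')['Value'].sum())
print(out)
```

This avoids building the intermediate lists entirely; the ID columns come out as floats because missing slots become NaN.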

3 answers

You could use

 id_df = grouped['ID'].apply(lambda x: pd.Series(x.values)).unstack() 

to create id_df without the intermediate result DataFrame.


    import pandas as pd
    import numpy as np

    np.random.seed(2016)
    df = pd.DataFrame({'Group': ['A', 'C', 'B', 'A', 'C', 'C'],
                       'ID': [1, 2, 3, 4, 5, 6],
                       'Value': np.random.randint(1, 100, 6)})

    grouped = df.groupby('Group')
    values = grouped['Value'].agg('sum')
    id_df = grouped['ID'].apply(lambda x: pd.Series(x.values)).unstack()
    id_df = id_df.rename(columns={i: 'ID{}'.format(i + 1)
                                  for i in range(id_df.shape[1])})
    result = pd.concat([id_df, values], axis=1)
    print(result)

which yields

           ID1  ID2  ID3  Value
    Group
    A        1    4  NaN     77
    B        3  NaN  NaN     84
    C        2    5    6     86

Using get_dummies and MultiLabelBinarizer (scikit-learn):

    import pandas as pd
    import numpy as np
    from sklearn import preprocessing

    df = pd.DataFrame()
    df['Group'] = ['A', 'C', 'B', 'A', 'C', 'C']
    df['ID'] = [1, 2, 3, 4, 5, 6]
    df['Value'] = np.random.randint(1, 100, 6)

    classes = df['ID'].unique()
    mlb = preprocessing.MultiLabelBinarizer(classes=classes).fit([])
    df2 = pd.get_dummies(df, '', '', columns=['ID']).groupby(by='Group').sum()
    df3 = pd.DataFrame(mlb.inverse_transform(df2[classes.astype(str)].values),
                       index=df2.index)
    df3.columns = ['ID' + str(x + 1) for x in range(df3.shape[1])]
    pd.concat([df3, df2['Value']], axis=1)

           ID1  ID2  ID3  Value
    Group
    A        1    4  NaN     63
    B        3  NaN  NaN     59
    C        2    5    6    230
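To see what the get_dummies / MultiLabelBinarizer round trip is doing, here is a stripped-down sketch (the `classes`, `membership`, and `id_tuples` names are my own; `classes` is assumed to be the unique IDs, which the snippet above needs defined before use):

```python
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

df = pd.DataFrame({'Group': ['A', 'C', 'B', 'A', 'C', 'C'],
                   'ID': [1, 2, 3, 4, 5, 6]})

# The unique IDs serve as the fixed label universe for the binarizer;
# fit([]) just locks in those classes without learning from data.
classes = df['ID'].unique()
mlb = MultiLabelBinarizer(classes=classes).fit([])

# One-hot encode ID and sum within each group: every group becomes a
# 0/1 membership vector over all IDs.
membership = pd.get_dummies(df['ID']).groupby(df['Group']).sum()

# inverse_transform maps each membership row back to a tuple of IDs.
id_tuples = mlb.inverse_transform(membership[classes].values)
```

The resulting tuples ((1, 4), (3,), (2, 5, 6) for groups A, B, C) are what the answer then spreads into the ID1..ID3 columns.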

Another way to do this is to first add a helper column to your data and then pivot the DataFrame on that helper column, called ID_Count below:

Using @unutbu's setup:

    import pandas as pd
    import numpy as np

    np.random.seed(2016)
    df = pd.DataFrame({'Group': ['A', 'C', 'B', 'A', 'C', 'C'],
                       'ID': [1, 2, 3, 4, 5, 6],
                       'Value': np.random.randint(1, 100, 6)})

    # Create group
    grp = df.groupby('Group')

    # Create helper column
    df['ID_Count'] = grp['ID'].cumcount() + 1

    # Pivot using the helper column, then add the summed 'Value' column
    # to the pivoted output.
    df_out = (df.pivot(index='Group', columns='ID_Count', values='ID')
                .add_prefix('ID')
                .assign(Value=grp['Value'].sum()))

Output:

    ID_Count  ID1  ID2  ID3  Value
    Group
    A         1.0  4.0  NaN     77
    B         3.0  NaN  NaN     84
    C         2.0  5.0  6.0     86

Source: https://habr.com/ru/post/1241552/

