You can use boolean indexing with value_counts. The original DataFrame isn't shown here, so assume sample data like the following (values are hypothetical, chosen to reproduce the outputs below):
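import numpy as np
import pandas as pd

# Hypothetical sample data -- the original frame isn't shown, so these
# values are assumptions chosen to match the outputs below.
df = pd.DataFrame({'Type': ['Dog', 'Dog', 'Cow', 'Cat', 'Dog', 'Cat'],
                   'Killed': [np.nan, np.nan, np.nan, np.nan, 1, 2]})
Then filter the rows where Killed is null and count each Type: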
print (df.loc[df.Killed.isnull(), 'Type'].value_counts().reset_index(name='Sum(isnull)'))
  index  Sum(isnull)
0   Dog            2
1   Cow            1
2   Cat            1
Or aggregate with groupby and size, which looks faster:
print (df[df.Killed.isnull()]
         .groupby('Type')['Killed']
         .size()
         .reset_index(name='Sum(isnull)'))
   Type  Sum(isnull)
0   Cat            1
1   Cow            1
2   Dog            2
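Since isnull() returns a boolean Series, an equivalent sketch sums it per group directly (this variant also reports 0 for any Type with no missing values):
print (df['Killed'].isnull()
         .groupby(df['Type'])
         .sum()
         .astype(int)
         .reset_index(name='Sum(isnull)'))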
Timings (first scale the sample frame up 1000x):
df = pd.concat([df]*1000).reset_index(drop=True)
In [30]: %timeit (df.loc[df.Killed.isnull(), 'Type'].value_counts().reset_index(name='Sum(isnull)'))
100 loops, best of 3: 5.36 ms per loop
In [31]: %timeit (df[df.Killed.isnull()].groupby('Type')['Killed'].size().reset_index(name='Sum(isnull)'))
100 loops, best of 3: 2.02 ms per loop