Given a DataFrame like this:
In [1]: import numpy as np
   ...: import pandas as pd
   ...: df = pd.DataFrame(np.random.rand(4, 4),
   ...:                   index=['A', 'B', 'C', 'All'],
   ...:                   columns=[2011, 2012, 2013, 'All']).round(2)
   ...: print(df)
      2011  2012  2013   All
A     0.94  0.17  0.06  0.64
B     0.49  0.16  0.43  0.64
C     0.16  0.20  0.22  0.37
All   0.94  0.04  0.72  0.18
I am trying to use pd.style to format the output. One of its keyword arguments is subset, which controls where a formatting rule is applied (for example, where to highlight the maximum). The documentation for pd.style says it is better to use pd.IndexSlice for this:
The value passed in subset behaves similarly to slicing a DataFrame:
- A scalar is treated as a column label.
- A list (or series or numpy array) is treated as multiple column labels.
- A tuple is treated as (row_indexer, column_indexer).
Consider using pd.IndexSlice to build a tuple for the latter.
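As I read it, the three forms map onto something like this (a rough sketch against the frame above; style.bar and the particular labels are just my own example choices):

# scalar -> one column label
df.style.bar(subset=2011, color='#d65f5f')
# list -> several column labels
df.style.bar(subset=[2011, 2012], color='#d65f5f')
# tuple -> (row_indexer, column_indexer), built with pd.IndexSlice
df.style.bar(subset=pd.IndexSlice[['A', 'B'], [2011, 2012]], color='#d65f5f')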
What I am trying to understand is why this works in some cases and fails in others.
Selecting data with an IndexSlice built from positional slices works fine:
In [2]: df.ix[pd.IndexSlice[1:-1, :-1]]
Out[2]:
   2011  2012  2013
B  0.49  0.16  0.43
C  0.16  0.20  0.22
But the same IndexSlice passed to style.bar through subset fails:
In [3]: df.style.bar(subset=pd.IndexSlice[1:-1, :-1], color='#d65f5f')
TypeError: cannot do slice indexing on <class 'pandas.indexes.base.Index'>
with these indexers [1] of <class 'int'>
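My (unverified) guess is that subset is resolved with label-based indexing rather than with .ix, because plain .loc chokes on the same positional slice:

# guess only: the subset seems to be applied like .loc, not like .ix
df.loc[pd.IndexSlice[1:-1, :-1]]
# -> raises the same kind of TypeError about integer indexers as style.bar above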
If I build the slice from the actual labels instead, it works:
In [4]: df.style.bar(subset=pd.IndexSlice[df.index[1:-1], df.columns[:-1]],
                     color='#d65f5f')

So label-based indexers work. What I do not understand is why the positional pd.IndexSlice, which works for ordinary selection, fails inside pd.style (new functionality, added in 0.17.1). Is this a bug, or am I doing something wrong?
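For now I fall back on translating positions into labels by hand, roughly as below (subset_by_position is just my own throwaway helper, not anything from pandas), but this feels like working around pd.IndexSlice rather than using it:

def subset_by_position(frame, rows, cols):
    # turn positional slices into the corresponding row/column labels
    return pd.IndexSlice[frame.index[rows], frame.columns[cols]]

df.style.bar(subset=subset_by_position(df, slice(1, -1), slice(None, -1)),
             color='#d65f5f')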