Using np.where is faster. Using a pattern similar to the one you used with replace:

df['col1'] = np.where(df['col1'] == 0, df['col2'], df['col1'])
df['col1'] = np.where(df['col1'] == 0, df['col3'], df['col1'])
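To make the effect concrete, here is a minimal, self-contained sketch on a made-up DataFrame (the column values are purely illustrative, not the data from the question):

import numpy as np
import pandas as pd

# Hypothetical example data: col1 contains zeros that should be filled in.
df = pd.DataFrame({'col1': [1, 0, 0, 2],
                   'col2': [9, 0, 7, 9],
                   'col3': [5, 6, 5, 5]})

# Fall back to col2 wherever col1 is 0, then to col3 wherever it is still 0.
df['col1'] = np.where(df['col1'] == 0, df['col2'], df['col1'])
df['col1'] = np.where(df['col1'] == 0, df['col3'], df['col1'])
print(df['col1'].tolist())  # [1, 6, 7, 2]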
However, using nested np.where is slightly faster:
df['col1'] = np.where(df['col1'] == 0, np.where(df['col2'] == 0, df['col3'], df['col2']), df['col1'])
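If the nesting becomes hard to read, the same priority logic (keep col1 if non-zero, else col2 if non-zero, else col3) can also be written with np.select. This is just an equivalent sketch for readability; it was not part of the benchmark below:

# Pick the first non-zero value per row, falling back to 0 when all three are 0
# (which matches what the nested np.where produces in that case).
df['col1'] = np.select(
    [df['col1'] != 0, df['col2'] != 0, df['col3'] != 0],
    [df['col1'], df['col2'], df['col3']],
    default=0
)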
Timings

Using the following setup to create a larger sample DataFrame and the functions to time:

df = pd.concat([df]*10**4, ignore_index=True)

def root_nested(df):
    df['col1'] = np.where(df['col1'] == 0, np.where(df['col2'] == 0, df['col3'], df['col2']), df['col1'])
    return df

def root_split(df):
    df['col1'] = np.where(df['col1'] == 0, df['col2'], df['col1'])
    df['col1'] = np.where(df['col1'] == 0, df['col3'], df['col1'])
    return df

def pir2(df):
    df['col1'] = df.where(df.ne(0), np.nan).bfill(axis=1).col1.fillna(0)
    return df

def pir2_2(df):
    slc = (df.values != 0).argmax(axis=1)
    return df.values[np.arange(slc.shape[0]), slc]

def andrew(df):
    df.col1[df.col1 == 0] = df.col2
    df.col1[df.col1 == 0] = df.col3
    return df

def pablo(df):
    df['col1'] = df['col1'].replace(0, df['col2'])
    df['col1'] = df['col1'].replace(0, df['col3'])
    return df
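The %timeit lines below are IPython magics. To reproduce the comparison in a plain Python script, something along these lines should work; the 6-row base frame here is only a made-up stand-in for the question's data, and pablo is left out because, as noted below, it runs for minutes on the enlarged frame:

import timeit
import numpy as np
import pandas as pd

# Hypothetical 6-row base DataFrame standing in for the question's example.
base = pd.DataFrame({'col1': [1, 0, 0, 2, 0, 1],
                     'col2': [9, 0, 7, 9, 0, 1],
                     'col3': [5, 6, 5, 5, 0, 1]})
df = pd.concat([base] * 10**4, ignore_index=True)

for fn in (root_nested, root_split, pir2, pir2_2, andrew):
    # Best of 3 repeats of 100 calls each, reported per call in milliseconds.
    best = min(timeit.repeat(lambda: fn(df.copy()), number=100, repeat=3)) / 100
    print(f'{fn.__name__}: {best * 1000:.2f} ms per loop')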
I get the following timings:
%timeit root_nested(df.copy())
100 loops, best of 3: 2.25 ms per loop

%timeit root_split(df.copy())
100 loops, best of 3: 2.62 ms per loop

%timeit pir2(df.copy())
100 loops, best of 3: 6.25 ms per loop

%timeit pir2_2(df.copy())
1 loop, best of 3: 2.4 ms per loop

%timeit andrew(df.copy())
100 loops, best of 3: 8.55 ms per loop
I tried timing your replace-based method (pablo above) as well, but it ran for several minutes without completing. For comparison, timing it on just the 6-row example DataFrame (not the much larger one used above) took 12.8 ms.