I have a pretty big pandas dataframe (a 1.5 GB CSV on disk). I can load it into memory and query it. I want to create a new column that combines the values of two other columns, and I tried this:
```python
def combined(row):
    # uses col1 as a separator inserted between the characters of str(col2)
    row['combined'] = row['col1'].join(str(row['col2']))
    return row

df = df.apply(combined, axis=1)
```
This leads to my Python process getting killed, presumably due to memory issues.
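For what it's worth, if plain concatenation is really what I want (rather than the str.join semantics above), my understanding is that it can be expressed without any row-wise function, which avoids the per-row Python overhead. A minimal sketch on a made-up toy frame (col1 and col2 are my real column names; the values are invented):

```python
import pandas as pd

# made-up toy frame standing in for the real 1.5 GB one
df = pd.DataFrame({'col1': ['a', 'b'], 'col2': [1, 2]})

# vectorized concatenation: no Python-level function call per row
df['combined'] = df['col1'].astype(str) + df['col2'].astype(str)
```

I am not sure whether this alone would keep memory under control on the full frame, though.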
A more iterative solution to the problem is as follows:
```python
df['combined'] = ''
col_pos = list(df.columns).index('combined')
crs_pos = list(df.columns).index('col1')
sub_pos = list(df.columns).index('col2')

# walk every row and write the combined value by position
for row_pos in range(len(df)):
    df.iloc[row_pos, col_pos] = df.iloc[row_pos, crs_pos].join(str(df.iloc[row_pos, sub_pos]))
```
This, of course, seems very unpleasant, and it is very slow.
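One direction I have been experimenting with is streaming the CSV in chunks with pandas itself, roughly as sketched below. 'data.csv' and the chunk size are placeholders, and I have not verified that this actually helps:

```python
import pandas as pd

pieces = []
# stream the file in fixed-size chunks instead of holding all 1.5 GB at once
for chunk in pd.read_csv('data.csv', chunksize=100_000):
    chunk['combined'] = chunk['col1'].astype(str) + chunk['col2'].astype(str)
    pieces.append(chunk)

# note: concatenating the pieces still materializes the full result;
# appending each chunk to an output CSV instead would keep memory flat
df = pd.concat(pieces, ignore_index=True)
```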
I have come across apply_chunk(), and I have also looked at dask and dask dataframes, but I have not worked out how to use them for this. Is there a faster or more memory-efficient way to do this in pandas?
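For concreteness, what I imagine the dask version would look like is sketched below (untested; 'data.csv' stands in for my real file), though I do not know whether this is the right way to use it:

```python
import dask.dataframe as dd

# read the CSV lazily, in partitions, instead of loading it whole
ddf = dd.read_csv('data.csv')

# the same vectorized concatenation, evaluated partition by partition
ddf['combined'] = ddf['col1'].astype(str) + ddf['col2'].astype(str)

df = ddf.compute()              # materializes one pandas frame in memory
# ddf.to_csv('combined-*.csv')  # alternative that writes partitions to disk
```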