I use the following function to concatenate a large number of CSV files:
import pandas as pd

def concatenate():
    files = sort()  # helper that returns the list of CSV file paths
    merged = pd.DataFrame()
    for file in files:
        print("concatenating " + file)
        if file.endswith('FulltimeSimpleOpt.csv'):
            # file names look like veh<id>_year<yyyy>_..., so split on "_"
            filenamearray = file.split("_")
            f = pd.read_csv(file, index_col=0)
            f.loc[:, 'Vehicle'] = filenamearray[0].replace("veh", "")
            f.loc[:, 'Year'] = filenamearray[1].replace("year", "")
            if "timelimit" in file:
                f.loc[:, 'Timelimit'] = "1"
            else:
                f.loc[:, 'Timelimit'] = "0"
            merged = pd.concat([merged, f], axis=0)
    merged.to_csv('merged.csv')
The problem is that this function does not cope with a large number of files (30,000). On a sample of 100 files it finishes correctly, but with 30,000 files the script slows down more and more and eventually crashes.
How can I handle a large number of files better in Python pandas?
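I have seen suggestions to collect the individual DataFrames in a list and call pd.concat only once at the end, instead of growing merged inside the loop. A rough sketch of what I mean (same sort() helper and column logic as above, untested at 30,000 files):

    import pandas as pd

    def concatenate():
        files = sort()  # same helper as above, returns the CSV file paths
        frames = []
        for file in files:
            if file.endswith('FulltimeSimpleOpt.csv'):
                filenamearray = file.split("_")
                f = pd.read_csv(file, index_col=0)
                f.loc[:, 'Vehicle'] = filenamearray[0].replace("veh", "")
                f.loc[:, 'Year'] = filenamearray[1].replace("year", "")
                f.loc[:, 'Timelimit'] = "1" if "timelimit" in file else "0"
                frames.append(f)
        # concatenate once at the end instead of once per file
        merged = pd.concat(frames, axis=0)
        merged.to_csv('merged.csv')

Is this the right direction, or is there a better way?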