Batch convert .dbf to .csv in Python

I have ~300 folders with .dbf files that I would like to convert to CSV files.

I use os.walk to find all the .dbf files and then a for loop with the dbfpy module to convert each .dbf file to .csv. It seems that the script correctly finds and reads the .dbf files, but does not convert them to .csv. I believe the csv.writer code is the problem. I am not getting any errors, but the files remain as .dbf.

My code below is based on code I found here.

    import csv
    from dbfpy import dbf
    import os

    path = r"\Documents\House\DBF"

    for dirpath, dirnames, filenames in os.walk(path):
        for filename in filenames:
            if filename.endswith('.DBF'):
                in_db = dbf.Dbf(os.path.join(dirpath, filename))
                csv_fn = filename[:-4] + ".csv"
                out_csv = csv.writer(open(csv_fn, 'wb'))
                names = []
                for field in in_db.header.fields:
                    names.append(field.name)
                out_csv.writerow(names)
                for rec in in_db:
                    out_csv.writerow(rec.fieldData)
                in_db.close()
1 answer

The source .dbf file will remain as it is: you are not replacing it, but creating a new .csv file alongside it. I think the real problem is that the write to disk never happens. The file handle passed to csv.writer is never closed, so I suspect the file buffer is never flushed.
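To illustrate that point (this is a minimal Python 2 sketch, not the questioner's code, and 'example.csv' is just a placeholder name): rows written through csv.writer are not guaranteed to reach disk until the underlying file object is flushed or closed.

    import csv

    # Minimal sketch of the flushing issue: csv.writer does not own the file,
    # so buffered rows only reach disk once the file object is flushed or closed.
    # 'example.csv' is a placeholder name.
    f = open('example.csv', 'wb')      # Python 2: open csv output in binary mode
    writer = csv.writer(f)
    writer.writerow(['NAME', 'VALUE'])
    writer.writerow(['a', 1])
    f.close()                          # closing flushes the buffer; without it the
                                       # file may be left empty or truncated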

Another problem I see is that out_csv is created conditionally, so if the directory contains a file with a different extension you can run into problems.

Try using a context manager:

    for dirpath, dirnames, filenames in os.walk(path):
        for filename in filenames:
            if filename.endswith('.DBF'):
                csv_fn = filename[:-4] + ".csv"
                with open(csv_fn, 'wb') as csvfile:
                    in_db = dbf.Dbf(os.path.join(dirpath, filename))
                    out_csv = csv.writer(csvfile)
                    names = []
                    for field in in_db.header.fields:
                        names.append(field.name)
                    out_csv.writerow(names)
                    for rec in in_db:
                        out_csv.writerow(rec.fieldData)
                    in_db.close()

The with statement (context manager) closes the file and flushes its buffer when the block ends, without requiring an explicit close().
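For reference, the with block behaves roughly like an explicit try/finally. This is only an illustrative sketch, not part of the answer's code, and 'output.csv' is a placeholder name:

    import csv

    # Rough equivalent of the with statement: the finally clause guarantees the
    # file is closed (and its buffer flushed) even if writing raises an error.
    # 'output.csv' is a placeholder name.
    csvfile = open('output.csv', 'wb')
    try:
        out_csv = csv.writer(csvfile)
        out_csv.writerow(['FIELD_A', 'FIELD_B'])
        out_csv.writerow(['value1', 'value2'])
    finally:
        csvfile.close()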

