I managed to figure it out in the end. So, if you have a Pandas DataFrame that you want to write to the database using ceODBC, which is the module I used, here is the code:
With `all_data` as the DataFrame, map the DataFrame values to strings and save each row as a tuple in a list of tuples:
```python
for r in all_data.columns.values:
    all_data[r] = all_data[r].map(str)
    all_data[r] = all_data[r].map(str.strip)

tuples = [tuple(x) for x in all_data.values]
```
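As a side note, the two `map` calls can be folded into a single pass per column; a minimal sketch, with a toy DataFrame standing in for `all_data`:

```python
import pandas as pd

# Toy frame standing in for all_data (hypothetical data).
all_data = pd.DataFrame({"a": [1, None], "b": [" x ", "y"]})

# Stringify and strip in one map call per column.
for col in all_data.columns:
    all_data[col] = all_data[col].map(lambda v: str(v).strip())

print([tuple(x) for x in all_data.values])
# [('1.0', 'x'), ('nan', 'y')]
```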
For the list of tuples, change all the null values, which the conversion above turned into strings, to a real `None` that can be passed to the destination database. This was a problem for me; it may not be for you.
```python
string_list = ['NaT', 'nan', 'NaN', 'None']

def remove_wrong_nulls(x):
    # Replace any string from x found inside tuples with a real None.
    for r in range(len(x)):
        for i, e in enumerate(tuples):
            for j, k in enumerate(e):
                if k == x[r]:
                    temp = list(tuples[i])
                    temp[j] = None
                    tuples[i] = tuple(temp)

remove_wrong_nulls(string_list)
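```

The nested loops rebuild a tuple on every match, which gets slow on large data. A one-pass alternative that does the same cleanup (shown here on a tiny example list):

```python
null_strings = {'NaT', 'nan', 'NaN', 'None'}

tuples = [('1.0', 'nan'), ('NaT', 'x')]  # example input
# Rebuild every tuple once, swapping string nulls for a real None.
tuples = [tuple(None if v in null_strings else v for v in row) for row in tuples]
print(tuples)  # [('1.0', None), (None, 'x')]
```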
Create the database connection:
```python
import ceODBC

cnxn = ceODBC.connect(
    'DRIVER={SOMEODBCDRIVER};DBCName=XXXXXXXXXXX;UID=XXXXXXX;PWD=XXXXXXX;QUIETMODE=YES;',
    autocommit=False
)
cursor = cnxn.cursor()
```
Define a function to split the list of tuples into `new_list`, a list of chunks of 1,000 tuples each. I needed this because I was transferring data to a database whose SQL queries could not exceed 1 MB.
```python
def chunks(l, n):
    n = max(1, n)
    return [l[i:i + n] for i in range(0, len(l), n)]

new_list = chunks(tuples, 1000)
```
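For example, on a small list:

```python
>>> chunks(list(range(5)), 2)
[[0, 1], [2, 3], [4]]
```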
Define your query:
query = """insert into XXXXXXXXXXXX("XXXXXXXXXX", "XXXXXXXXX", "XXXXXXXXXXX") values(?,?,?)"""
Loop over `new_list`, which holds the tuples in groups of 1,000, and call `executemany` for each chunk. Then commit, close the connection, and that's it :)
```python
for i in range(len(new_list)):
    cursor.executemany(query, new_list[i])

cnxn.commit()
cnxn.close()
```
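Since the connection was opened with `autocommit=False`, nothing is written until the final commit. If a chunk can fail midway, you may want to roll back and still close the connection; a minimal sketch using the DB-API 2.0 `rollback()` that ceODBC exposes:

```python
try:
    for chunk in new_list:
        cursor.executemany(query, chunk)
    cnxn.commit()
except Exception:
    cnxn.rollback()  # discard the partial batch
    raise
finally:
    cnxn.close()
```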