I have a very large table with over 250,000 rows, many of which contain a large text block in one of the columns. The database is currently 2.7 GB and is expected to grow at least tenfold. I need to perform Python-specific operations on each row of the table, but I only need access to one row at a time.
My code now looks something like this:
c.execute('SELECT * FROM big_table')
table = c.fetchall()
for row in table:
    do_stuff_with_row(row)
This worked fine when the table was smaller, but the table is now larger than my available RAM and Python freezes when I run the script. Is there a better (more RAM-efficient) way to iterate over the rows of the table?
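One direction I've been looking at, in case it helps frame an answer, is iterating over the cursor itself instead of calling fetchall(), so rows are fetched on demand rather than all at once. A minimal sketch, assuming the standard sqlite3 module (the file name big.db and do_stuff_with_row are placeholders, not my real names):

import sqlite3

# Assumption: a SQLite database file; any DB-API 2.0 driver with a
# streaming cursor should behave similarly.
conn = sqlite3.connect('big.db')
c = conn.cursor()

c.execute('SELECT * FROM big_table')

# Iterating the cursor directly pulls rows on demand instead of
# materializing the entire result set the way fetchall() does.
for row in c:
    do_stuff_with_row(row)  # placeholder for the per-row work

# A batched alternative using fetchmany(), which retrieves a
# fixed-size chunk of rows per call:
# while True:
#     rows = c.fetchmany(1000)
#     if not rows:
#         break
#     for row in rows:
#         do_stuff_with_row(row)

I'm not sure whether iterating the cursor or batching with fetchmany() is considered the idiomatic approach here, or whether either actually keeps memory bounded for a table this size.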