I am running a bunch of queries using Python and psycopg2. I create one large temporary table with approximately 2 million rows, then I fetch 1000 rows at a time from it with cur.fetchmany(1000) and run more extensive queries involving those rows. The extensive queries are self-contained; once they are done, I no longer need their results when I move on to the next 1000.
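In rough outline, the batching part looks like the sketch below. The connection string, table names, and column names are placeholders, and process_batch is a hypothetical helper standing in for the extensive per-batch queries (sketched further down, after the NOTE):

    import psycopg2

    # Placeholder connection string for illustration only.
    conn = psycopg2.connect("dbname=mydb user=myuser")
    cur = conn.cursor()    # cursor that walks the big temporary table
    work = conn.cursor()   # separate cursor for the per-batch queries

    # One large temporary table, roughly 2 million rows.
    cur.execute("""
        CREATE TEMPORARY TABLE big_tmp AS
        SELECT id, payload
        FROM source_table
    """)

    # Walk through it 1000 rows at a time.
    cur.execute("SELECT id, payload FROM big_tmp")
    while True:
        rows = cur.fetchmany(1000)
        if not rows:
            break
        # Run the more extensive, self-contained queries for this batch;
        # their results are not needed once the next batch starts.
        process_batch(work, rows)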
However, somewhere around the 1,000,000th row, psycopg2 raised an exception:
psycopg2.OperationalError: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
Oddly enough, this happened when I was executing a query to drop some of the temporary tables that the more extensive queries had created.
Why could this happen? Is there any way to avoid it? It was annoying that this happened halfway through, which means I have to start all over again. And what does max_locks_per_transaction have to do with anything?
NOTE: I do not issue any .commit()s, but I do drop all the temporary tables I create, and I only touch the same 5 tables for each "extensive" transaction anyway, so I don't see how running out of table locks could be the problem...
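For reference, here is roughly what the per-batch work and cleanup look like (again with placeholder names); everything runs on the same connection and nothing is ever committed, so it all stays inside one transaction:

    def process_batch(work, rows):
        # Per-batch temporary table, created and dropped inside the same
        # (never-committed) transaction as everything else.
        work.execute("CREATE TEMPORARY TABLE batch_tmp (id integer, payload text)")
        work.executemany("INSERT INTO batch_tmp VALUES (%s, %s)", rows)

        # ... the extensive queries joining batch_tmp against the same 5 tables ...

        work.execute("DROP TABLE batch_tmp")
        # No conn.commit() here, so the transaction keeps running.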