I am trying to load several million rows of data in a single transaction into a table (the "follow" table, which has two foreign keys referencing the user table, plus indexes on those key columns). My first attempt crashed the script by exhausting system memory.
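For illustration, here is a minimal sketch of the kind of schema and load involved (the column names, the constraint/index names, and the COPY-based load are placeholders, not my exact script):

    -- Sketch of the schema: two FKs into the user table, each with an index.
    -- All names here are illustrative.
    CREATE TABLE follow (
        follower_id bigint NOT NULL REFERENCES "user" (id),
        followee_id bigint NOT NULL REFERENCES "user" (id)
    );
    CREATE INDEX follow_follower_id_idx ON follow (follower_id);
    CREATE INDEX follow_followee_id_idx ON follow (followee_id);

    -- The load runs as one big transaction, e.g.:
    BEGIN;
    COPY follow (follower_id, followee_id) FROM '/tmp/follows.csv' WITH (FORMAT csv);
    COMMIT;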
Some research suggested that the crash was caused by the foreign key constraints, so I checked that the table is empty (i.e., the transaction that got the process killed never committed) and changed my script to drop the constraints and the foreign key indexes before inserting the data, intending to recreate both afterwards.
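Roughly, the changed script does this (again with placeholder names; PostgreSQL's default constraint names follow the table_column_fkey pattern):

    -- Drop the FK constraints and their indexes before the bulk load:
    ALTER TABLE follow DROP CONSTRAINT follow_follower_id_fkey;
    ALTER TABLE follow DROP CONSTRAINT follow_followee_id_fkey;
    DROP INDEX follow_follower_id_idx;
    DROP INDEX follow_followee_id_idx;

    -- ... bulk load here ...

    -- Recreate the constraints and indexes afterwards:
    ALTER TABLE follow ADD CONSTRAINT follow_follower_id_fkey
        FOREIGN KEY (follower_id) REFERENCES "user" (id);
    ALTER TABLE follow ADD CONSTRAINT follow_followee_id_fkey
        FOREIGN KEY (followee_id) REFERENCES "user" (id);
    CREATE INDEX follow_follower_id_idx ON follow (follower_id);
    CREATE INDEX follow_followee_id_idx ON follow (followee_id);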
However, the ALTER TABLE ... DROP CONSTRAINT statement that drops the first foreign key constraint on the table is taking a very long time (tens of minutes), even though the table is completely empty.
The only explanation I can think of is that it has something to do with the large amount of data I wrote to the table and never committed because the script crashed. But of course, since that transaction never committed, I cannot find any trace of the data in the database.
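One way to look for physical traces of the aborted load (rolled-back inserts are invisible to queries but may still occupy disk space as dead tuples until vacuumed) is to check the table's on-disk size and the tuple estimates:

    -- Total on-disk size of the table, including indexes and TOAST:
    SELECT pg_size_pretty(pg_total_relation_size('follow'));

    -- Estimated live vs. dead tuples from the statistics collector:
    SELECT n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'follow';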
What could cause this statement to be slow (or perhaps to never complete; it is still running as I write this question), and how can I avoid it?
There are other transactions open in the database (migrations that move other very large tables and run for several hours), but none of them touch the follow table.
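To confirm that, I can list the sessions with open transactions, oldest first, with a standard pg_stat_activity query such as:

    -- Sessions that currently have a transaction open:
    SELECT pid, xact_start, state, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;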
Edit: pg_locks looks like this:
db=
The pid above (17300) is the ALTER TABLE statement itself. There are no other locks, and no processes are waiting for locks.
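(For reference, a lock listing like the one above can be produced with a join along these lines; this is a generic diagnostic query, not necessarily the exact one I ran:)

    -- Locks touching the follow table, plus any ungranted locks:
    SELECT l.pid, l.locktype, l.mode, l.granted, left(a.query, 60) AS query
    FROM pg_locks l
    JOIN pg_stat_activity a USING (pid)
    WHERE NOT l.granted
       OR l.relation = 'follow'::regclass;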