Removing a large number of rows from a table

We have a requirement to delete rows on the order of millions from several tables as a batch job (note that we do not delete all rows; we delete them based on a timestamp stored in an indexed column). Obviously, a regular DELETE takes forever (due to logging, referential constraint checking, etc.). I know that in the LUW world there is ALTER TABLE ... ACTIVATE NOT LOGGED INITIALLY, but I cannot find an equivalent SQL statement for DB2 v8 for z/OS. Does anyone have ideas on how to do this very quickly? Also, any ideas on how to avoid referential checks when deleting rows? Please let me know.
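For reference, when an in-place delete is unavoidable, a common workaround is to delete in bounded batches and commit between batches, so each unit of work holds a limited number of locks and writes a limited amount of log. A rough sketch (my_table and last_modified are placeholder names; the DELETE-from-fullselect form is a known DB2 LUW idiom, and on DB2 v8 for z/OS you may need a WITH HOLD cursor and DELETE WHERE CURRENT OF instead):

```sql
-- Delete at most 10,000 rows per unit of work, then commit.
-- Repeat the pair until the DELETE returns SQLCODE +100 (no rows found).
DELETE FROM (
    SELECT 1
    FROM my_table
    WHERE last_modified < CURRENT TIMESTAMP - 30 DAYS
    FETCH FIRST 10000 ROWS ONLY
);
COMMIT;
```

The batch size trades off log volume per commit against total elapsed time; verify the exact syntax against your DB2 release before relying on it.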

+3
3 answers

We changed the table space so that locking is done at the table space level rather than the page level. Once we made that change, DB2 needed only a single lock for the DELETE, and we had no locking problems. Regarding logging, we simply made sure the client knew how much log space would be required (there seemed to be no way around the logging). As for the constraints, we just dropped them and recreated them after the delete.
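A sketch of the two DDL steps described above (my_db.my_ts, my_table, fk_parent, and the column names are placeholders; the LOCKSIZE clause is DB2 for z/OS syntax, while the constraint statements are shown in LUW style, so check the exact form, e.g. DROP FOREIGN KEY, against your release):

```sql
-- Lock at the table space level so the mass DELETE takes one lock
-- instead of thousands of page locks.
ALTER TABLESPACE my_db.my_ts LOCKSIZE TABLESPACE;

-- Drop the foreign key before the mass delete, then recreate it
-- once the delete has committed.
ALTER TABLE my_table DROP CONSTRAINT fk_parent;
-- ... run the mass delete here ...
ALTER TABLE my_table
    ADD CONSTRAINT fk_parent FOREIGN KEY (parent_id)
    REFERENCES parent_table (id);
```

Note that recreating the constraint may itself trigger a check of the existing rows, so schedule it inside the same maintenance window.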

Thank you all for your help.

0

In the past, I solved this problem by exporting the data I wanted to keep and reloading it with a REPLACE-style command. For instance:

EXPORT TO myfile.ixf OF IXF
SELECT *
FROM my_table
WHERE last_modified < CURRENT TIMESTAMP - 30 DAYS;

Then you can LOAD it back with REPLACE, which discards everything in the table and keeps only the exported rows:

LOAD FROM myfile.ixf OF ixf
REPLACE INTO my_table
NONRECOVERABLE INDEXING MODE INCREMENTAL;


+1

http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r

0

Source: https://habr.com/ru/post/1739861/

