Could cascading-delete performance degradation on large datasets be caused by a lack of indexing?

I am writing code that cascade-deletes records in a particular database, and I noticed that performance drops as the database holds more records. When I simply populate the database, there is no noticeable difference between the start of the population and the very end, but when I run the cascading delete, performance degrades as the database grows. I assume the cascade requires many joins to find all the related records in the other tables, which slows it down on large datasets. But when I just add a record, doesn't the database also have to check existing primary keys and other unique constraints, and shouldn't that also get slower on large datasets? Or is that check so fast compared to the delete that the slowdown is simply hard to notice while populating? Or are cascades slow simply because I didn't explicitly index the tables they cascade to?
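For concreteness, the mapping is along these lines (entity and column names here are made up for illustration; the real ones don't matter for the question):

```java
// Parent.java — deleting a Parent is meant to cascade to its Child rows.
import javax.persistence.*;   // jakarta.persistence on newer stacks
import java.util.List;

@Entity
public class Parent {
    @Id
    @GeneratedValue
    private Long id;

    // Cascade + orphan removal: deleting a Parent deletes its Children too.
    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Child> children;
}

// Child.java — the "many" side holding the foreign key back to Parent.
@Entity
public class Child {
    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "parent_id")
    private Parent parent;
}
```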

So, secondly: would indexing the tables it cascades to speed up the cascade, given that those tables already use the generated identifier as their primary key? And in a more general sense: are primary keys automatically indexed?

1 answer

I assume the cascade requires many joins to find all the related records in the other tables, which slows it down on large datasets.

You can check that assumption instead of guessing: enable SQL logging in Hibernate (for example, set the org.hibernate.SQL log category to DEBUG) and look at the SQL statements Hibernate actually executes during the delete. That will show you exactly what the cascade is doing against the related tables.
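As a sketch, assuming a plain Hibernate Configuration bootstrap (rather than Spring or container-managed JPA), you can also turn on SQL echoing programmatically; the two property names are standard Hibernate settings, and routing the org.hibernate.SQL category to DEBUG in your logging framework gives the same output through your normal logs:

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SessionFactoryHolder {
    static SessionFactory build() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        cfg.setProperty("hibernate.show_sql", "true");    // echo every SQL statement
        cfg.setProperty("hibernate.format_sql", "true");  // pretty-print for readability
        return cfg.buildSessionFactory();
    }
}
```

Run the population and then the cascading delete with this enabled and compare how many statements each operation issues.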

In a more general sense: are primary keys automatically indexed?

Yes, primary keys are automatically indexed: the primary-key constraint is backed by a unique index, so lookups on the generated identifier are already covered.
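The foreign-key column on the child side, however, is not necessarily indexed automatically, and that is the column the cascade has to search on. If you want to declare such an index in the mapping, JPA 2.1's @Index can do it; the table and column names below are hypothetical, and the index is only created when Hibernate generates the schema (otherwise a plain CREATE INDEX on that column does the same job):

```java
// Hypothetical child-side mapping that asks the schema generator to index
// the foreign-key column the cascading delete has to look up.
import javax.persistence.*;

@Entity
@Table(name = "child",
       indexes = @Index(name = "idx_child_parent_id", columnList = "parent_id"))
public class Child {
    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "parent_id")
    private Parent parent;
}
```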


Source: https://habr.com/ru/post/1772487/

