I have several tables with a lot of rows, some of them close to a million. Background tasks access the most recent entries in these tables, and because the tables keep growing, those tasks take longer and longer to run. Displaying the data on the front end also suffers, since the calls to the server take a lot of time.
So I thought it would be better to create a replica of each such table (an "archive" table) and keep old data there, in case it is ever needed again. The idea is that whenever a record is fully processed, it is deleted from the "live" table and inserted into the "archive" table.
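Roughly what I have in mind, as a sketch (plain PDO; the table names `records` / `records_archive`, the `id` column, and the connection details are just placeholders):

```php
<?php
// Move one fully processed row from the live table to the archive table.
// Assumes `records_archive` has exactly the same columns as `records`.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

function archiveRecord(PDO $pdo, int $id): void
{
    $pdo->beginTransaction();
    try {
        // Copy the row into the archive table first...
        $stmt = $pdo->prepare(
            'INSERT INTO records_archive SELECT * FROM records WHERE id = ?'
        );
        $stmt->execute([$id]);

        // ...and delete it from the live table only after the copy succeeded.
        $pdo->prepare('DELETE FROM records WHERE id = ?')->execute([$id]);

        $pdo->commit();
    } catch (Throwable $e) {
        $pdo->rollBack();
        throw $e;
    }
}
```

Doing the copy and the delete in a single transaction means a row is never lost or duplicated if one of the two statements fails.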
PHP's `clone` does not work here, because the clone is an entity of the same class as the original, so it gets saved back into the same table. The only concrete alternative I can think of is to build the archive entity by hand, copying it field by field, but then that code has to be updated every time the entity changes.
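To illustrate both options (assuming Doctrine-style entities; `Record`, `ArchivedRecord`, and the getters/setters are placeholder names, and `$em` is the entity manager):

```php
<?php
// Option 1: clone – does not help. The copy is still an instance of Record,
// so the ORM would persist it right back into the live table.
$copy = clone $record;
$em->persist($copy);

// Option 2: manual field-by-field copy into a separate archive entity.
// This works, but every new field added to Record must be mirrored here.
$archived = new ArchivedRecord();
$archived->setPayload($record->getPayload());
$archived->setProcessedAt($record->getProcessedAt());
$em->persist($archived);
$em->remove($record); // drop the processed row from the live table
$em->flush();
```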
Is there a better way to do this?