How to efficiently archive old parts of a large (multi-GB) SQL Server database?

I am currently working on a solution for archiving old data from a large working database into a separate archive database with the same schema. I move the data using SQL scripts and SQL Server Management Objects (SMO) called from a .NET executable written in C#.

The archived data should still be accessible and even (occasionally) updatable; we just don't want it weighing down the working database and getting in the way of keeping it fast.

Moving large chunks of data around and managing the relationships between tables has proven to be quite a challenge.
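To give an idea of the approach, here is a minimal sketch of the kind of batched move we run today. The database, table and column names (WorkingDb, ArchiveDb, dbo.Orders, OrderDate) are placeholders, not our real schema, which has many related tables:

```sql
USE WorkingDb;  -- hypothetical working database

-- Empty copy of the table to stage one batch at a time.
-- Child tables referencing dbo.Orders would have to be archived first.
SELECT * INTO #Moved FROM dbo.Orders WHERE 1 = 0;

DECLARE @BatchSize int = 10000,
        @CutOff    datetime2 = DATEADD(MONTH, -12, SYSDATETIME()),
        @Rows      int = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Remove one batch from the working table, capturing the deleted rows.
    DELETE TOP (@BatchSize) FROM dbo.Orders
    OUTPUT deleted.* INTO #Moved
    WHERE OrderDate < @CutOff;

    SET @Rows = @@ROWCOUNT;

    -- Copy the captured batch into the archive database (same schema).
    -- Tables with IDENTITY columns need extra handling (SET IDENTITY_INSERT).
    INSERT INTO ArchiveDb.dbo.Orders SELECT * FROM #Moved;

    COMMIT TRANSACTION;

    TRUNCATE TABLE #Moved;
END;

DROP TABLE #Moved;
```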

I wonder if there is a better way to archive data with SQL Server.

Any ideas?

+4
3 answers

If you still want/need the data to be available, I think your largest or most frequently used tables could be partitioned.
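As a rough sketch of what that looks like, assuming a hypothetical dbo.Orders table partitioned by year on an OrderDate column, with filegroups FG_Archive and FG_Current already created:

```sql
-- Yearly ranges; RANGE RIGHT means each boundary date starts a new partition.
CREATE PARTITION FUNCTION pfOrderDate (datetime2)
AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01');

-- One filegroup per partition: old years on FG_Archive, current on FG_Current.
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate
TO (FG_Archive, FG_Archive, FG_Current);

-- New or rebuilt tables and indexes are then placed on the scheme.
CREATE TABLE dbo.Orders
(
    OrderId   int            NOT NULL,
    OrderDate datetime2      NOT NULL,
    Amount    decimal(18, 2) NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderId, OrderDate)
) ON psOrderDate (OrderDate);
```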

+1

Yes, use table and index partitioning with filegroups.

You don't even need to change your SELECT statements, unless you want to squeeze out the very last bit of performance.
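As a hedged sketch of the archiving step, assuming a dbo.Orders table partitioned on OrderDate across filegroups as in the previous answer's example (all names are hypothetical): an old partition can be switched out almost instantly and then moved to the archive database at leisure.

```sql
-- Staging table with the same structure and clustered key, on the same
-- filegroup as the partition being switched out.
CREATE TABLE dbo.Orders_Stage
(
    OrderId   int            NOT NULL,
    OrderDate datetime2      NOT NULL,
    Amount    decimal(18, 2) NOT NULL,
    CONSTRAINT PK_Orders_Stage PRIMARY KEY (OrderId, OrderDate)
) ON FG_Archive;

-- Metadata-only operation: no data is physically copied.
ALTER TABLE dbo.Orders SWITCH PARTITION 1 TO dbo.Orders_Stage;

-- The staged rows can now be copied into the archive database and dropped
-- from the working database without touching the hot partitions.
INSERT INTO ArchiveDb.dbo.Orders SELECT * FROM dbo.Orders_Stage;
DROP TABLE dbo.Orders_Stage;
```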

Another option would be to balance the workload across two servers with two-way replication between them.

+1

We are in a similar situation. For regulatory reasons we cannot delete data for a certain period of time, but many of our tables grow very large and unwieldy, and realistically most of the data older than one month could be removed with little impact on day-to-day operations.

Currently we prune the tables programmatically with a custom .NET/shell application, using BCP to export the data to files that can be archived and kept on the network. This is not particularly elegant, but it is cost-effective. (It is complicated by the fact that we need to retain data for specific historical dates, rather than simply truncating to a certain size or on key fields within certain ranges.)
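Roughly, the pattern looks like this; the table, column, server and file names below are placeholders for illustration, not our actual setup:

```sql
-- Export the rows to be purged, typically run from the shell, e.g.:
--   bcp "SELECT * FROM WorkingDb.dbo.AuditLog WHERE LoggedAt < '20240101'"
--       queryout \\fileserver\archive\AuditLog_old.dat -n -T -S MyServer
-- (server, path and cut-off date are placeholders).

-- After the export has been verified, purge the same rows in small batches
-- so transaction log growth and blocking stay manageable.
DECLARE @CutOff datetime2 = DATEADD(MONTH, -1, SYSDATETIME()),
        @Rows   int = 1;

WHILE @Rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.AuditLog
    WHERE LoggedAt < @CutOff;

    SET @Rows = @@ROWCOUNT;
END;
```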

We are exploring alternatives, but in my opinion it is surprising how little established best practice there is around this topic!

0

Source: https://habr.com/ru/post/1276465/

