Migrating massive amounts of data in SQL

I need to transfer a huge amount of data (millions of files, terabytes of data) to an SQL cluster. The migration is split into weekly runs, with several hundred new users coming on board each week (i.e., several hundred new query generators).

So far everything has been working pretty smoothly, but lately the SQL cluster has started acting rather strange. CPU usage is stable at around 20%, but the SQL process keeps allocating memory until nothing remains (about 12 GB). At that point the process "dumps" all of its memory and starts climbing back toward 12 GB again. During this dump the server often becomes unresponsive and queries eventually time out, which simply cannot be allowed to happen during a migration week.

Is this allocate-and-dump behavior normal for SQL clusters? Can it be configured so that this never happens, or at least so that it doesn't take down the whole database? Does anyone have experience with migration jobs of this size?

While looking through the event logs, I found some WMI warnings preceding the timeouts. We use System Center Operations Manager 2007 to monitor the system; could that explain this behavior?

Thanks for the help!

1 answer

No, this is not normal behavior.

SQL Server dynamically allocates memory as needed and automatically releases it when the system comes under memory pressure, but it should not dump its entire contents at once.

Can you provide more details about your environment and find out what SQL Server memory configuration is in place?
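For illustration, capping the buffer pool is done with sp_configure; the 10240 MB value below is only a placeholder, not a recommendation for this particular cluster, since the right ceiling depends on how much memory the OS and any other processes on the box need:

-- Make the advanced memory options visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer pool (value in MB; 10240 is a placeholder,
-- leave enough headroom for the OS and anything else on the server)
EXEC sp_configure 'max server memory (MB)', 10240;
RECONFIGURE;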

In the meantime, you can get a snapshot of how SQL Server is using its memory with the following DBCC command:

DBCC MEMORYSTATUS
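DBCC MEMORYSTATUS produces a very long report; as a lighter-weight companion, here is a sketch that summarizes the biggest memory consumers (it assumes SQL Server 2005/2008, where the memory clerk DMV exposes single_pages_kb and multi_pages_kb; newer versions expose pages_kb instead):

-- Top memory clerks by total allocated kilobytes
SELECT TOP 10
    type,
    SUM(single_pages_kb + multi_pages_kb) AS total_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY total_kb DESC;

Normally the buffer pool clerk dominates this list; a different clerk growing without bound would be more suspicious.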

Source: https://habr.com/ru/post/1703286/

