Setting a primary key in a very large database

I have a very large table (about 13 million rows) and I want to set a primary key on it. The problem is that, given the size of the database, my machine crashes while trying to create the primary key on the table.

I believe SQL Server is trying to apply the key across every row it finds, so memory consumption reaches the limit of my machine (6 GB of RAM). Is there an effective alternative way to set the primary key without causing these problems?

+6
3 answers

Here are a few options that might work:

  • Create a new table with the same columns and the primary key already defined, and select the rows into it from the existing table (see the sketch below this list).
  • Script out the change and disable the execution timeout in SSMS.
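
For the first bullet, a minimal sketch, assuming a single integer key column; every table and column name here is a placeholder:

-- Placeholder names: dbo.bigTable (existing), dbo.bigTable_new, id
CREATE TABLE dbo.bigTable_new
(
    id INT NOT NULL,
    -- ...the remaining columns, copied from dbo.bigTable...
    CONSTRAINT PK_bigTable_new PRIMARY KEY CLUSTERED (id)
);

INSERT INTO dbo.bigTable_new (id /*, remaining columns */)
SELECT id /*, remaining columns */
FROM dbo.bigTable;

-- Afterwards, drop dbo.bigTable and rename the new table with sp_rename.

Because the primary key exists before the rows arrive, the clustered index is built incrementally instead of sorting all 13 million rows in one operation.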

For the second bullet, here's a change script to point you in the right direction:

ALTER TABLE tableName WITH NOCHECK
ADD CONSTRAINT PK_tableName PRIMARY KEY CLUSTERED (columnName)
WITH (FILLFACTOR = 75, ONLINE = ON, PAD_INDEX = ON);
+6

Another option is to create a new table with the proper primary key. Script out the basic structure of the table as well as any relationships. Once you have a current backup, set the database to single-user mode (so that no one can modify records during the transfer).
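
A sketch of the single-user switch (the database name is a placeholder):

ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- drop other connections
-- ...perform the transfer...
ALTER DATABASE YourDb SET MULTI_USER;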

Then insert the records from the old table in batches, say 10,000 at a time. This will take longer than a single set-based insert, but a timeout will be much less likely. You may need to experiment to find the optimal batch size. Once you're done, drop the old table, rename the new one, and set up any PK/FK relationships. Then take the database out of single-user mode.
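
A minimal batching sketch, reusing the placeholder names from above and assuming the key column id is an increasing positive integer:

DECLARE @lastId INT = 0;   -- start below the smallest id value
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.bigTable_new (id /*, remaining columns */)
    SELECT TOP (10000) id /*, remaining columns */
    FROM dbo.bigTable
    WHERE id > @lastId
    ORDER BY id;

    SET @rows = @@ROWCOUNT;

    -- advance the watermark to the highest id copied so far
    SELECT @lastId = MAX(id) FROM dbo.bigTable_new;
END;

Keeping each batch small keeps the transaction log and memory pressure per statement small, which is the point of this approach.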

This is a task that should be performed only outside production hours, as a database maintenance job; no one else should have access to the database while it is running.

Please note that the natural key of the table may not be unique. In that case you will have a problem using it as the PK, and you may need to move some records to an exception table to fix it. Even if you use a surrogate key as the new PK, I highly recommend a unique index on the natural key of the table if at all possible.
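
For that last point, a sketch of such a unique index (index and column names are placeholders):

CREATE UNIQUE NONCLUSTERED INDEX UQ_bigTable_naturalKey
ON dbo.bigTable_new (naturalKeyColumn1, naturalKeyColumn2);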

0

One cause I have seen for this is that the amount of memory SQL Server is allowed to allocate is much more than what your machine can actually give it, so it ends up paging to disk. Try lowering SQL Server's memory allocation to a smaller footprint and see if that is the cause. SQL Server should be able to handle this many records, but if it tries to load them all into memory at once to complete the task, that could cause a problem.
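
Lowering the cap is done with sp_configure; a sketch that limits SQL Server to 4 GB (4096 MB is only an example value, leave enough for the OS):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;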

0

Source: https://habr.com/ru/post/901558/
