I am writing a stored procedure to insert rows into a table. The problem is that in some operations we might want to insert more than 1 million rows, and we want it to be fast. Another thing is that one of the columns is NVARCHAR(MAX), and on average we put about 1000 characters in this column.
First, I wrote a procedure (prc) that inserts row by row. I generated random test data (with the NVARCHAR(MAX) column as a 1000-character string) and called the prc in a loop to insert the rows. The performance is very bad: it takes 48 minutes if I run it directly on the database server, and more than 90 minutes if I use C# to connect to the server from my desktop (which is what we usually want to do).
Then I changed the prc to take a table type as an input parameter. I prepare the rows, put them in the table-type parameter, and do the insert with the following command:
INSERT INTO tableA SELECT * from @tableTypeParameterB
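For reference, a simplified sketch of the table type and procedure I'm describing (the type name, column names, and types here are placeholders, not my real schema; the real table has the two clustered-index key columns plus the NVARCHAR(MAX) column):

```sql
-- Hypothetical sketch of the table-type approach.
CREATE TYPE dbo.tableTypeB AS TABLE
(
    KeyCol1 INT           NOT NULL,
    KeyCol2 INT           NOT NULL,
    Payload NVARCHAR(MAX) NULL      -- avg ~1000 characters per row
);
GO

CREATE PROCEDURE dbo.InsertBatch
    @tableTypeParameterB dbo.tableTypeB READONLY
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO tableA
    SELECT * FROM @tableTypeParameterB;
END
```

One batch of 1000-3000 rows is passed per call, so inserting 1 million rows means several hundred round trips to this procedure.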
I tried batch sizes of 1000 and 3000 rows (putting 1000-3000 rows in @tableTypeParameterB per insert). Performance is still poor: it takes about 3 minutes to insert 1 million rows when I run it on the SQL Server itself, and about 10 minutes when I connect from a C# program on my desktop.
tableA has a clustered index on two columns.
My goal is to make the insert as fast as possible (ideally within 1 minute). Is there any way to optimize it?
Just an update:
I tried the bulk copy option suggested in some of the answers below. I used SqlBulkCopy to insert 1000 rows and then 10000 rows at a time. Performance is still 10 minutes for 1 million rows (each row has a column with 1000 characters). There is no improvement. Are there any other suggestions?
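In case it helps, the SqlBulkCopy call was roughly like the sketch below (the connection string, DataTable preparation, and option values are illustrative, not my exact code; TableLock and BatchSize are the knobs I experimented with):

```csharp
// Hedged sketch: assumes a DataTable "rows" whose columns match tableA,
// and a valid connection string. Requires a reachable SQL Server instance.
using System.Data;
using Microsoft.Data.SqlClient;

static void BulkLoad(DataTable rows, string connectionString)
{
    // TableLock takes a bulk-update lock on the target table, which can
    // enable minimal logging when the recovery model allows it.
    using var bulk = new SqlBulkCopy(connectionString,
                                     SqlBulkCopyOptions.TableLock)
    {
        DestinationTableName = "dbo.tableA",
        BatchSize = 10000,   // rows sent per server round trip
        BulkCopyTimeout = 0  // disable the timeout for large loads
    };
    bulk.WriteToServer(rows);
}
```

I tried it both with and without batching (BatchSize of 1000 and 10000), with the timings described above.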
Update based on the comments:
The data actually comes from a user interface. The user will use the UI to bulk-select, say, a million rows and change one column from an old value to a new value. That update is performed by a separate procedure, but first a mid-tier service needs to get the old and new values from the UI and insert them into this table. The old and new values can be up to 4000 characters, with an average of 1000. I think the long old/new value strings are what slows things down: when I change the test old/new values to 20-50 characters, the insert is very fast, and it doesn't matter whether I use SqlBulkCopy or the table-type variable.