Overview
This question is a more specific version of another question, but I noticed the same behavior for other data types (and, in fact, in my case I do not even use bigint).
There are a few other questions that seem to cover the answer to this one, but I observe the opposite of what they indicate:
Context
I have C# code that inserts data into a table. The code itself is data-driven: other data indicates the target table into which the rows should be inserted. While I could have used dynamic SQL in a stored procedure, I decided to build the dynamic SQL in my C# application instead.
The command text is identical for every row, so I generate it once before inserting the rows. It has the form:
INSERT SomeSchema.TargetTable ( Column1, Column2, Column3, ... ) VALUES ( SomeConstant, @p0, @p1, ... );
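That command text can be generated once per table, for example like this (a sketch only; BuildInsertCommand and its arguments are illustrative names, and the leading constant value is omitted for brevity):

```csharp
using System;
using System.Linq;

static class CommandTextBuilder
{
    // Builds the parameterized INSERT text once; only the parameter
    // values change from row to row. The table and column names come
    // from the data that drives the code.
    public static string BuildInsertCommand(string table, string[] columns)
    {
        // @p0, @p1, ... match the SqlParameter names created per row.
        var parameterNames = Enumerable.Range(0, columns.Length)
                                       .Select(i => "@p" + i);
        return "INSERT " + table +
               " ( " + string.Join(", ", columns) + " )" +
               " VALUES ( " + string.Join(", ", parameterNames) + " );";
    }
}
```

For example, BuildInsertCommand("SomeSchema.TargetTable", new[] { "Column1", "Column2" }) produces INSERT SomeSchema.TargetTable ( Column1, Column2 ) VALUES ( @p0, @p1 );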
For each insert, I create an array of SqlParameter objects.
For the "nvarchar" behavior, I simply use the SqlParameter(string parameterName, object value) constructor and do not set any other properties explicitly.
For the "degenerate" (type-specific) behavior, I use the SqlParameter(string parameterName, SqlDbType dbType) constructor and also set the Size, Precision, and Scale properties as appropriate.
For both versions of the code, the value passed to the constructor or separately assigned to the Value property is of type object.
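Concretely, the two variants create parameters roughly like this (a sketch using System.Data.SqlClient; the parameter names, types, and sizes are illustrative, not my actual schema):

```csharp
using System.Data;
using System.Data.SqlClient;

object textValue = "some text";      // all values are handled as object
object numericValue = (object)19.95m;

// "nvarchar" behavior: only name and value; ADO.NET infers the type.
// For a string value the inferred SqlDbType is NVarChar.
var inferredParameter = new SqlParameter("@p0", textValue);

// "Type-specific" behavior: declare the SqlDbType up front, set
// Size (or Precision/Scale), then assign the value separately.
var typedParameter = new SqlParameter("@p1", SqlDbType.NVarChar)
{
    Size = 50,                        // illustrative size
    Value = textValue
};

var decimalParameter = new SqlParameter("@p2", SqlDbType.Decimal)
{
    Precision = 18,                   // illustrative precision/scale
    Scale = 4,
    Value = numericValue
};
```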
The "nvarchar" version of the code takes about 1 to 1.5 minutes. The "degenerate" or "type-specific" version takes more than 9 minutes, which is 6 to 9 times slower.
SQL Server Profiler reveals no obvious culprits. The type-specific code generates what seems to be the better SQL, that is, a dynamic SQL command whose parameters carry the appropriate data type and size information.
Hypotheses
I suspect that because I pass a value of type object as the parameter value, the ADO.NET SQL Server client code casts, converts, or otherwise checks the value before building and sending the command to SQL Server. I am surprised that the conversion from nvarchar to each of the target table's column types, which SQL Server has to perform, is so much faster than whatever the client code is doing.
Notes
I know that SqlBulkCopy is probably the most efficient option for inserting a large number of rows, but I am more curious why the nvarchar case outperforms the type-specific case; my current code is fast enough for the amount of data it typically handles.
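For reference, the SqlBulkCopy alternative looks roughly like this (a sketch; the connection string, destination table, and batch size are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

static class BulkLoader
{
    // Sketch: copies all rows of a DataTable into the target table in
    // one bulk operation instead of one INSERT per row.
    public static void BulkInsert(string connectionString, DataTable rows)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var bulkCopy = new SqlBulkCopy(connection))
            {
                bulkCopy.DestinationTableName = "SomeSchema.TargetTable"; // placeholder
                bulkCopy.BatchSize = 1000;                                // illustrative
                bulkCopy.WriteToServer(rows);
            }
        }
    }
}
```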