How to capture data transferred to SqlBulkCopy using Sql Profiler?

I use SQL Profiler all the time to capture SQL queries and replay the problematic ones. Very useful.

However, some of our code uses the SqlBulkCopy API, and I don't know how to catch its inserts. I see temporary tables being created, but nothing that fills them. It looks like SqlBulkCopy bypasses SQL Profiler, or I am not capturing the correct events.

Any help is appreciated.

2 answers

Capturing event information for bulk insert operations ( BCP.EXE , SqlBulkCopy , and I assume also BULK INSERT and OPENROWSET(BULK ...) ) is possible, but you will not be able to see the individual rows and columns.

Bulk load operations show up as a single DML statement (well, one per batch, and by default all rows go in a single batch):

 INSERT BULK <destination_table_name>
     ( <column1_name> <column1_datatype> [ COLLATE <column1_collation> ], ... )
     [ WITH ( <one or more hints> ) ]

 <hints> := KEEP_NULLS, TABLOCK, ORDER(...), ROWS_PER_BATCH = , etc.

You can find the complete list of "hints" on the MSDN page for the BCP Utility. Please note that SqlBulkCopy only supports a subset of those hints (e.g. KEEP_NULLS , TABLOCK , and a few others), and does not support ORDER(...) or ROWS_PER_BATCH= ** (which is quite unfortunate, since the ORDER() hint is needed to avoid the sort that happens in tempdb, so that the operation can be minimally logged (provided the other conditions for such an operation are also satisfied)).
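For illustration, here is roughly what a statement captured from a SqlBulkCopy load might look like in the TextData column — the dbo.Customers table and its columns are made-up names, not from the question:

```sql
INSERT BULK dbo.Customers
    ([CustomerID] INT,
     [Name] NVARCHAR(100) COLLATE SQL_Latin1_General_CP1_CI_AS,
     [CreatedOn] DATETIME)
    WITH (KEEP_NULLS, TABLOCK)
```

Note that this statement cannot be submitted by a user directly; it only appears on the wire for bulk load operations.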

To see this statement, you need to capture any of the following events in SQL Server Profiler:

SQL:BatchStarting
SQL:BatchCompleted
SQL:StmtStarting
SQL:StmtCompleted

You will also want to select at least the following columns (in SQL Server Profiler):

TextData
CPU
Reads
Writes
Duration
SPID
StartTime
EndTime
RowCounts

And, since a user cannot submit an INSERT BULK statement directly, you can safely filter on it in Column Filters if you want to see only these events and nothing else.
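If you script a server-side trace instead of using the Profiler UI, the same filter can be expressed with sp_trace_setfilter — a minimal sketch, assuming @TraceID was returned by an earlier sp_trace_create call:

```sql
-- Column 1 = TextData, logical operator 0 = AND, comparison operator 6 = LIKE
EXEC sp_trace_setfilter @TraceID, 1, 0, 6, N'INSERT BULK%';
```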

If you want to see the official notification of the start and/or end of the bulk operation, you need to capture the following event:

SQLTransaction

and then add the following Profiler columns:

EventSubClass
ObjectName

For ObjectName you will get events showing "BULK INSERT"; whether it is the start or the end is determined by the value in the EventSubClass column, which is either "0 - Begin" or "1 - Commit" (and I suppose that if it fails, you would see "2 - Rollback").

If the ORDER() hint is not specified (and, again, it cannot be specified when using SqlBulkCopy ), you will also get an "SQLTransaction" event showing "sort_init" in the ObjectName column. This event likewise has "0 - Begin" and "1 - Commit" subclasses (as shown in the EventSubClass column).

Finally, even though you cannot see the specific rows, you can see the operations against the transaction log (e.g. insert row, modify IAM row, modify PFS row, etc.) if you capture the following event:

TransactionLog

and add the following Profiler column:

ObjectID

The main information of interest will be in the EventSubClass column, but unfortunately those are just ID values, and I could not find a translation of them in the MSDN documentation. However, I did find the following blog post by Jonathan Kehayias: Using Extended Events in SQL Server Denali CTP1 to map out the TransactionLog SQL Trace EventSubClass values .

@RBarryYoung pointed out that the EventSubClass values and names can be found in the sys.trace_subclass_values catalog view, but querying that view shows that it has no rows for the TransactionLog event:

 SELECT * FROM sys.trace_categories;
 -- 12 = Transactions

 SELECT * FROM sys.trace_events WHERE category_id = 12;
 -- 54 = TransactionLog

 SELECT * FROM sys.trace_subclass_values WHERE trace_event_id = 54;
 -- nothing :(

** Note that the SqlBulkCopy.BatchSize property is equivalent to the -b option of BCP.EXE , which is an operational setting that controls how each command breaks the rows into sets. This is not the same as the ROWS_PER_BATCH= hint, which does not physically control how rows are grouped but instead allows SQL Server to better plan how it will allocate pages, and hence reduces the number of entries in the transaction log (sometimes by quite a lot). However, my testing showed that:

  • Specifying -b for BCP.EXE did set the ROWS_PER_BATCH= hint to the same value.
  • Specifying the SqlBulkCopy.BatchSize property did not set the ROWS_PER_BATCH= hint, BUT the benefit of reduced transaction log activity was somehow definitely there (magic?). The fact that the net effect is the same benefit is why I didn't mention it at the top, when I said it was a shame that the ORDER() hint is not supported by SqlBulkCopy .
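One way to check the transaction log claim yourself is to count the log records generated by a load, using the undocumented fn_dblog function — a rough sketch, where dbo.Customers is a placeholder table name and fn_dblog's output format is not guaranteed across versions:

```sql
-- Run in the target database right after the bulk load (before the log is truncated):
SELECT COUNT(*) AS [LogRecords]
FROM sys.fn_dblog(NULL, NULL)
WHERE [AllocUnitName] LIKE N'dbo.Customers%';
```

Fewer log records for the same number of inserted rows generally indicates the load was logged more minimally.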

You cannot capture SqlBulkCopy in SQL Profiler because SqlBulkCopy does not generate SQL statements when inserting data into a SQL Server table. SqlBulkCopy works similarly to the bcp utility and streams the data directly into the destination table. It can even bypass FK constraints and triggers when inserting rows!


Source: https://habr.com/ru/post/1242232/

