Do transaction blocks hurt performance in SQL Server?

My colleague and I are arguing about the effect of unnecessary BEGIN TRAN .... COMMIT TRAN blocks. I have written about 140 stored procedures for simple insert-update-delete operations, and since we may need to add further operations later, I already included the BEGIN TRAN and COMMIT TRAN blocks they would need:

```sql
CREATE PROCEDURE [Users].[Login_Insert]
    @Username nvarchar(50) OUTPUT,
    @Password char(40),
    @FullName nvarchar(150),
    @LoginTypeId int
AS
SET NOCOUNT ON;
BEGIN TRY
    BEGIN TRAN
    INSERT [Users].[Login] ([Username], [Password], [FullName], [LoginTypeId])
    VALUES (@Username, @Password, @FullName, @LoginTypeId)
    COMMIT TRAN
    RETURN 1
END TRY
BEGIN CATCH
    ROLLBACK TRAN
    RETURN -1
END CATCH
GO
```

Many of these transactions will never actually be needed. Do these extraneous blocks noticeably affect performance? Thanks in advance.

+6
2 answers

Not enough to notice.

That is, each transaction will be open for an extra OhNoSecond between the BEGIN TRAN and the INSERT. I'd be impressed if anyone could measure it.

However, if you issued a BEGIN TRAN and then prompted for user input, your legs would need breaking...

Good idea: I do this myself, so all my write procedures are 100% consistent, have the same error handling, can be nested, etc.

Edit: after Remus's answer, I see that I did not link to the template I use for my transactions: "Nested stored procedures containing TRY CATCH ROLLBACK pattern?". It differs from Remus's in that it always rolls back and uses no SAVEPOINTs.
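For reference, a minimal sketch of that nested TRY/CATCH ROLLBACK style, applied to the question's procedure (the procedure and column names are taken from the question; the @starttrancount bookkeeping is illustrative of the linked pattern, not a verbatim copy of it):

```sql
CREATE PROCEDURE [Users].[Login_Insert]
    @Username nvarchar(50) OUTPUT,
    @Password char(40),
    @FullName nvarchar(150),
    @LoginTypeId int
AS
SET XACT_ABORT, NOCOUNT ON;

DECLARE @starttrancount int;

BEGIN TRY
    SET @starttrancount = @@TRANCOUNT;

    -- Only open a transaction if the caller has not already done so;
    -- this is what lets these procedures nest safely.
    IF @starttrancount = 0
        BEGIN TRANSACTION;

    INSERT [Users].[Login] ([Username], [Password], [FullName], [LoginTypeId])
    VALUES (@Username, @Password, @FullName, @LoginTypeId);

    IF @starttrancount = 0
        COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Always roll back (no savepoints), but only if we own the transaction
    -- and one is actually active.
    IF XACT_STATE() <> 0 AND @starttrancount = 0
        ROLLBACK TRANSACTION;
    THROW;  -- re-raise to the caller (SQL Server 2012+; use RAISERROR on older versions)
END CATCH
GO
```

The key design choice is that the innermost procedure that opened the transaction is the only one that commits or rolls back; nested calls just participate and re-raise errors upward.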

Edit: a quick and dirty test shows that about 2/3 of the time, the version with the transaction is actually quicker:

```sql
SET NOCOUNT ON
SET STATISTICS IO OFF
DECLARE @date DATETIME2
DECLARE @noTran INT
DECLARE @withTran INT
SET @noTran = 0
SET @withTran = 0
DECLARE @t TABLE (ColA INT)
INSERT @t VALUES (1)
DECLARE @count INT, @value INT
SET @count = 1

WHILE @count < 100
BEGIN
    SET @date = GETDATE()
    UPDATE smalltable
    SET smalltablename = CASE smalltablename WHEN 'test1' THEN 'test' ELSE 'test2' END
    WHERE smalltableid = 1
    SET @noTran = @noTran + DATEDIFF(MICROSECOND, @date, GETDATE())

    SET @date = GETDATE()
    BEGIN TRAN
    UPDATE smalltable
    SET smalltablename = CASE smalltablename WHEN 'test1' THEN 'test' ELSE 'test2' END
    WHERE smalltableid = 1
    COMMIT TRAN
    SET @withTran = @withTran + DATEDIFF(MICROSECOND, @date, GETDATE())

    SET @count = @count + 1
END

SELECT
    @noTran / 1000000. AS Seconds_NoTransaction,
    @withTran / 1000000. AS Seconds_WithTransaction
```

Results from two runs:

Seconds_NoTransaction    Seconds_WithTransaction
2.63200000               2.70400000
2.16700000               2.12300000

Reversing the order of the two updates shows the same behaviour.

+8

The code you posted will not have any measurable effect, but transactions in general do affect performance: they can improve it significantly due to log-flush commit grouping, or they can degrade it dramatically due to incorrectly managed contention. The bottom line, though, is that when transactions are needed for correctness, you cannot skip them. That being said, your template is actually quite bad with regard to transactions and try-catch blocks. A catch block must check the three possible XACT_STATE() return values (-1, 0, 1) and handle doomed transactions correctly. See Exception handling and nested transactions for an example.
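As a sketch of what that three-state check looks like (the rollback-in-all-error-cases policy here is one reasonable choice, not the only one; a savepoint-based partial rollback is the alternative the linked article discusses):

```sql
BEGIN CATCH
    IF XACT_STATE() = -1
        -- The transaction is doomed: no further work is allowed,
        -- it can only be rolled back.
        ROLLBACK TRANSACTION;
    ELSE IF XACT_STATE() = 1
        -- The transaction is still committable; we choose to roll back
        -- anyway because the operation failed.
        ROLLBACK TRANSACTION;
    -- XACT_STATE() = 0: no transaction is active, nothing to roll back.
    -- An unconditional ROLLBACK here would itself raise an error.

    THROW;  -- propagate the original error to the caller
END CATCH
```

The question's template fails on the 0 case: if the error aborted and rolled back the transaction automatically (e.g. under XACT_ABORT), its unconditional ROLLBACK TRAN raises a new error of its own.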

Also, you should never mix try-catch exception handling with return-code error handling. Choose one and stick to it, preferably exceptions. In other words, your stored procedure should raise the error, not return -1. Mixing exceptions with error codes makes your code a nightmare to maintain and to call correctly.
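Concretely, the question's catch block could re-raise instead of returning -1; a minimal sketch (THROW requires SQL Server 2012+, and the @@TRANCOUNT guard avoids a spurious error when no transaction is active):

```sql
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;
    THROW;  -- re-raise the original error, number and message intact,
            -- instead of collapsing everything into RETURN -1
END CATCH
```

Callers then handle one error channel (exceptions) uniformly, rather than having to check both return codes and raised errors.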

+5

Source: https://habr.com/ru/post/890432/

