What is the danger of a BEGIN TRY DROP TABLE?

In a script used to interactively analyze subsets of data, it is often useful to store query results in temporary tables for further analysis.

Many of my analysis scripts contain this structure:

    CREATE TABLE #Results
    (
        a INT NOT NULL,
        b INT NOT NULL,
        c INT NOT NULL
    );

    INSERT INTO #Results (a, b, c)
    SELECT a, b, c
    FROM ...;

    SELECT * FROM #Results;

In SQL Server, temporary tables are scoped to the connection, so the query results remain available after the initial query has run. When the subset of data I want to analyze is expensive to calculate, I use this method rather than a table variable, because the subset stays available across separate query batches.

The setup part of the script is run once, and the subsequent queries (represented here by SELECT * FROM #Results) are executed as often as needed.
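As a sketch of that workflow (my illustration, assuming a single Management Studio query window with GO as the batch separator, and a hypothetical source table dbo.SomeLargeTable): the expensive setup batch runs once, and the analysis batch below it can be highlighted and re-executed repeatedly in the same session:

    -- Batch 1: run once per session (the expensive part)
    CREATE TABLE #Results (a INT NOT NULL, b INT NOT NULL, c INT NOT NULL);

    INSERT INTO #Results (a, b, c)
    SELECT a, b, c
    FROM dbo.SomeLargeTable;  -- hypothetical source table
    GO

    -- Batch 2: highlight and re-run as often as needed
    SELECT * FROM #Results;
    GO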

Sometimes I want to refresh the subset of data in the temporary table, so I run the whole script again. One way to do this is to start a new connection by copying the script into a new query window in Management Studio, but I find that cumbersome.

Instead, the usual workaround is to precede the CREATE statement with a conditional DROP statement, as follows:

    IF OBJECT_ID(N'tempdb.dbo.#Results', 'U') IS NOT NULL
    BEGIN
        DROP TABLE #Results;
    END;

This statement correctly handles two situations:

  • On the first run, when the table does not exist: do nothing.
  • On subsequent runs, when the table exists: drop the table.

My production scripts always use this method, because it raises no errors in either of the two expected situations.
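(A side note, not from the original question: SQL Server 2016 and later support DROP TABLE IF EXISTS, which expresses the same conditional drop in a single statement:)

    DROP TABLE IF EXISTS #Results;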

Equivalent scripts written by some of my fellow developers handle these two situations with exception handling instead:

    BEGIN TRY
        DROP TABLE #Results
    END TRY
    BEGIN CATCH
    END CATCH

I believe that in the database world it is better to ask permission than to beg forgiveness, so this method makes me uneasy.

The second method swallows the error without taking any action to handle the non-exceptional case (the table does not exist). Furthermore, the error could be raised for a reason other than the table not existing.
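For example (a hypothetical illustration, not from the original question), a simple typo in the DROP statement raises the same "table does not exist" error (3701), which the empty catch block silently swallows; the failure only surfaces later, at the CREATE, far from the actual mistake:

    BEGIN TRY
        DROP TABLE #Resutls  -- typo: raises error 3701, silently swallowed
    END TRY
    BEGIN CATCH
    END CATCH

    -- On the second run this fails with:
    -- "There is already an object named '#Results' in the database."
    CREATE TABLE #Results (a INT NOT NULL, b INT NOT NULL, c INT NOT NULL);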

Wise Owl warns about this:

Of the two methods, the [ OBJECT_ID ] method is harder to understand, but probably better: using the [ BEGIN TRY ] method, you run the risk of catching the wrong error!

But this does not explain the practical risks.

In practice, the BEGIN TRY method has never caused a problem on the systems I support, so I have been content to leave it in place.

What are the dangers of managing temporary table existence with the BEGIN TRY method? What unexpected errors can be hidden by an empty catch block?

+4
4 answers

What possible dangers? What unexpected errors are likely to be concealed?

If the TRY...CATCH block is inside a transaction and the DROP raises an error, the transaction becomes uncommittable and the batch fails.

    BEGIN
        BEGIN TRANSACTION t1;
        SELECT 1;

        BEGIN TRY
            DROP TABLE #Results
        END TRY
        BEGIN CATCH
        END CATCH

        COMMIT TRANSACTION t1;
    END

This batch will end with an error similar to this:

Msg 3930, Level 16, State 1, Line 7
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
Msg 3998, Level 16, State 1, Line 1
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.

Books Online documents this behavior:

Uncommittable Transactions and XACT_STATE()

If an error generated in a TRY block causes the state of the current transaction to be invalidated, the transaction is classified as an uncommittable transaction. An error that ordinarily ends a transaction outside a TRY block causes a transaction to enter an uncommittable state when the error occurs inside a TRY block. An uncommittable transaction can only perform read operations or a ROLLBACK TRANSACTION. The transaction cannot execute any Transact-SQL statements that would generate a write operation or a COMMIT TRANSACTION.
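(A defensive sketch, not part of the original answer: the CATCH block can inspect XACT_STATE() and roll back a doomed transaction instead of letting the batch die at the COMMIT:)

    BEGIN TRANSACTION t1;
    SELECT 1;

    BEGIN TRY
        DROP TABLE #Results
    END TRY
    BEGIN CATCH
        -- XACT_STATE() = -1 means the transaction is uncommittable (doomed)
        IF XACT_STATE() = -1
            ROLLBACK TRANSACTION;
    END CATCH

    -- Commit only if the transaction is still open and committable
    IF XACT_STATE() = 1
        COMMIT TRANSACTION t1;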

Now replace the TRY/CATCH with the test method:

    BEGIN
        BEGIN TRANSACTION t1;
        SELECT 1;

        IF OBJECT_ID(N'tempdb.dbo.#Results', 'U') IS NOT NULL
        BEGIN
            DROP TABLE #Results;
        END;

        COMMIT TRANSACTION t1;
    END

and run it again. The transaction completes without any errors.

+3

A better solution might be to use a table variable rather than a temporary table:

    declare @results table
    (
        a INT NOT NULL,
        b INT NOT NULL,
        c INT NOT NULL
    );
+1

I also think the empty catch block is dangerous because it can hide an unexpected problem. Some programming languages can catch only selected exceptions and let unexpected ones propagate; if your language has this feature, use it (T-SQL's TRY...CATCH cannot be limited to a specific error).

In your scenario, I can say that I have written exactly the same kind of try catch as you describe.

Desired Behavior:

    begin try
        drop table #my_temp_table
    end try
    begin catch __table_dont_exists_error__
    end catch

But this does not exist! So instead you could write, as some people do:

    begin try
        drop table #my_temp_table
    end try
    begin catch
        declare @err_n int, @err_d varchar(MAX);

        select @err_n = ERROR_NUMBER(),
               @err_d = ERROR_MESSAGE();

        -- 3701: cannot drop the table because it does not exist
        -- (or you do not have permission)
        if @err_n <> 3701
            raiserror( @err_d, 16, 1 );
    end catch

This re-raises the error whenever the DROP fails for any reason other than the table not existing.
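(A side note, not in the original answer: on SQL Server 2012 and later, a bare THROW inside the CATCH block re-raises the original error with its original number, severity, and state, which loses less information than rebuilding it with RAISERROR:)

    begin try
        drop table #my_temp_table
    end try
    begin catch
        -- let every error except 3701 (table does not exist) propagate unchanged
        if ERROR_NUMBER() <> 3701
            throw;
    end catch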

Note that for your specific problem all this code is not worth it, though it can be useful for other approaches. For your problem, the elegant solution is to drop the table only if it exists, or to use a table variable.

0

Not in your question, but worth mentioning: the resources used by the temp table. I always drop the table at the end of the script so that it does not tie up resources; what if you put a million rows in it? I also test for the table at the beginning of the script, to handle the case where an error in the previous run left the table behind. If you want to reuse the temp table, at least truncate its rows.
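(A minimal sketch of that pattern, assuming the #Results table from the question: create the table only if it is missing, truncate it if a failed run left it behind, and drop it at the end to release tempdb resources:)

    -- At the top of the script: create if missing, otherwise reset
    IF OBJECT_ID(N'tempdb.dbo.#Results', 'U') IS NULL
        CREATE TABLE #Results (a INT NOT NULL, b INT NOT NULL, c INT NOT NULL);
    ELSE
        TRUNCATE TABLE #Results;

    -- ... populate and analyze ...

    -- At the end of the script: release the tempdb resources
    DROP TABLE #Results;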

Another option is a table variable. It is lighter weight, but it has limitations. Avoid a table variable if you intend to use it in a query join, because the query optimizer keeps no statistics on a table variable and so cannot treat it the way it treats a temporary table.

From the SQL Server documentation:

If multiple temporary tables are created in the same stored procedure or batch, they must have different names.

If a local temporary table is created in a stored procedure or application that can be executed at the same time by several users, the Database Engine must be able to distinguish the tables created by the different users. The Database Engine does this by internally appending a numeric suffix to each local temporary table name. The full name of a temporary table as stored in the sysobjects table in tempdb is made up of the table name specified in the CREATE TABLE statement and the system-generated numeric suffix. To allow for the suffix, the table_name specified for a local temporary name cannot exceed 116 characters.

Temporary tables are automatically dropped when they go out of scope, unless explicitly dropped using DROP TABLE:

A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished. The table can be referenced by any nested stored procedures executed by the stored procedure that created the table. The table cannot be referenced by the process that called the stored procedure that created the table.

All other local temporary tables are dropped automatically at the end of the current session.

Global temporary tables are automatically dropped when the session that created the table ends and all other tasks have stopped referencing them. The association between a task and a table is maintained only for the life of a single Transact-SQL statement. This means that a global temporary table is dropped at the completion of the last Transact-SQL statement that was actively referencing the table when the creating session ended.
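(A short sketch, not from the original answer, illustrating the stored-procedure scoping rule quoted above; procedure name is hypothetical:)

    CREATE PROCEDURE dbo.MakeTemp
    AS
        CREATE TABLE #t (i INT);  -- dropped automatically when the procedure returns
    GO

    EXEC dbo.MakeTemp;
    SELECT * FROM #t;  -- fails: the caller cannot reference the procedure's temp table
    GO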

0

Source: https://habr.com/ru/post/1433815/

