Zero SQL deadlock by design - any coding patterns?

I encounter very rare but annoying SQL deadlocks on a .NET 2.0 web app running on top of MS SQL Server 2005. Previously we dealt with SQL deadlocks in a purely empirical way - basically, we tweaked the queries until it worked.

However, I find this approach very unsatisfactory: it is time-consuming and unreliable. I would much prefer to follow deterministic query patterns that guarantee by design that no SQL deadlock will ever occur.

For example, in C# multi-threaded programming, a simple design rule such as always acquiring locks in their lexicographical order ensures that no deadlock will ever happen.

Are there any SQL coding patterns that are guaranteed to be deadlock-proof?

+30
sql design-patterns sql-server deadlock
Sep 21 '08 at 18:51
10 answers

Writing deadlock-proof code is really hard. Even when you access tables in the same order, you can still get deadlocks [1]. I wrote a post on my blog that elaborates several approaches that will help you avoid and resolve deadlocks.

If you want two statements/transactions to never deadlock, you can achieve this by observing which locks each statement consumes using the sp_lock stored procedure. To do this, you have to either be very fast or use an open transaction with a HOLDLOCK hint.

Notes:

  • Any SELECT statement that requires more than one lock at once can deadlock against a transaction that deliberately acquires its locks in the reverse order.
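A minimal sketch of the sp_lock inspection described above, with made-up table and column names (dbo.Orders, CustomerId): hold the locks with HOLDLOCK inside an open transaction so there is time to look at them.

    BEGIN TRANSACTION;

    -- HOLDLOCK keeps the shared locks until the transaction ends,
    -- which leaves time to inspect them.
    SELECT OrderId
    FROM dbo.Orders WITH (HOLDLOCK)
    WHERE CustomerId = 42;

    -- List the locks currently held by this session.
    EXEC sp_lock @@SPID;

    ROLLBACK TRANSACTION;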
+20
Sep 21 '08 at 21:41

Zero deadlocks is basically an incredibly expensive problem in the general case, because you must know all the tables/objects you are going to read and modify for every transaction you execute (including SELECTs). The general philosophy is called ordered strict two-phase locking (not to be confused with two-phase commit) ( http://en.wikipedia.org/wiki/Two_phase_locking ; even 2PL does not guarantee zero deadlocks).

Very few DBMSs actually implement strict ordered 2PL because of the enormous performance hit such a scheme causes (there are no free lunches) while all your transactions wait around for even simple SELECT statements to finish.

Anyway, if this really interests you, take a look at SET TRANSACTION ISOLATION LEVEL in SQL Server. You can tweak it as needed. http://en.wikipedia.org/wiki/Isolation_level
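For illustration, the setting is per session/connection, and each option trades consistency for concurrency differently:

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;    -- SQL Server's default
    -- SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    -- SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;   -- closest to strict 2PL, blocks the most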

For more information see wikipedia on Serializability: http://en.wikipedia.org/wiki/Serializability

That said, the big analogy is to source code check-ins: check in early and often. Keep transactions small (in number of SQL statements and number of rows modified) and fast (wall-clock time matters most, since it drives collisions with others). It may be nice and tidy to do a whole lot of things in a single transaction - and in general I agree with that philosophy - but if you are hitting a lot of deadlocks, consider breaking the transaction up into smaller ones and checking their status in the application as you go: TRAN 1 - OK Y/N? If Y, send TRAN 2 - OK Y/N? etc.
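A minimal sketch of that "small transactions, checked as you go" idea, using hypothetical tables (dbo.Orders, dbo.Shipments):

    -- TRAN 1: a small unit of work, committed on its own.
    BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'RESERVED' WHERE OrderId = 1001;
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION;
        RETURN;   -- report failure to the application; do not send TRAN 2
    END
    COMMIT TRANSACTION;

    -- TRAN 2 is only submitted once the application has seen TRAN 1 succeed.
    BEGIN TRANSACTION;
    UPDATE dbo.Shipments SET Status = 'QUEUED' WHERE OrderId = 1001;
    COMMIT TRANSACTION;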

That said, in my many years as a DBA as well as a developer (of multi-user database apps measuring thousands of concurrent users), I have never found deadlocks to be such a massive problem that I needed special knowledge of it (or to change isolation levels willy-nilly, etc.).

+13
Sep 22 '08 at 2:34

There is no magic general-purpose solution to this problem that works in practice. You can push concurrency handling up into the application, but that can be very difficult, especially if you need to coordinate with other programs running in separate memory spaces.

General answers to reduce deadlock opportunities:

  • Basic query optimization (correct index use), hotspot-avoiding design, keeping transactions as short as possible, etc.

  • Whenever possible, set reasonable query timeouts so that if a deadlock does occur, it clears itself once the timeout period expires.

  • Deadlocks in MSSQL often arise from its default read-concurrency model, so it is very important not to depend on it - assume Oracle-style MVCC in all designs. Use snapshot isolation or, if possible, the READ UNCOMMITTED isolation level (a minimal sketch follows after this list).
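A minimal sketch of enabling the MVCC-style options, assuming a database named MyDb:

    -- One-time database settings.
    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;   -- allows per-transaction SNAPSHOT
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;    -- row-versioned reads by default

    -- A session can then opt in explicitly:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;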

+3
Feb 08 '09 at 5:36

I believe the following read/write pattern is deadlock-free, given some limitations:

Limitations:

  • Single table
  • Reads and writes go through an index or the PK, so the engine does not take table locks.
  • A batch of records can be read with a single SQL WHERE clause.
  • Using SQL Server terminology.

Write cycle:

  • All writes are performed in a single Read Committed transaction.
  • The first update in the transaction touches a specific, always-present record within each update group.
  • Multiple writes can then be made in any order. (They are "gated" by the write to that first record.) A T-SQL sketch of this write cycle appears at the end of this answer.

Read cycle:

  • Default transaction isolation level (Read Committed).
  • No explicit transaction.
  • Records are read with a single SELECT statement.

Benefits:

  • Competing write cycles are blocked at the first-record write until the first write transaction completes entirely.
  • Reads are blocked/queued/executed atomically between write transactions.
  • Transaction-level consistency is achieved without resorting to SERIALIZABLE.

I need this to work this way, so please comment/correct!!
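Here is a minimal T-SQL sketch of the write cycle described above; the tables, columns, and values (dbo.UpdateGroups, dbo.Items, etc.) are made up for illustration:

    DECLARE @GroupId INT, @WriterId INT, @ItemA INT, @ItemB INT;
    SELECT @GroupId = 1, @WriterId = 7, @ItemA = 10, @ItemB = 11;

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    BEGIN TRANSACTION;

    -- 1. Gate write: touch the always-present header row of the update group first.
    --    A competing writer for the same group blocks here until this transaction ends.
    UPDATE dbo.UpdateGroups
    SET LastWriterId = @WriterId, UpdatedAt = GETDATE()
    WHERE GroupId = @GroupId;

    -- 2. The remaining writes in the group can now be issued in any order.
    UPDATE dbo.Items SET Quantity = Quantity + 1 WHERE ItemId = @ItemA;
    UPDATE dbo.Items SET Quantity = Quantity - 1 WHERE ItemId = @ItemB;

    COMMIT TRANSACTION;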

+2
Nov 14 '11 at 19:57

As you said, always accessing the tables in the same order is a very good way to avoid deadlocks. Also, minimize the size of your transactions.
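A sketch of the consistent-order convention with hypothetical tables: every transaction that touches both tables writes dbo.Accounts first and dbo.AuditLog second, never the reverse.

    DECLARE @From INT;
    SET @From = 1;

    BEGIN TRANSACTION;
    -- Always Accounts first...
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = @From;
    -- ...then AuditLog, in every code path that touches both.
    INSERT INTO dbo.AuditLog (AccountId, Delta) VALUES (@From, -100);
    COMMIT TRANSACTION;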

Another good trick is to combine two SQL statements into one whenever you can. Single statements are always transactional. For example, use "UPDATE ... SELECT" or "INSERT ... SELECT", and use "@@ERROR" and "@@ROWCOUNT" instead of "SELECT COUNT" or "IF (EXISTS ...)".
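For example (hypothetical dbo.Inventory table), the existence/quantity check and the write happen in one atomic statement instead of an IF EXISTS followed by a separate UPDATE:

    DECLARE @ProductId INT, @Qty INT;
    SELECT @ProductId = 42, @Qty = 5;

    UPDATE dbo.Inventory
    SET Quantity = Quantity - @Qty
    WHERE ProductId = @ProductId
      AND Quantity >= @Qty;        -- check and write in the same statement

    IF @@ROWCOUNT = 0
        PRINT 'No row updated: product missing or not enough stock.';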

Finally, make sure your calling code can handle deadlocks by resubmitting the query a preset number of times. Sometimes it just happens, it is normal behaviour, and your application must be able to deal with it.
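A minimal retry sketch in T-SQL (the same idea applies in the calling .NET code); error 1205 is the "chosen as deadlock victim" error, and dbo.Counters is a made-up table:

    DECLARE @Retries INT;
    SET @Retries = 3;

    WHILE @Retries > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            -- Hypothetical unit of work that occasionally deadlocks.
            UPDATE dbo.Counters SET Value = Value + 1 WHERE CounterId = 1;
            COMMIT TRANSACTION;
            BREAK;                                   -- success: leave the retry loop
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
            IF ERROR_NUMBER() = 1205 AND @Retries > 1
                SET @Retries = @Retries - 1;         -- deadlock victim: try again
            ELSE
            BEGIN
                DECLARE @Msg NVARCHAR(2048);
                SET @Msg = ERROR_MESSAGE();
                RAISERROR(@Msg, 16, 1);              -- not a deadlock, or out of retries
                BREAK;
            END
        END CATCH
    END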

+1
Sep 21 '08 at 18:57

If you have enough control over your application, restrict your updates/inserts to specific stored procedures and remove update/insert privileges from the database roles used by the application (only explicitly allow updates through those stored procedures).
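A sketch of the permission setup, with a hypothetical role, table, and procedure name:

    -- The application role cannot touch the table directly...
    DENY INSERT, UPDATE, DELETE ON dbo.Orders TO AppRole;
    -- ...it may only write through the approved stored procedure.
    GRANT EXECUTE ON dbo.usp_UpdateOrder TO AppRole;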

Isolate your database connections to a specific class in your application (every connection must come from this class) and specify that "query only" connections set the READ UNCOMMITTED isolation level ... the equivalent of a NOLOCK on every select.
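On such a "query only" connection that might look like this (hypothetical table):

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;   -- like NOLOCK on every SELECT

    SELECT OrderId, Status
    FROM dbo.Orders
    WHERE CustomerId = 42;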

This way you isolate the activities that can cause locks (to specific stored procedures) and take the "simple reads" out of the "locking loop".

+1
Sep 22 '08 at 2:49

In addition to acquiring locks in a consistent order, another path is the explicit use of locking and isolation hints to reduce the time/resources wasted unintentionally acquiring locks, such as shared-intent locks during reads.
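One common instance of this, sketched with a hypothetical dbo.Accounts table: take an UPDLOCK on the row you intend to modify, so the later shared-to-exclusive lock conversion cannot deadlock against another session doing the same read-then-update.

    DECLARE @AccountId INT, @Balance MONEY;
    SET @AccountId = 1;

    BEGIN TRANSACTION;

    -- Take an update lock up front instead of a plain shared lock.
    SELECT @Balance = Balance
    FROM dbo.Accounts WITH (UPDLOCK, ROWLOCK)
    WHERE AccountId = @AccountId;

    UPDATE dbo.Accounts SET Balance = @Balance - 100 WHERE AccountId = @AccountId;

    COMMIT TRANSACTION;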

+1
Sep 22 '08 at 5:05

Something that no one has mentioned (surprisingly) is that, where SQL Server is concerned, many locking problems can be eliminated with the right set of covering indexes for the database's query workload. Why? Because covering indexes can drastically reduce the number of bookmark lookups into a table's clustered index (assuming it is not a heap), thereby reducing contention and locking.
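For instance (hypothetical dbo.Orders table and query), a covering index includes every column the hot query needs so the query never has to reach back into the clustered index:

    CREATE NONCLUSTERED INDEX IX_Orders_Customer_Covering
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (Status, Total);
    -- A query selecting Status, Total filtered by CustomerId and ordered by OrderDate
    -- is now answered entirely from this index.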

+1
Nov 03 '08 at 1:23

The quick answer is no, there is no guaranteed technique.

I don't see how you could make any application deadlock-proof in general as a design principle if it has any non-trivial throughput. If you pre-emptively lock all the resources you could potentially need in a process in the same order, even if you end up not needing them, you risk the more expensive problem where the second process waits to acquire the first lock it needs, and your availability suffers. And as the number of resources in your system grows, even trivial processes have to lock them all in the same order to prevent deadlocks.

The best way to solve SQL deadlock problems, like most performance and availability issues, is to look at the workload in the profiler and understand the behavior.

0
Sep 22 '08 at 3:43

Not a direct answer to your question, but food for thought:

http://en.wikipedia.org/wiki/Dining_philosophers_problem

The Dining Philosophers problem is an old thought experiment to study the problem of deadlocks. Reading about this can help you find a solution to your specific circumstances.

0
Sep 22 '08 at 5:55
