Zero deadlocks is basically an extremely expensive problem in the general case, because you need to know every object/table that each running transaction is going to read or modify (including SELECTs). The general philosophy is called ordered strict two-phase locking (not to be confused with plain two-phase locking) ( http://en.wikipedia.org/wiki/Two_phase_locking ; note that even 2PL does not guarantee freedom from deadlocks).
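A minimal sketch of the "ordered" idea, using two made-up tables (Accounts and AuditLog): if every transaction touches shared tables in the same agreed order, one transaction can wait on another, but no lock cycle can form between them on those tables.

```sql
-- Sketch only: Accounts and AuditLog are hypothetical tables.
-- Every transaction in the application agrees to touch Accounts first, then AuditLog.
BEGIN TRAN;
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;  -- resource 1 first
    INSERT INTO AuditLog (AccountID, Delta) VALUES (1, -100);         -- resource 2 second
COMMIT TRAN;

-- A second transaction written against the same order may block behind this one,
-- but it never holds AuditLog while waiting for Accounts, so no deadlock cycle forms.
```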
Very few DBMSs actually implement strict 2PL, because of the enormous performance hit it causes (there are no free lunches) while all your transactions wait around for even simple SELECT statements to execute.
Anyway, if this really interests you, look up SET TRANSACTION ISOLATION LEVEL in SQL Server. You can tune it as needed: http://en.wikipedia.org/wiki/Isolation_level
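For example, a quick sketch (the table name below is made up); the trade-off is between how much blocking you tolerate and how much read consistency you get:

```sql
-- Looser isolation for a report that can tolerate non-repeatable reads.
-- OrdersArchive is a hypothetical table.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- SQL Server's default

BEGIN TRAN;
    SELECT COUNT(*) FROM OrdersArchive WHERE ShippedAt >= '2008-01-01';
COMMIT TRAN;

-- Or, for work that must see a stable view and can pay the extra locking cost:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```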
For more information see wikipedia on Serializability: http://en.wikipedia.org/wiki/Serializability
However, a good analogy is source code revisions: check in early and often. Keep transactions small (in the number of SQL statements and the number of rows modified) and quick (short wall-clock time helps avoid collisions with others). It may be nice and tidy to do a lot of things in a single transaction, and in general I agree with that philosophy, but if you are hitting many deadlocks you can break the transaction up into smaller ones and then check their status in the application as you go: TRAN 1 - OK Y/N? If Y, send TRAN 2 - OK Y/N? And so on (see the sketch below).
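A rough sketch of that pattern, assuming hypothetical Orders and ShippingLog tables; each piece commits on its own, and the second piece is only sent if the first reports success:

```sql
-- Sketch only: Orders and ShippingLog are hypothetical tables.
DECLARE @OrderID INT;
SET @OrderID = 42;

-- TRAN 1: small and quick
BEGIN TRY
    BEGIN TRAN;
    UPDATE Orders SET Status = 'Shipped' WHERE OrderID = @OrderID;
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0 ROLLBACK TRAN;
    RAISERROR('TRAN 1 failed; do not send TRAN 2', 16, 1);
    RETURN;  -- stop here; the application should not send TRAN 2
END CATCH;

-- TRAN 2: sent only after TRAN 1 reported OK
BEGIN TRY
    BEGIN TRAN;
    INSERT INTO ShippingLog (OrderID, ShippedAt) VALUES (@OrderID, GETDATE());
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0 ROLLBACK TRAN;
    RAISERROR('TRAN 2 failed', 16, 1);
END CATCH;
```

The point is that each small transaction holds its locks for less time, so the window for colliding with other sessions shrinks, at the cost of the application having to track partial progress itself.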
As an aside, in my many years as a DBA and developer (of multi-user database applications measuring in the thousands of concurrent users), I have never found deadlocks to be such a massive problem that I needed special knowledge of them (or needed to change isolation levels willy-nilly, etc.).
Matt Rogish, Sep 22 '08 at 2:34