Why is using cursors in SQL Server considered bad practice?

I knew of some performance reasons back in the SQL Server 7 days, but do the same problems still exist in SQL Server 2005? If I have a result set in a stored procedure that I want to act on row by row, are cursors still a bad choice? If so, why?

+49
sql-server sql-server-2005 cursor
Sep 12 '08 at 1:52
11 answers

Because cursors occupy memory and create locks.

What you are really doing is attempting to force set-based technology into non-set-based functionality. And, in all fairness, I should point out that cursors do have a use, but they are frowned upon because many folks who are not used to set-based solutions reach for cursors instead of working out the set-based solution.

When you open a cursor, you are basically loading those rows into memory and locking them, creating potential blocks. Then, as you loop through the cursor, you are making changes to other tables while still keeping all of the memory and locks of the cursor open.

All of which has the potential to cause performance problems for other users.

So, as a general rule, cursors are frowned upon, especially if a cursor was the first solution you arrived at for the problem.
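To make the mechanics concrete, here is the shape such a cursor loop typically takes (a sketch only; the Employees table and its columns are hypothetical). Every row fetched stays under the cursor's memory and locking overhead until the final DEALLOCATE:

```sql
-- Sketch of a typical row-by-row cursor loop (dbo.Employees is hypothetical)
DECLARE @Id INT, @Name VARCHAR(100);

DECLARE emp_cursor CURSOR LOCAL FOR
    SELECT Id, Name FROM dbo.Employees;

OPEN emp_cursor;   -- rows are materialized and potentially locked here
FETCH NEXT FROM emp_cursor INTO @Id, @Name;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- per-row work happens here, holding the cursor's resources the whole time
    PRINT @Name;
    FETCH NEXT FROM emp_cursor INTO @Id, @Name;
END

CLOSE emp_cursor;
DEALLOCATE emp_cursor;   -- releases the cursor's memory and locks
```

Everything between OPEN and DEALLOCATE runs with those resources held, which is why long per-row work inside the loop hurts other users.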

+88
Sep 12 '08 at 2:00

The above comments that SQL is a set-based environment are all true. However, there are times when row-by-row operations are useful. Consider the combination of metadata and dynamic SQL.

As a very simple example, say I have 100+ records in a table that define the names of tables I want to copy/truncate/whatever. Which is better? Hard-coding the SQL to do what I need? Or iterating over that result set and using dynamic SQL (sp_executesql) to perform the operations?

There is no way to achieve the above goal with purely set-based SQL.

So, should you use cursors or a WHILE loop (a pseudo-cursor)?

SQL cursors are fine as long as you use the right options:

INSENSITIVE will make a temporary copy of your result set (saving you from having to do this yourself for your pseudo-cursor).

READ_ONLY will ensure no locks are held on the underlying result set. Changes in the underlying result set will be reflected in subsequent fetches (same as if you were getting a fresh TOP 1 from your pseudo-cursor).

FAST_FORWARD will create an optimized forward-only, read-only cursor.

Read up on the available options before dismissing all cursors as evil.
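A sketch of the metadata scenario described above, with the options applied (dbo.TablesToTruncate is a hypothetical metadata table; adapt the names to your schema). FAST_FORWARD keeps the cursor as cheap as a cursor gets:

```sql
-- Hedged sketch: iterate a metadata table and run dynamic SQL per row.
-- dbo.TablesToTruncate(TableName) is a hypothetical metadata table.
DECLARE @TableName SYSNAME, @sql NVARCHAR(MAX);

DECLARE meta_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT TableName FROM dbo.TablesToTruncate;

OPEN meta_cursor;
FETCH NEXT FROM meta_cursor INTO @TableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- QUOTENAME guards the generated statement against odd table names
    SET @sql = N'TRUNCATE TABLE ' + QUOTENAME(@TableName) + N';';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM meta_cursor INTO @TableName;
END

CLOSE meta_cursor;
DEALLOCATE meta_cursor;
```

Since each iteration issues a different statement, this is one of the cases a single set-based statement cannot cover.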

+18
Jun 11 '10 at 15:53

There is a workaround for cursors that I use whenever I need one.

I create a table variable with an identity column in it.

I insert all the data I need to work with.

Then I write a WHILE block with a counter variable and select the row I want from the table variable, using a SELECT statement where the identity column equals the counter.

This way I don't lock anything and I use much less memory, and it is safe: I won't lose anything to memory corruption or the like.

And the block of code is easy to read and maintain.

This is a simple example:

 DECLARE @TAB TABLE(ID INT IDENTITY, COLUMN1 VARCHAR(10), COLUMN2 VARCHAR(10))
 DECLARE @COUNT INT, @MAX INT, @CONCAT VARCHAR(MAX), @COLUMN1 VARCHAR(10), @COLUMN2 VARCHAR(10)

 SET @COUNT = 1

 INSERT INTO @TAB VALUES('TE1S', 'TE21')
 INSERT INTO @TAB VALUES('TE1S', 'TE22')
 INSERT INTO @TAB VALUES('TE1S', 'TE23')
 INSERT INTO @TAB VALUES('TE1S', 'TE24')
 INSERT INTO @TAB VALUES('TE1S', 'TE25')

 SELECT @MAX = @@IDENTITY

 WHILE @COUNT <= @MAX
 BEGIN
     SELECT @COLUMN1 = COLUMN1, @COLUMN2 = COLUMN2
     FROM @TAB
     WHERE ID = @COUNT

     IF @CONCAT IS NULL
         SET @CONCAT = ''
     ELSE
         SET @CONCAT = @CONCAT + ','

     SET @CONCAT = @CONCAT + @COLUMN1 + @COLUMN2
     SET @COUNT = @COUNT + 1
 END

 SELECT @CONCAT
+10
Jul 20

I think cursors get a bad name because SQL newbies discover them and think, "Hey, a for loop! I know how to use those!" and then they go on using them for everything.

If you use them for what they are designed for, I can't fault that.

+9
Sep 12 '08 at 1:57

SQL is a set-based language, and set-based work is what it does best.

I think cursors are still a poor choice unless you understand enough about them to justify their use in limited circumstances.

Another reason I don't like cursors is clarity. The cursor block is so ugly that it is difficult to use in a clear and effective way.

All that said, there are some cases where a cursor really is the best choice; they just usually aren't the cases that beginners want to use them for.

+8
Sep 12 '08 at 1:59

Sometimes the nature of the processing you need to perform requires cursors, though for performance reasons it is almost always better to write the operation(s) using set-based logic if possible.

I would not call it "bad practice" to use cursors, but they do consume more resources on the server (than an equivalent set-based approach), and more often than not they are unnecessary. Given that, my advice is to consider the other options before resorting to a cursor.

There are several types of cursors (forward-only, static, keyset, dynamic). Each has different performance characteristics and associated overhead. Make sure you use the correct cursor type for your operation. The default is forward-only.

One argument for using a cursor is when you need to process and update individual rows, especially for a dataset that has no good unique key. In that case you can use the FOR UPDATE clause when declaring the cursor and apply the updates with UPDATE ... WHERE CURRENT OF.
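A sketch of that FOR UPDATE / WHERE CURRENT OF pattern (dbo.WorkQueue and its Payload column are hypothetical names for illustration):

```sql
-- Sketch: positioned updates via an updatable cursor.
-- dbo.WorkQueue(Payload) is a hypothetical table with no good unique key.
DECLARE @Payload VARCHAR(100);

DECLARE q_cursor CURSOR LOCAL FOR
    SELECT Payload FROM dbo.WorkQueue
    FOR UPDATE OF Payload;

OPEN q_cursor;
FETCH NEXT FROM q_cursor INTO @Payload;

WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.WorkQueue
    SET Payload = UPPER(@Payload)
    WHERE CURRENT OF q_cursor;   -- targets exactly the row just fetched
    FETCH NEXT FROM q_cursor INTO @Payload;
END

CLOSE q_cursor;
DEALLOCATE q_cursor;
```

WHERE CURRENT OF lets you address the fetched row by cursor position, which is precisely what a set-based WHERE clause cannot do when the rows have no usable key.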

Note that server-side cursors used to be popular (in ODBC and OLE DB), but ADO.NET does not support them, and AFAIK it never will.

+4
Sep 12 '08 at 2:04

There are very, very few cases where the use of a cursor is justified. There are almost no cases where it will outperform a relational, set-based query. Sometimes it is easier for a programmer to think in terms of loops, but using set logic, for example to update a large number of rows in a table, will yield a solution that is not only far fewer lines of SQL code, but that runs much faster, often by several orders of magnitude.

Even the fast-forward cursor in SQL Server 2005 cannot compete with set-based queries. The performance degradation curve often starts to look like an n^2 operation compared to the set-based alternative, which tends to stay closer to linear as the data set grows very large.
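To make the contrast concrete, a single set-based statement like the following (table and column names are hypothetical) replaces an entire fetch-and-update loop with one pass over the data:

```sql
-- One set-based statement instead of a row-by-row update loop.
-- dbo.Orders, Status and OrderDate are hypothetical names.
UPDATE o
SET    o.Status = 'Archived'
FROM   dbo.Orders AS o
WHERE  o.OrderDate < DATEADD(YEAR, -1, GETDATE());
```

The optimizer can choose a scan or an index seek and apply the change in bulk, which is where the orders-of-magnitude difference over a cursor loop comes from.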

+4
Sep 12 '08 at 2:08

@Daniel P → you don't need a cursor for that. You can easily do it with set-based logic. For example, with SQL 2008:

 DECLARE @commandname NVARCHAR(1000) = '';

 SELECT @commandname += 'truncate table ' + tablename + '; '
 FROM tableNames;

 EXEC sp_executesql @commandname;

will do exactly what you described above. And you can do the same with SQL 2000, although the query syntax will be slightly different.

However, my advice is to avoid cursors as much as possible.

Gayam

+3
May 19 '11 at 1:18

Cursors do have their place, but I think their bad reputation comes mainly from their being used where a single SELECT statement would suffice to aggregate and filter the results.

Avoiding cursors also allows SQL Server to optimize the performance of the query more fully, which is very important on larger systems.

+2
Sep 12 '08 at 2:01

Cursors are usually not the disease but a symptom of it: not using a set-based approach (as mentioned in the other answers).

Not understanding this problem, and simply believing that avoiding the "evil" cursor will solve it, can make things worse.

For example, replacing cursor iteration with other iterative code, such as moving the data into temporary tables or table variables and looping over the rows like this:

 SELECT * FROM @temptable WHERE Id=@counter 

or

 SELECT TOP 1 * FROM @temptable WHERE Id>@lastId 

This approach, as shown in the code of another answer, makes things much worse and does not fix the original problem. It is an anti-pattern called cargo cult programming: not knowing WHY something is bad, and thus introducing something worse to avoid it! I recently changed such code (using a #temptable and no index on the identity/PK) back to a cursor, and updating slightly more than 10,000 rows took only 1 second instead of almost 3 minutes. Still lacking the set-based approach (which would be the lesser evil), but it was the best I could do at that point.

Another sign of this lack of understanding is what I sometimes call the "single-object disease": database applications that handle individual objects through data access layers or object-relational mappers. Typically code like:

 var items = new List<Item>();
 foreach (int oneId in itemIds)
 {
     items.Add(dataAccess.GetItemById(oneId));
 }

instead of

 var items = dataAccess.GetItemsByIds(itemIds); 

The former typically floods the database with a multitude of SELECTs, one round trip for each, especially when object trees/graphs come into play, and leads to the notorious SELECT N+1 problem.

It is the application-side counterpart of not understanding relational databases and the set-based approach, just as cursors are on the database side in procedural code such as T-SQL or PL/SQL!

+2
Jun 21 '15 at 12:54

The main problem, I think, is that databases are designed and tuned for set-based operations: selecting, updating and deleting large amounts of data in a single quick step based on relationships in the data.

In-memory software, on the other hand, is designed for individual operations, so looping through a data set and performing various operations on each item in turn is what it does best.

Looping is not what the database or its storage architecture is designed for, and even in SQL Server 2005 you will not get anywhere near the performance you get by pulling the underlying data out into a custom program and doing the looping in memory, using data objects/structures that are as lightweight as possible.

+1
Sep 12 '08 at 1:59
