Error 1206 when trying to delete records from a table

I have a table with over 40 million records. I want to delete about 150,000 records using an SQL query:

DELETE FROM t WHERE date = '2013-11-24';

but I get error 1206 (the total number of locks exceeds the size of the lock table). I searched a lot and changed the size of the buffer pool:

 innodb_buffer_pool_size = 3G 

but it didn't work. I also tried locking the table, but that didn't work either:

 LOCK TABLES t WRITE; DELETE FROM t WHERE date = '2013-11-24'; UNLOCK TABLES; 

I know that one solution is to split the delete into batches, but I want that to be my last option. I am using MySQL Server on CentOS with 4 GB of RAM.

I would be grateful for any help.

sql mysql
Nov 24 '13 at 10:48
2 answers

You can use LIMIT with DELETE and remove the data in batches, for example 10,000 records at a time:

 DELETE FROM t WHERE date = '2013-11-24' LIMIT 10000; 

You can also add an ORDER BY clause so that the rows are deleted in a well-defined order:

 DELETE FROM t WHERE date = '2013-11-24' ORDER BY primary_key_column LIMIT 10000; 
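A minimal sketch of the batch-delete loop, driven from application code. It uses an in-memory SQLite table as a stand-in for the MySQL table (stock SQLite lacks DELETE ... LIMIT, so an id subquery emulates it); table and column names match the question, and the row counts are made up for the demo. The key points are the same: a bounded batch size, and a commit after each batch so locks are released before the next one.

```python
import sqlite3

# In-memory SQLite stand-in for the 40M-row MySQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, date TEXT)")
conn.executemany(
    "INSERT INTO t (date) VALUES (?)",
    [("2013-11-24",)] * 25000 + [("2013-11-25",)] * 5000,
)
conn.commit()

BATCH = 10000
deleted_total = 0
while True:
    # Emulates: DELETE FROM t WHERE date = ? ORDER BY id LIMIT ?
    cur = conn.execute(
        "DELETE FROM t WHERE id IN "
        "(SELECT id FROM t WHERE date = ? ORDER BY id LIMIT ?)",
        ("2013-11-24", BATCH),
    )
    conn.commit()  # commit each batch so locks do not pile up
    deleted_total += cur.rowcount
    if cur.rowcount < BATCH:  # last (partial or empty) batch
        break

print(deleted_total)  # 25000 rows removed in 3 batches
```

With a real MySQL connector the loop body would run the plain `DELETE ... LIMIT` form from the answer; only the driver calls change.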
Nov 24 '13 at 13:12

There are many odd ways to run into this error. I will describe one or two, and perhaps the analogy will help someone reading this later.

On large datasets, even after raising innodb_buffer_pool_size, you can still hit this error if there is no suitable index to isolate the rows matched by the WHERE clause — or, in some cases, even with the primary index (see this) and a comment by Roger Gamman:

From the MySQL 5.0 InnoDB documentation:

If you have no indexes suitable for your statement and MySQL must scan the entire table to process it, every row of the table becomes locked, which in turn blocks all inserts into the table by other users. It is important to create good indexes so that your queries do not needlessly scan many rows.

To show how this error can occur and be tricky to solve, consider this simple schema:

 CREATE TABLE `students` (
   `id` int(11) NOT NULL AUTO_INCREMENT,
   `thing` int(11) NOT NULL,
   `campusId` int(11) DEFAULT NULL,
   PRIMARY KEY (`id`),
   KEY `ix_stu_cam` (`campusId`)
 ) ENGINE=InnoDB;

A table with 50 million rows. FKs are not shown; they are not the issue. The table itself is unremarkable for query performance. However, when initializing thing = id in blocks of 1M rows, I had to apply a LIMIT during the block update to avoid other problems:

 UPDATE students SET thing = id WHERE thing != id ORDER BY id DESC LIMIT 1000000; -- 1 million 

Everything was fine until about 600,000 rows remained to be updated, as seen from:

 SELECT COUNT(*) FROM students WHERE thing != id; 

Why was I running that COUNT(*)? Because repeated attempts at the update kept raising

Error 1206: The total number of locks exceeds the lock table size

I could lower the LIMIT shown in the update above, but eventually I was left with, say, 1,200 rows where thing != id, and the problem just continued.

Why did it keep happening? Because the engine filled up the lock table while scanning this huge table. Granted, the implicit transaction may well have set those last 1,200 rows equal, but because the lock table overflowed, the transaction was rolled back with nothing changed. And the process would hang there.

Illustration 2:

In this example, say 288 rows of the 50-million-row table still qualify for the update above. Because of the endgame problem just described, I would often hit the error when running this query twice:

 UPDATE students SET thing = id WHERE thing != id ORDER BY id DESC LIMIT 200; 

But I would have no problem with this pair:

 UPDATE students SET thing = id WHERE thing != id ORDER BY id DESC LIMIT 200;
 UPDATE students SET thing = id WHERE thing != id ORDER BY id DESC LIMIT 88;

Solution

There are many ways to solve this problem, including but not limited to:

A. Add another index, perhaps on a boolean column flagging the rows to process, and include it in the WHERE clause. On huge tables, though, creating several temporary indexes is itself a question.

B. Populate a second table with the ids that have not yet been processed, combined with an UPDATE ... JOIN pattern.

C. Dynamically shrink the LIMIT value so as not to overflow the lock table. Overflow can occur when there simply are no more rows left for your UPDATE or DELETE, the LIMIT has not been reached, and the lock table fills up with fruitless scans for matches that no longer exist (see Illustration 2 above).
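A minimal sketch of option C, again with an in-memory SQLite stand-in for MySQL (stock SQLite lacks UPDATE ... LIMIT, so an id subquery emulates it); the schema mirrors the students example, and the 1,288-row count is invented for the demo. The point is to count the remaining rows first and never ask for more than actually remain, so the endgame batches do not scan and lock past the last match:

```python
import sqlite3

# In-memory stand-in for the students table; thing != id for every row.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE students (id INTEGER PRIMARY KEY, thing INTEGER NOT NULL)")
conn.executemany(
    "INSERT INTO students (id, thing) VALUES (?, 0)",
    [(i,) for i in range(1, 1289)],  # 1,288 rows pending
)
conn.commit()

BATCH = 200
while True:
    remaining = conn.execute(
        "SELECT COUNT(*) FROM students WHERE thing != id").fetchone()[0]
    if remaining == 0:
        break
    limit = min(BATCH, remaining)  # shrink LIMIT for the endgame batch
    # Emulates: UPDATE students SET thing = id
    #           WHERE thing != id ORDER BY id DESC LIMIT ?
    conn.execute(
        "UPDATE students SET thing = id WHERE id IN "
        "(SELECT id FROM students WHERE thing != id "
        " ORDER BY id DESC LIMIT ?)",
        (limit,),
    )
    conn.commit()  # release locks between batches
```

The final pass runs with LIMIT 88 instead of 200, mirroring the two-statement sequence in Illustration 2 that avoided the error.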

The main point of this answer is to make clear why this happens, and to let any reader craft an endgame solution that fits their needs (as opposed to rounds of futile system-variable changes, reboots, and prayer).

Jul 09 '16 at 13:23


