There are many odd ways for this error to occur. I will try to describe one or two, in the hope that the analogy holds for whoever reads this at some point.
On large datasets, even with innodb_buffer_pool_size raised to a larger value, you can hit this error if there is no suitable index to isolate the rows in the WHERE clause. Or, in some cases, even with the primary index (see this) and a comment by Roger Gamman:
From the MySQL 5.0 documentation for InnoDB:
If you do not have indexes suitable for your statement and MySQL must scan the entire table to process the statement, every row of the table becomes locked, which in turn blocks all inserts by other users to the table. It is important to create good indexes so that your queries do not unnecessarily scan many rows.
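To check whether a statement is headed for that full scan before running it, here is a minimal sketch using EXPLAIN (it works against the students table defined below; EXPLAIN on an UPDATE requires MySQL 5.6 or later):

explain update students set thing=id where thing!=id;
-- type = ALL with key = NULL in the plan means a full table scan,
-- i.e. every row InnoDB touches will be locked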
To visualize how this error can occur, and why it can be hard to resolve, consider this simple schema:
CREATE TABLE `students` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `thing` int(11) NOT NULL,
  `campusId` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `ix_stu_cam` (`campusId`)
) ENGINE=InnoDB;
A table with 50 million rows. Foreign keys are not shown; they are not the issue. The table's query performance is not the point at first. However, when initializing thing = id in blocks of 1M rows, I had to cap the block size during the update to avoid other problems, using:
update students set thing=id where thing!=id order by id desc limit 1000000;
Everything was fine until about 600,000 rows remained to be updated, as seen from:
select count(*) from students where thing!=id;
The reason I ran that count(*) was that I kept getting:
ERROR 1206 (HY000): The total number of locks exceeds the lock table size
I could have shrunk the LIMIT shown in the update above, but eventually I would still be left with, say, 1200 rows where thing != id in the count, and the problem would simply continue.
Why would it continue? Because the system fills the lock table while scanning this large table. Sure, that "internal implicit transaction" might have set those last 1200 rows equal, but once the lock table fills up, the transaction is in fact rolled back with nothing set. And the process would hang.
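You can actually watch the lock table fill from a second session while such an UPDATE runs; a minimal sketch, assuming the standard information_schema columns available in MySQL 5.5 and later:

-- run in another session during the UPDATE: these counters grow with
-- the number of rows scanned, not the number of rows actually changed
select trx_id, trx_rows_locked, trx_lock_structs, trx_lock_memory_bytes
from information_schema.INNODB_TRX;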
Illustration 2:
In this illustration, say I have 288 rows left to update out of the 50 million row table above. Because of the endgame problem just described, running this query twice would often trigger the error:
update students set thing=id where thing!=id order by id desc limit 200;
But I would have no problem with these:
update students set thing=id where thing!=id order by id desc limit 200;
update students set thing=id where thing!=id order by id desc limit 88;
Solution
There are many ways to solve this problem, including but not limited to:
A. Add another index, on a column that flags the remaining data, perhaps a boolean, and include it in the WHERE clause. On huge tables, though, creating several temporary indexes can be a concern (see the first sketch after this list).
B. Populate a second table with the ids not yet cleared, combined with an UPDATE with JOIN pattern (see the second sketch after this list).
C. Dynamically change the LIMIT value so as not to overflow the lock table. The overflow can occur when there are simply no more rows left for your UPDATE or DELETE to find: the LIMIT has not been reached, and the lock table fills up with fruitless scans for rows that simply do not exist (see Illustration 2 above, and the third sketch after this list).
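First, a minimal sketch of approach A, assuming a hypothetical flag column named needs_fix (the column and index names are mine, not part of the original schema):

alter table students
  add column needs_fix tinyint(1) not null default 1,
  add key ix_needs_fix (needs_fix);

update students set thing=id, needs_fix=0 where needs_fix=1 limit 1000000;
-- once few flagged rows remain, the index lets InnoDB find them
-- directly instead of scanning (and locking) the whole tail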
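Second, a sketch of approach B with a hypothetical staging table todo_ids; joining on the primary key means the final update locates rows by id instead of scanning:

create table todo_ids (id int(11) not null, primary key (id)) engine=InnoDB;

-- the staging insert can itself be chunked if this scan is too heavy
insert into todo_ids (id)
  select id from students where thing!=id;

update students s
join todo_ids t on t.id = s.id
set s.thing = s.id;

drop table todo_ids;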
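Finally, a sketch of approach C: size the last chunk to the number of rows actually remaining, so the UPDATE never scans past the tail. LIMIT cannot take an expression directly, hence the prepared statement:

-- the count is a plain consistent read, so it takes no row locks
set @remaining = (select count(*) from students where thing!=id);
set @chunk = least(@remaining, 1000000);
set @sql = concat('update students set thing=id ',
                  'where thing!=id order by id desc limit ', @chunk);
prepare stmt from @sql;
execute stmt;
deallocate prepare stmt;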
The main point of this answer is to make clear why this happens, so that any reader can craft an endgame solution that fits their needs (as opposed to the old routine of pointless changes to system variables, restarts, and prayers).