"The total number of locks exceeds the size of the lock table" when deleting 267 records

I am trying to delete 267 records out of approximately 40 million. The query looks like this:

delete from pricedata where pricedate > '20120413' 

pricedate is a char(8) field.

I know about changing innodb_buffer_pool_size , but if I can run

 select * from pricedata where pricedate > '20120413' 

and get 267 records back with no errors, why does it choke when deleting?

And if changing innodb_buffer_pool_size does not help, what should I do?

+6
mysql locking sql-delete innodb
Apr 20
3 answers

What worked: changing innodb_buffer_pool_size to 256M (see the comments on Quassnoi's answer).

+2
Apr 21

It seems that you do not have an index on pricedate (or MySQL is not using that index for some reason).

With REPEATABLE READ (the default transaction isolation level), InnoDB places shared locks on the records it reads and filters out, and it seems there is not enough room in the lock table for 40M locks.

To work around this problem, use one of these solutions:

  • Create an index on pricedate if one does not exist (this may take some time)

  • Break the query into smaller chunks:

     DELETE FROM pricedata WHERE pricedate > '20120413' AND id BETWEEN 1 AND 1000000;
     DELETE FROM pricedata WHERE pricedate > '20120413' AND id BETWEEN 1000001 AND 2000000;

    and so on (adjust the id ranges as needed). Note that each statement must run in its own transaction (remember to COMMIT after each statement if AUTOCOMMIT is turned off).

  • Run the DELETE with transaction isolation level READ COMMITTED . This makes InnoDB release the locks on records as soon as they are read and found not to match. It will not work if you use statement-based binary logging and disallow binlogging of unsafe statements (which is the default).
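
Sketched out, the first and third workarounds look like this (table and column names are taken from the question; the index name is a placeholder of my own, not from the original answer):

```sql
-- Workaround 1: an index on pricedate lets InnoDB find and lock
-- only the 267 matching rows instead of scanning all 40M
CREATE INDEX idx_pricedate ON pricedata (pricedate);

-- Workaround 3: under READ COMMITTED, InnoDB releases the locks
-- on non-matching rows as soon as they are read
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
DELETE FROM pricedata WHERE pricedate > '20120413';
```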

+7
Apr 20

(A late answer, but still useful for anyone who finds this problem via Google.)

A solution that requires neither changing innodb_buffer_pool_size nor creating an index is to limit the number of rows deleted per statement.

So, in your case: DELETE FROM pricedata WHERE pricedate > '20120413' LIMIT 100; This removes 100 rows and leaves 167 behind, so you can run the same query again to delete another 100. The last 67 are tricky: when fewer rows remain than the specified limit, you get the lock error again, probably because the server keeps scanning for more matching rows to fill the batch. In that case, use LIMIT 67 to delete the final part. (Of course, you could have used LIMIT 267 from the start.)
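
As a minimal sketch of that batch approach (same table and cutoff as the question; ROW_COUNT() is MySQL's rows-affected counter for the last statement):

```sql
-- Repeat until no matching rows remain; shrink the LIMIT for the
-- final partial batch, as described above
DELETE FROM pricedata WHERE pricedate > '20120413' LIMIT 100;
SELECT ROW_COUNT();  -- number of rows the DELETE just removed
```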

And for those who like a script: here is an example I used in a bash script to purge old data:

    # Count the number of rows left to be deleted
    QUERY="select count(*) from pricedata where pricedate > '20120413';"
    AMOUNT=`${MYSQL} -u ${MYSQL_USER} -p${MYSQL_PWD} -e "${QUERY}" ${DB} | tail -1`
    ERROR=0
    while [ ${AMOUNT} -gt 0 -a ${ERROR} -eq 0 ]
    do
        ${LOGGER} "${AMOUNT} rows left to delete"
        # Use a smaller limit for the final partial batch
        if [ ${AMOUNT} -lt 1000 ]
        then
            LIMIT=${AMOUNT}
        else
            LIMIT=1000
        fi
        QUERY="delete low_priority from pricedata where pricedate > '20120413' limit ${LIMIT};"
        ${MYSQL} -u ${MYSQL_USER} -p${MYSQL_PWD} -e "${QUERY}" ${DB}
        STATUS=$?
        if [ ${STATUS} -ne 0 ]
        then
            ${LOGGER} "Cleanup failed for ${TABLE}"
            ERROR=1
        fi
        # Re-count to decide whether another batch is needed
        QUERY="select count(*) from pricedata where pricedate > '20120413';"
        AMOUNT=`${MYSQL} -u ${MYSQL_USER} -p${MYSQL_PWD} -e "${QUERY}" ${DB} | tail -1`
    done
+4
May 20 '14 at 5:54
