The fastest way to delete a huge MySQL table

I have a huge MySQL database (InnoDB) with millions of rows in the session table that were created by an unrelated, faulty crawler running on the same server as ours. Unfortunately, now I have to fix the mess.

If I try truncate table sessions; it seems to take far too long (more than 30 minutes). I don't care about the data; I just want the table destroyed as quickly as possible. Is there a faster way, or will I just have to stick it out for the night?

+44
mysql innodb
May 18, '09 at 19:22
11 answers

The fastest way is to use DROP TABLE to remove the table completely and then recreate it using the same definition. If the table has no foreign key constraints, that is what you should do.

If you use a version of MySQL greater than 5.0.3, this happens automatically with TRUNCATE. The manual also has useful details, including how TRUNCATE interacts with FK constraints: http://dev.mysql.com/doc/refman/5.0/en/truncate-table.html

EDIT: TRUNCATE is not the same as DROP or DELETE FROM. For those confused about the differences, see the manual link above. TRUNCATE acts like a DROP if possible (if there are no FKs); otherwise it acts like a DELETE FROM without a WHERE clause.
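
For reference, a minimal sketch of that approach, assuming the sessions table from the question; grab the definition before dropping, since you will need it to recreate the empty table:

 SHOW CREATE TABLE sessions;   -- copy the CREATE TABLE statement from the output
 DROP TABLE sessions;          -- near-instant when nothing else references the table
 -- ...then run the copied CREATE TABLE statement to get an empty table back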

+41
May 18 '09 at 19:28
source share

(Since this ended up high in the Google results, I thought a little more instruction might be handy.)

MySQL has a convenient way to create an empty table matching an existing one, plus an atomic table-rename command. Together, these make for a fast way to clear out the data:

 CREATE TABLE new_foo LIKE foo;
 RENAME TABLE foo TO old_foo, new_foo TO foo;
 DROP TABLE old_foo;

Done

+95
Jun 14 '11 at 21:00

Could you grab the schema, drop the table, and then recreate it?

+7
May 18, '09 at 19:24

The best way I found for this in MySQL is:

 DELETE from table_name LIMIT 1000; 

Or 10,000, depending on how fast each batch runs.

Put this in a loop until all rows are deleted.

Try it; it will take some time, but it really does work.
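
For anyone who wants to script that loop, here is a minimal sketch as a stored procedure; the procedure name and batch size are just examples, and the sessions table name is assumed from the question.

 DELIMITER //
 CREATE PROCEDURE purge_sessions_in_batches()
 BEGIN
   REPEAT
     -- Delete a small batch per iteration so each statement stays short.
     DELETE FROM sessions LIMIT 10000;
   UNTIL ROW_COUNT() = 0 END REPEAT;
 END //
 DELIMITER ;

 CALL purge_sessions_in_batches();
 DROP PROCEDURE purge_sessions_in_batches;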

+4
May 19, '09 at 3:11

drop table should be the fastest way to get rid of it.

+3
May 18 '09 at 19:25

Have you tried using "drop"? I have used it on tables over 20 GB and it always finishes in a few seconds.

+2
May 18, '09 at 19:25

If you just want to get rid of the table completely, why not simply drop it?

+1
May 18 '09 at 19:27

Truncation is fast, normally on the order of seconds or less. If it took 30 minutes, you probably have foreign keys referencing the table you are truncating. Locking issues may also be involved.

Truncate is fast precisely because it effectively drops the table, but you may have to remove the foreign key references first if you do not want the referencing tables cleaned out as well.
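
If you want to check whether foreign keys are the culprit, you can list the constraints that reference the table via information_schema; a quick sketch, assuming the sessions table from the question:

 SELECT table_name, constraint_name
 FROM information_schema.KEY_COLUMN_USAGE
 WHERE referenced_table_name = 'sessions';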

+1
May 19 '09 at 20:28

We ran into these problems too. We no longer use the database as a session store with Rails 2.x; we use the cookie store instead. Still, dropping the table is a worthwhile solution. You might also consider stopping the mysqld service, temporarily disabling logging, starting it up in safe mode, doing your drop/create there, and then re-enabling logging when you are done.

0
May 19 '09 at 2:08

I'm not sure why it takes so long. But maybe try renaming the table and creating a new empty one in its place. Then you can drop the "extra" table without worrying about how long that takes.

0
May 19 '09 at 20:39

searlea's answer is nice, but as stated in the comments, you lose the foreign keys in the process. This solution is similar: the truncate still executes within a second, but you keep the foreign keys.

The trick is that we disable and then re-enable the FK checks around the operation.

 SET FOREIGN_KEY_CHECKS=0;
 CREATE TABLE NewFoo LIKE Foo;
 INSERT INTO NewFoo SELECT * FROM Foo WHERE What_You_Want_To_Keep;
 TRUNCATE TABLE Foo;
 INSERT INTO Foo SELECT * FROM NewFoo;
 SET FOREIGN_KEY_CHECKS=1;



Extended answer - Delete all but some rows

My problem: due to a crazy script, my table had 7,000,000 unwanted rows. I needed to delete 99% of the data in that table, so I had to copy what I wanted to keep into a tmp table before deleting.

The Foo rows I need to keep depend on other tables, which reference Foo through foreign keys and are indexed.

Something like this:

 INSERT INTO NewFoo SELECT * FROM Foo WHERE ID IN (
     SELECT DISTINCT FooID FROM TableA
     UNION
     SELECT DISTINCT FooID FROM TableB
     UNION
     SELECT DISTINCT FooID FROM TableC
 );

But this query always timed out after an hour. So I had to do it like this instead:

 CREATE TEMPORARY TABLE tmpFooIDS ENGINE=MEMORY AS (SELECT DISTINCT FooID FROM TableA);
 INSERT INTO tmpFooIDS SELECT DISTINCT FooID FROM TableB;
 INSERT INTO tmpFooIDS SELECT DISTINCT FooID FROM TableC;
 INSERT INTO NewFoo SELECT * FROM Foo WHERE ID IN (SELECT FooID FROM tmpFooIDS);

Theory

Since the indexes were set up correctly, I thought both ways of filling NewFoo would perform about the same, but in practice they did not.

That is why in some cases you can do the following:

 SET FOREIGN_KEY_CHECKS=0;
 CREATE TABLE NewFoo LIKE Foo;

 -- Alternative way of keeping some data.
 CREATE TEMPORARY TABLE tmpFooIDS ENGINE=MEMORY AS (SELECT ID FROM Foo WHERE What_You_Want_To_Keep);
 INSERT INTO tmpFooIDS SELECT Foo.ID FROM Foo LEFT JOIN Bar ON Your_Join_Condition WHERE OtherStuff_You_Want_To_Keep_Using_Bar;

 INSERT INTO NewFoo SELECT * FROM Foo WHERE ID IN (SELECT ID FROM tmpFooIDS);

 TRUNCATE TABLE Foo;
 INSERT INTO Foo SELECT * FROM NewFoo;
 SET FOREIGN_KEY_CHECKS=1;
0
Mar 02 '18


