Deleting millions of rows in MySQL

I recently found and fixed a bug in a site I was working on that resulted in millions of duplicate rows of data in a table, a table that would be quite large even without them (still in the millions). I can easily find these duplicate rows and can run a single delete query to kill them all. The problem is that trying to delete this many rows in one shot locks up the table for a long time, which I would like to avoid if possible. The only ways I can see to get rid of these rows, without taking down the site (by locking up the table), are:

  1. Write a script that executes thousands of smaller delete queries in a loop. This will theoretically get around the locked-table issue, because other queries will be able to make it into the queue and run in between the deletes. But it will still put quite a load on the database and will take a long time to run.
  2. Rename the table and recreate the existing table (it will now be empty). Then do the cleanup on the renamed table. Rename the new table, name the old one back, and merge the new rows into the renamed table. This way takes considerably more steps, but should get the job done with minimal interruption. The only tricky part here is that the table in question is a reporting table, so once it is renamed out of the way and the empty one put in its place, all historical reports are gone until I put it back. Plus, the merging process could be a bit of a pain because of the type of data being stored. Overall, this is my likely choice right now. (A rough SQL sketch of this swap follows below.)
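
A rough SQL sketch of that second option, assuming for illustration that the reporting table is called reports (all names here are placeholders):

-- create an empty clone and atomically swap it in for the live table
CREATE TABLE reports_new LIKE reports;
RENAME TABLE reports TO reports_old, reports_new TO reports;

-- ...run the duplicate-removing DELETE against reports_old at leisure...

-- merge the rows that arrived in the meantime, then swap the cleaned table back
INSERT INTO reports_old SELECT * FROM reports;
RENAME TABLE reports TO reports_scratch, reports_old TO reports;
DROP TABLE reports_scratch;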

I was just wondering if anyone else has had this problem before and, if so, how you dealt with it without taking down the site and, hopefully, with minimal, if any, interruption to the users? If I go with option 2, or a different, similar approach, I can schedule the work to run late at night and do the merge early the next morning, and just let the users know ahead of time, so that's not a huge deal. I'm just looking to see if anyone has any ideas for a better, or easier, way to do the cleanup.

DELETE FROM `table`
WHERE (whatever criteria)
ORDER BY `id`
LIMIT 1000

Wash, rinse, repeat until zero rows affected. Maybe in a script that sleeps for a second or three between iterations.

Do it in batches of, let's say, 2000 rows at a time. Commit in between. A million rows isn't that much and this will be fast, unless you have many indexes on the table.

I'd also recommend adding some constraints to your table to make sure that this doesn't happen to you again. A million rows, at 1000 per shot, will take 1000 repetitions of a script to complete. If the script runs once every 3.6 seconds you'll be done in an hour. No worries. Your clients are unlikely to notice.
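
For the constraint suggestion, a minimal sketch, assuming the duplicates are defined by a pair of hypothetical columns (and run it only after the existing duplicates are gone, or the ALTER will fail):

ALTER TABLE `table`
    ADD UNIQUE KEY uniq_report (report_date, account_id);  -- hypothetical columns that define a "duplicate"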

I'd use mk-archiver from the excellent Maatkit utilities package (a bunch of Perl scripts for MySQL management). Maatkit is from Baron Schwartz, the author of the O'Reilly "High Performance MySQL" book.

The goal is a low-impact, forward-only job to nibble old data out of the table without impacting OLTP queries much. You can insert the data into another table, which need not be on the same server. You can also write it to a file in a format suitable for LOAD DATA INFILE. Or you can do neither, in which case it's just an incremental DELETE.

It's already built for archiving your unwanted rows in small batches and as a bonus, it can save the deleted rows to a file in case you screw up the query that selects the rows to remove.

No installation required, just grab http://www.maatkit.org/get/mk-archiver and run perldoc on it (or read the web site) for documentation.

According to the MySQL documentation, TRUNCATE TABLE is a fast alternative to DELETE FROM. Try this:

TRUNCATE TABLE table_name

I tried this on 50M rows and it was done within two mins.

Note: Truncate operations are not transaction-safe; an error occurs when attempting one in the course of an active transaction or active table lock. Also note that TRUNCATE removes every row in the table, so it only applies if you don't need to keep any of the existing data.

The following deletes 1,000,000 records, one at a time.

for i in `seq 1 1000`; do
    mysql -e "select id from table_name where (condition) order by id desc limit 1000 " | sed 's;/|;;g' | awk '{if(NR>1)print "delete from table_name where id = ",$1,";" }' | mysql;
done

You could group them together and do DELETE FROM table_name WHERE id IN (id1, id2, ..., idN) too, I'm sure, without much difficulty.
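
The grouped variant would emit statements like this (the ids are illustrative):

DELETE FROM table_name WHERE id IN (101, 102, 103, 104, 105);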

I had a use case of deleting 1M+ rows from a 25M+ row table in MySQL. I tried different approaches, like batch deletes (described above).
I've found that the fastest way is to copy the required records to a new table:

  1. Create a temporary table that holds just the ids:

CREATE TABLE id_temp_table ( temp_id int);

  2. Insert the ids that should be removed:

insert into id_temp_table (temp_id) select.....

  3. Create a new table table_new.

  4. Insert all records from table into table_new without the unnecessary rows that are in id_temp_table:

insert into table_new .... where table_id NOT IN (select distinct(temp_id) from id_temp_table);

  5. Rename the tables (see the sketch below).
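
A sketch of steps 3 and 5, assuming the original table is simply called `table`:

CREATE TABLE table_new LIKE `table`;   -- step 3: empty clone with the same schema

-- step 5: atomic swap once table_new has been populated, then drop the old data
RENAME TABLE `table` TO table_old, table_new TO `table`;
DROP TABLE table_old;   -- only once the copy has been verified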

The whole process took ~1 hr. In my use case, a simple delete of a batch of 100 records took 10 minutes.

For us, the DELETE WHERE %s ORDER BY %s LIMIT %d answer was not an option, because the WHERE criterion was slow (a non-indexed column) and would hit the master.

SELECT from a read-replica a list of primary keys that you wish to delete. Export with this kind of format:

00669163-4514-4B50-B6E9-50BA232CA5EB
00679DE5-7659-4CD4-A919-6426A2831F35
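
One way to produce such a file on the replica is SELECT ... INTO OUTFILE, which writes on the replica's own filesystem and needs the FILE privilege; the table, column, and cutoff below are placeholders:

SELECT id
FROM my_cool_table
WHERE created_at < '2018-01-01'
INTO OUTFILE '/tmp/ids.txt';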

Use the following bash script to grab this input and chunk it into DELETE statements [requires bash ≥ 4 because of mapfile built-in]:

sql-chunker.sh (remember to chmod +x me, and change the shebang to point to your bash 4 executable):

#!/usr/local/Cellar/bash/4.4.12/bin/bash


# Expected input format:
: <<!
00669163-4514-4B50-B6E9-50BA232CA5EB
00669DE5-7659-4CD4-A919-6426A2831F35
!


if [ -z "$1" ]
then
    echo "No chunk size supplied. Invoke: ./sql-chunker.sh 1000 ids.txt"
    exit 1
fi

if [ -z "$2" ]
then
    echo "No file supplied. Invoke: ./sql-chunker.sh 1000 ids.txt"
    exit 1
fi

function join_by {
    local d=$1
    shift
    echo -n "$1"
    shift
    printf "%s" "${@/#/$d}"
}

while mapfile -t -n "$1" ary && ((${#ary[@]})); do
    printf "DELETE FROM my_cool_table WHERE id IN ('%s');\n" `join_by "','" "${ary[@]}"`
done < "$2"

Invoke like so:

./sql-chunker.sh 1000 ids.txt > batch_1000.sql

This will give you a file with output formatted like so (I've used a batch size of 2):

DELETE FROM my_cool_table WHERE id IN ('006CC671-655A-432E-9164-D3C64191EDCE','006CD163-794A-4C3E-8206-D05D1A5EE01E');
DELETE FROM my_cool_table WHERE id IN ('006CD837-F1AD-4CCA-82A4-74356580CEBC','006CDA35-F132-4F2C-8054-0F1D6709388A');

Then execute the statements like so:

mysql --login-path=master billing < batch_1000.sql

For those unfamiliar with login-path, it's just a shortcut to log in without typing a password on the command line.

I faced a similar problem. We had a really big table, about 500 GB in size, with no partitioning and only one index on the primary_key column. Our master was a hulk of a machine, 128 cores and 512 GB of RAM, and we had multiple slaves too. We tried a few techniques to tackle the large-scale deletion of rows. I will list them all here, from worst to best, as we found them:

  1. Fetching and Deleting one row at a time. This is the absolute worst that you could do. So, we did not even try this.
  2. Fetching the first 'X' rows from the database using a limit query on the primary_key column, then checking the row ids to delete in the application and firing a single delete query with a list of primary_key ids. So, 2 queries per 'X' rows. Now, this approach was fine, but doing this using a batch job deleted about 5 million rows in 10 minutes or so, which caused the slaves of our MySQL DB to lag by 105 seconds. A 105-second lag for a 10-minute activity. So, we had to stop.
  3. In this technique, we introduced a 50 ms lag between our subsequent batch fetch and deletions of size 'X' each. This solved the lag problem but we were now deleting 1.2-1.3 million rows per 10 minutes as compared to 5 million in technique #2.
  4. Partitioning the database table and then dropping entire partitions when they are not needed. This is the best solution we found, but it requires a pre-partitioned table. We followed step 3 because we had a non-partitioned, very old table with indexing only on the primary_key column. Creating the partitions would have taken too much time and we were in crisis mode. Here are some links related to partitioning that I found helpful: Official MySQL Reference, Oracle DB daily partitioning.

So, IMO, if you can afford the luxury of creating partitions in your table, go for option #4; otherwise, you are stuck with option #3.
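
For reference, option #4 relies on the fact that dropping a whole partition does not scan rows. A rough sketch, assuming a table with a DATETIME column you can range-partition on (all names here are illustrative, the partitioning column must be part of every unique key, and repartitioning an existing huge table is itself an expensive rebuild):

ALTER TABLE big_table
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p2019_01 VALUES LESS THAN (TO_DAYS('2019-02-01')),
        PARTITION p2019_02 VALUES LESS THAN (TO_DAYS('2019-03-01')),
        PARTITION pmax     VALUES LESS THAN MAXVALUE
    );

-- removing an old month's data is then a metadata operation, not a row-by-row delete
ALTER TABLE big_table DROP PARTITION p2019_01;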

I think the slowness is due to MySQL's "clustered index", where the actual records are stored within the primary key index, in the order of the primary key index. This means access to a record via the primary key is extremely fast because it requires only one disk fetch: the record on disk is right where the index found the correct primary key.

In other databases without clustered indexes the index itself does not hold the record but just an "offset" or "location" indicating where the record is located in the table file and then a second fetch must be made in that file to retrieve the actual data.

You can imagine that when deleting a record in a clustered index (like MySQL uses) all records above that record in the index (=table) must be moved downwards to avoid massive holes being created in the index (well that is what I recall from a few years ago at least - version 8.x may have improved this issue).

Armed with knowledge of the above 'under the hood' operations, what we discovered really sped up deletes in MySQL 5.x was to perform the deletes in reverse order. This produces the least amount of record movement because you are deleting records from the end first, meaning that subsequent deletes have fewer records to relocate - logical, right?!
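
In practice that just means adding ORDER BY ... DESC to the batched delete shown earlier, for example:

DELETE FROM `table`
WHERE (whatever criteria)
ORDER BY `id` DESC
LIMIT 1000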

I have not scripted anything to do this, and doing it properly would absolutely require a script, but another option is to create a new, duplicate table and select all the rows you want to keep into it. Use a trigger to keep it up-to-date while this process completes. When it is in sync (minus the rows you want to drop), rename both tables in a transaction, so that the new one takes the place of the old. Drop the old table, and voila!

This (obviously) requires a lot of extra disk space, and may tax your I/O resources, but otherwise, can be much faster.
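
A rough sketch of the idea, assuming a made-up table mytable with just the columns (id, keep_this):

-- empty copy with the same schema
CREATE TABLE mytable_new LIKE mytable;

-- mirror new writes into the copy while the backfill below runs
CREATE TRIGGER mytable_sync AFTER INSERT ON mytable
    FOR EACH ROW INSERT INTO mytable_new VALUES (NEW.id, NEW.keep_this);

-- backfill the rows worth keeping; IGNORE skips anything the trigger already copied
INSERT IGNORE INTO mytable_new SELECT * FROM mytable WHERE keep_this = 1;

-- once the copy is in sync, swap atomically; dropping the old table also drops the trigger
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;
DROP TABLE mytable_old;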

Depending on the nature of the data, or in an emergency, you could rename the old table and create a new, empty table in its place, and select the "keep" rows into the new table at your leisure...

Here's the recommended practice:

rows_affected = 0
do {
    rows_affected = do_query(
        "DELETE FROM messages WHERE created < DATE_SUB(NOW(),INTERVAL 3 MONTH)
         LIMIT 10000"
    )
} while rows_affected > 0

Deleting 10,000 rows at a time is typically a large enough task to make each query efficient, and a short enough task to minimize the impact on the server (transactional storage engines might benefit from smaller transactions). It might also be a good idea to add some sleep time between the DELETE statements to spread the load over time and reduce the amount of time locks are held.

Reference: High Performance MySQL
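
A runnable MySQL version of that loop, written as a stored procedure; this is only a sketch, reusing the same messages table and created column as the snippet above:

DELIMITER //
CREATE PROCEDURE purge_old_messages()
BEGIN
    DECLARE affected INT DEFAULT 1;
    WHILE affected > 0 DO
        DELETE FROM messages
        WHERE created < DATE_SUB(NOW(), INTERVAL 3 MONTH)
        LIMIT 10000;
        SET affected = ROW_COUNT();  -- rows removed by the DELETE above
        DO SLEEP(1);                 -- spread the load between batches
    END WHILE;
END //
DELIMITER ;

CALL purge_old_messages();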

I had a really loaded database that needed to delete older entries all the time. Some of the delete queries started to hang, so I needed to kill them, and if there were too many deletes the whole database became unresponsive, so I needed to restrict the parallel runs. So I created a cron job running every minute that starts this script:

#!/bin/bash


#######################
#
i_size=1000
max_delete_queries=10
sleep_interval=15
min_operations=8
max_query_time=1000


USER="user"
PASS="super_secret_password"


log_max_size=1000000
log_file="/var/tmp/clean_up.log"
#
#######################


touch $log_file
log_file_size=`stat -c%s "$log_file"`
if (( $log_file_size > $log_max_size ))
then
    rm -f "$log_file"
fi

delete_queries=`mysql -u $USER -p$PASS -e "SELECT * FROM information_schema.processlist WHERE Command = 'Query' AND INFO LIKE 'DELETE FROM big.table WHERE result_timestamp %';" | grep Query | wc -l`

## -- here the hanging DELETE queries will be stopped
mysql -u $USER -p$PASS -e "SELECT ID FROM information_schema.processlist WHERE Command = 'Query' AND INFO LIKE 'DELETE FROM big.table WHERE result_timestamp %' AND TIME > $max_query_time;" | grep -v ID | while read -r id ; do
    echo "delete query stopped on `date`" >> $log_file
    mysql -u $USER -p$PASS -e "KILL $id;"
done

if (( $delete_queries > $max_delete_queries ))
then
    sleep $sleep_interval

    delete_queries=`mysql -u $USER -p$PASS -e "SELECT * FROM information_schema.processlist WHERE Command = 'Query' AND INFO LIKE 'DELETE FROM big.table WHERE result_timestamp %';" | grep Query | wc -l`

    if (( $delete_queries > $max_delete_queries ))
    then
        sleep $sleep_interval

        delete_queries=`mysql -u $USER -p$PASS -e "SELECT * FROM information_schema.processlist WHERE Command = 'Query' AND INFO LIKE 'DELETE FROM big.table WHERE result_timestamp %';" | grep Query | wc -l`

        # -- if there are too many delete queries after the second wait
        #    the table will be cleaned up by the next cron job
        if (( $delete_queries > $max_delete_queries ))
        then
            echo "clean-up skipped on `date`" >> $log_file
            exit 1
        fi
    fi
fi

running_operations=`mysql -u $USER -p$PASS -e "SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';" | wc -l`

if (( $running_operations < $min_operations ))
then
    # -- if the database is not too busy this bigger batch can be processed
    batch_size=$(($i_size * 5))
else
    batch_size=$i_size
fi

echo "starting clean-up on `date`" >> $log_file

mysql -u $USER -p$PASS -e 'DELETE FROM big.table WHERE result_timestamp < UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 31 DAY))*1000 limit '"$batch_size"';'

if [ $? -eq 0 ]; then
    # -- if the sql command exited normally the exit code will be 0
    echo "delete finished successfully on `date`" >> $log_file
else
    echo "delete failed on `date`" >> $log_file
fi

With this I've achieved around 2 million deletes per day, which was OK for my use case.

I faced a similar issue while deleting multiple records from a transaction table after moving them to an archival table.

I used a temporary table to identify the records to be deleted.

The temporary table, 'archive_temp', held the ids and was created in memory, without any indexes.

Hence, while deleting records from the original transaction table with, e.g., DELETE FROM tat WHERE id IN (SELECT id FROM archive_temp); the query kept returning the error "LOST Connection to server".

I created an index on that temporary table after creating it: ALTER TABLE archive_temp ADD INDEX( id);

After that, my delete query executed in seconds, irrespective of the number of records to be deleted from the transaction table.

Hence it would be better to check your indexes. Hope this helps.
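
A sketch of that flow (the transaction table is called tat here, as in the text above; the archival criterion is hypothetical):

CREATE TEMPORARY TABLE archive_temp (id INT) ENGINE=MEMORY;

-- collect the ids of the rows that have already been archived (criterion is made up)
INSERT INTO archive_temp SELECT id FROM tat WHERE created < '2019-01-01';

-- without this index the IN (...) delete below kept dropping the connection
ALTER TABLE archive_temp ADD INDEX (id);

DELETE FROM tat WHERE id IN (SELECT id FROM archive_temp);
DROP TEMPORARY TABLE archive_temp;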

I have had the same case earlier. There were more than 45 million duplicate rows stored during a database migration. Yeah, it happened. :)

What I did was:

  • Created a temporary table filtering only the unique rows
  • Truncated the original table
  • Inserted back to the original table from the temporary table.
  • After making sure the data is correct, I deleted the temporary table.

Overall, it took around 2.5 minutes I guess.

Example:

CREATE TABLE mytable_temp AS SELECT * FROM my_original_table WHERE my_condition;
TRUNCATE TABLE my_original_table;
INSERT INTO my_original_table  SELECT * FROM mytable_temp;

These queries delete a BIG TABLE in seconds:

CREATE TABLE <my_table_temp> LIKE <my_table>;

RENAME TABLE <my_table> TO <my_table_delete>;

RENAME TABLE <my_table_temp> TO <my_table>;

DROP TABLE <my_table_delete>;

Based on @rich's answer, I wrote this single-line command:

for i in {1..1000}; do mysql -vv --user=THE_USER --password=THE_PWD --host=YOUR_DB_HOST THE_DB_NAME -e "DELETE FROM THE_DB_NAME.THE_TABLE WHERE 'date' < NOW() - INTERVAL 4 MONTH LIMIT 10000;"; sleep 1; done;
  • -vv : displays the DELETE result, so I can check the deleted rows count
  • --host : I'm running the request in another server, so I have to define the mysql host address
  • 'date' : using single quotes (and not backticks) allowed me to escape the column name
  • NOW() - INTERVAL 4 MONTH : delete only old entries (more than 4 months)
  • sleep 1 : wait one second to avoid overloading the server