Mysqldump & gzip commands to properly create a compressed file of a MySQL database using crontab

I am running into a problem with crontab. I want to automatically back up a MySQL database.

The setup:

  • Debian GNU/Linux 7.3 (wheezy)
  • MySQL Server version: 5.5.33-0+wheezy1 (Debian)
  • The directories user, backup and backup2 have 755 permissions
  • The usernames for the MySQL database and the Debian account are the same

From the shell, this command works:

mysqldump -u user -p[user_password] [database_name] | gzip > dumpfilename.sql.gz

When I place it in the crontab using crontab -e:

* * * * * /usr/bin/mysqldump -u user -pupasswd mydatabase | gzip > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz >/dev/null 2>&1

A file is created in the /home/user/backup directory every minute, but it is 0 bytes.

However, when I also redirect the output into a second directory, backup2, I noticed that a properly compressed mysqldump file is created there. I cannot figure out what mistake I am making that produces a 0-byte file in the first directory and the expected output in the second.

* * * * * /usr/bin/mysqldump -u user -pupasswd my-database | gzip > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz >/home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz 2>&1

I hope you can give me an explanation.

Thanks


You can use the tee command to duplicate the output:

/usr/bin/mysqldump -u user -pupasswd my-database | \
tee >(gzip -9 -c > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz) | \
gzip > /home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz

See the tee documentation for details.
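Note that >( ... ) is process substitution, which bash, ksh and zsh support but plain POSIX sh does not, and cron runs jobs with /bin/sh by default. So if you put the tee pipeline straight into a crontab, you may need to select bash first. A sketch (the 3 a.m. schedule is made up for illustration; remember that % must be escaped as \% inside a crontab):

SHELL=/bin/bash
0 3 * * * /usr/bin/mysqldump -u user -pupasswd my-database | tee >(gzip -9 -c > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz) | gzip > /home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz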

First the mysqldump command is executed and its output is sent through the pipe, which feeds mysqldump's standard output into the gzip command as standard input. When several output redirection operators (>) follow one another, the shell opens and truncates each file in turn, but standard output ends up attached only to the last filename, which is therefore the only file the data is saved to; the earlier files are left empty.

For example, this command will dump the database and run it through gzip, but the data will finally land only in three.gz; one.gz and two.gz are created but left empty:

mysqldump -u user -pupasswd my-database | gzip > one.gz > two.gz > three.gz


$> ls -l
-rw-r--r--  1 uname  grp     0 Mar  9 00:37 one.gz
-rw-r--r--  1 uname  grp  1246 Mar  9 00:37 three.gz
-rw-r--r--  1 uname  grp     0 Mar  9 00:37 two.gz
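Applied to the question: in the first crontab entry, the redirection into /home/user/backup is immediately followed by >/dev/null, so the backup file is opened and truncated to 0 bytes while the actual data ends up in /dev/null. A minimal fix, assuming the trailing redirection was only meant to keep cron quiet, is to discard just stderr and leave stdout attached to the archive:

* * * * * /usr/bin/mysqldump -u user -pupasswd mydatabase | gzip > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz 2>/dev/null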

My original answer above is an example of redirecting the database dump to several compressed files at once (without double compressing). (I skimmed the question and seriously missed the point at first; sorry about that.)

This is an example of recompressing files:

mysqldump -u user -pupasswd my-database | gzip -c > one.gz; gzip -c one.gz > two.gz; gzip -c two.gz > three.gz


$> ls -l
-rw-r--r--  1 uname  grp  1246 Mar  9 00:44 one.gz
-rw-r--r--  1 uname  grp  1306 Mar  9 00:44 three.gz
-rw-r--r--  1 uname  grp  1276 Mar  9 00:44 two.gz

This is a good resource explaining I/O redirection: http://www.codecoffee.com/tipsforlinux/articles2/042.html

If you need to add a date-time to your backup file name (CentOS 7), use the following:

/usr/bin/mysqldump -u USER -pPASSWD DBNAME | gzip > ~/backups/db.$(date +%F.%H%M%S).sql.gz

This will create a file named like db.2017-11-17.231537.sql.gz
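If you put this in a crontab rather than an interactive shell, keep in mind that % is special in crontab entries (cron treats it as the start of the command's standard input), so every % in the date format has to be escaped as \%, just as in the question. A sketch with a made-up nightly schedule:

0 3 * * * /usr/bin/mysqldump -u USER -pPASSWD DBNAME | gzip > ~/backups/db.$(date +\%F.\%H\%M\%S).sql.gz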

Personally, I created a file.sh (permissions 755) in the root directory, a file that does this job, run by the crontab.

Crontab code:

10 2 * * * root /root/backupautomatique.sh

File.sh code:

rm -f /home/mordb-148-251-89-66.sql.gz # to erase the old one

mysqldump mor | gzip > /home/mordb-148-251-89-66.sql.gz # what you have done

scp -P2222 /home/mordb-148-251-89-66.sql.gz root@otherip:/home/mordbexternes/mordb-148-251-89-66.sql.gz

(to send a copy somewhere else in case the sending server crashes, because it is too old, like me ;-))
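For what it's worth, here is a slightly hardened sketch of the same script; the paths and host are the ones from above, and it assumes key-based SSH authentication so that scp can run unattended:

#!/bin/bash
set -euo pipefail # abort if any step, including mysqldump itself, fails

DUMP=/home/mordb-148-251-89-66.sql.gz

rm -f "$DUMP"                  # erase the old one
mysqldump mor | gzip > "$DUMP" # dump and compress
scp -P2222 "$DUMP" root@otherip:/home/mordbexternes/mordb-148-251-89-66.sql.gz # offsite copy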

Besides m79lkm's solution, my 2 cents on this topic:

Don't directly pipe | the result into gzip but first dump it as a .sql file, and then gzip it.
So go for && gzip instead of | gzip if you have the free disk space.

The dump itself will be about twice as fast, but you will need a lot more free disk space. Your tables will be locked for a shorter time, so there is less downtime/slow responding of your application. The end result is exactly the same.

So it is very important to check for free disk space first with:

df -h

And then execute your dump like this:

mysqldump -u user -p [database_name] > dumpfilename.sql && gzip dumpfilename.sql

Another good tip is to use the option --single-transaction. It prevents the tables from being locked but still results in a consistent backup, so you might consider using that option; see the mysqldump docs for details. And since it does not lock your tables for most queries, you can actually pipe the dump directly into gzip (in case you don't have the free disk space):

mysqldump --single-transaction -u user -p [database_name] | gzip > dumpfilename.sql.gz
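Whichever variant you pick, a quick integrity test of the resulting archive is cheap insurance (my own addition, not part of the answer above): gzip -t exits non-zero if the file is truncated or corrupt.

gzip -t dumpfilename.sql.gz && echo "backup looks OK"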