Piping mysqldump into mysql

Sometimes I need to copy a MySQL database (db1) to another database (db2). I found this command concise and effective:

mysqldump --opt db1 | mysql db2 

It worked fine, but now it breaks with the following error:

ERROR 1064 (42000) at line 1586: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE 'some_table_name'': MySQL server' at line 1

The first thing that comes to mind is that the database has become too large (the uncompressed SQL dump is >1 GB, 1090526011 bytes at the moment, to be precise) for piping it this way. But when I do mysqldump > file and then mysql < file, it works fine, no errors. The table named in the error message (some_table_name) is not large or special in any way.

The second idea is that the error message may be truncated, and that it actually says

"... MySQL server has gone away"

A quick look into this issue suggests that the maximum number of open files may have been reached (for MySQL and/or the system). So I tried adding --skip-lock-tables to mysqldump and raising open-files-limit, but no luck, same error.
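For completeness, these limits can be checked like this (a quick sketch using the standard shell command and server variables, nothing specific to my setup):

 # Per-process open-file limit for the current shell
 ulimit -n

 # MySQL's own view of its open-file budget and current usage
 mysql -e "SHOW VARIABLES LIKE 'open_files_limit';"
 mysql -e "SHOW GLOBAL STATUS LIKE 'Open_files';"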

The obvious solution is to dump to a file and then import it (which works fine), but the pipe seems nicer and cleaner (let me know if I'm wrong), plus I'm curious what causes this problem. Have I hit some limit that affects command pipes?

I ran this on the hosting server with MySQL 5.1.60 on Linux, and on my dev machine with MySQL 5.1.58 on Linux. The latter gives a slightly different error:

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `other_table_name` at row: 7197


UPDATE: the problem was solved by dumping and importing separately, without a pipe. Even though I feel this doesn't really answer my question, ssmusoke's suggestions were the most on target, which is why that became the accepted answer.

+4
5 answers

The problem may be that the load on the server gets too high when you run both the dump and the load at the same time. Piping also means you lose some optimizations, such as extended inserts and the ability to disable foreign key checks, which you get when you dump to a file and then import it.

I would recommend using mysqldump to create a backup file, then loading it with mysql. That way the load on your server is lower, and, as you said, it always works. You can even automate it in a bash script so that you don't have to run the mysqldump and load commands by hand.
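A minimal sketch of such a script (the database names and dump path are placeholders, and error handling is kept to a bare minimum):

 #!/bin/bash
 # Copy db1 into db2 via an intermediate dump file instead of a pipe.
 set -e

 SRC=db1
 DST=db2
 DUMP=/tmp/${SRC}.sql

 # Dump the source database to a file
 mysqldump --opt "$SRC" > "$DUMP"

 # Load the dump into the destination database
 mysql "$DST" < "$DUMP"

 # Clean up the intermediate file
 rm -f "$DUMP"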

+2

"MySQL server has gone away" is a sign of a max_allowed_packet error. http://dev.mysql.com/doc/refman/5.0/en/gone-away.html

Modify your command to specify a larger value for max_allowed_packet.

 mysqldump --opt db1 | mysql --max_allowed_packet=32M db2 

The default is 1M. Trial and error may be required to get the correct value. http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
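Note that max_allowed_packet also has a server-side value; a quick way to check it and raise it without a restart (this needs the SUPER privilege, and 33554432 bytes = 32M):

 # Check the current server-side value
 mysql -e "SHOW VARIABLES LIKE 'max_allowed_packet';"

 # Raise it for the running server (not persistent; add it to my.cnf to survive restarts)
 mysql -e "SET GLOBAL max_allowed_packet = 33554432;"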

+3

Are you redirecting the stderr stream as well as stdout from mysqldump? Error messages may be getting interleaved with the dump output. Try:

mysqldump --opt db1 | mysql db2

+2

The problem is that you are redirecting stderr to stdout, so any error messages get interpreted as SQL. Remove the 2>&1 and the real error will show up.
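In other words, the failing invocation presumably looked something like this (the question doesn't show the redirect, so this is an assumption):

 # Broken: mysqldump's error text flows into mysql and gets parsed as SQL
 mysqldump --opt db1 2>&1 | mysql db2

 # Fixed: only the dump itself goes through the pipe
 mysqldump --opt db1 | mysql db2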

+2

It is possible that the backup is hitting MySQL's timeout limits.

These variables can be changed in my.cnf:

 net_read_timeout = 120
 net_write_timeout = 900

If you prefer to change these parameters without restarting MySQL, you can do this with the following SQL statements:

 SET GLOBAL net_read_timeout = 120;
 SET GLOBAL net_write_timeout = 900;

(You may need the SUPER privilege for these.)
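To see what the server is currently using before changing anything:

 mysql -e "SHOW VARIABLES LIKE 'net_%timeout';"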

0
