Sometimes I need to copy a MySQL database (db1) to another database (db2). I found this command concise and effective:
mysqldump --opt db1 | mysql db2
It worked fine, but now it breaks with the following error:
ERROR 1064 (42000) at line 1586: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE 'some_table_name'': MySQL server' at line 1
The first thing that comes to mind is that the database is too large (the uncompressed SQL dump is > 1 GB, 1,090,526,011 bytes at the moment, to be precise) to be piped this way. But when I do mysqldump > file and then mysql < file, it works fine, no errors. The table named in the error message (some_table_name) is neither large nor special.
The second idea is that the error message is truncated, and that it actually ends with
"... MySQL server has gone away"
A quick look into this issue suggests that the maximum number of open files (for MySQL and/or the system) may have been reached. So I tried adding --skip-lock-tables to mysqldump and raising open-files-limit, but no luck, same error.
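For anyone who wants to rule out the open-files theory, this is roughly how I checked the limits (Linux assumed; the 8192 value below is just an example, not a recommendation):

```shell
# Per-process open-file limit for the current shell session (Linux):
ulimit -n

# What the MySQL server itself is allowed — 'open_files_limit' is a real
# mysqld system variable (requires a running server, so shown commented out):
# mysql -e "SHOW VARIABLES LIKE 'open_files_limit'"

# To raise it persistently, in my.cnf under [mysqld] (example value):
# open_files_limit = 8192
```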
The obvious workaround is to dump to a file and then import it (which, as noted, works fine), but the pipe seems nicer and cleaner (let me know if I'm wrong), plus I'm curious what causes this problem. Did I hit some limit that affects the pipeline?
I tried this on the hosting server running MySQL 5.1.60 on Linux and on my dev machine running MySQL 5.1.58 on Linux. The latter gives a slightly different error:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `other_table_name` at row: 7197
UPDATE: the problem was solved by doing a separate dump and import, without the pipe. Although I feel this doesn't really answer my question, ssmusoke's suggestions were the most on point, which led to the accepted answer.
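For reference, the two-step variant that works reliably for me looks like this (db1, db2, and dump.sql are placeholders; run against your own server with appropriate credentials):

```shell
# Dump db1 to an intermediate file, then load that file into db2.
# Unlike the pipe, a failure in mysqldump here cannot corrupt the
# stream that mysql reads, so errors surface cleanly in each step.
mysqldump --opt db1 > dump.sql
mysql db2 < dump.sql
```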