I am trying to move a database table to another server; the complication is that the machine currently hosting the table has very little free space, so I'm looking for a solution that works over the network.
I tried mysqldumping the table on the src machine and piping it straight into mysql on dest; but the table has 48 million rows, and even with autocommit off and innodb_flush_log_at_trx_commit set to 2, it runs dog slow.
mysqldump -uuser -ppass --opt dbname dbtable | mysql -h remote.server -uuser -ppass dbname
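For reference, here is roughly how I applied those settings around the pipe (a sketch; remote.server and the credentials are placeholders, and SET GLOBAL needs the SUPER privilege):

# Relax durability on dest for the duration of the import:
mysql -h remote.server -uuser -ppass -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"
# Stream the dump with autocommit off on the receiving session; the trailing
# COMMIT matters, otherwise the open transaction rolls back on disconnect:
( mysqldump -uuser -ppass --opt dbname dbtable; echo "COMMIT;" ) \
  | mysql -h remote.server -uuser -ppass --init-command="SET autocommit=0;" dbname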
Then I tried mysqldumping the rows a million at a time, scp'ing the files to the dest machine, and doing mysql < file.sql there, but it got progressively slower. By the time I reached the 7th file (7,000,000 rows), the next million took 240 minutes to import.
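In case it matters, each chunk looked approximately like this (a sketch; it assumes an integer primary key named id and that the table already exists on dest):

# Dump, copy, and load one million rows at a time.
for i in $(seq 0 47); do
  lo=$((i * 1000000)); hi=$(((i + 1) * 1000000))
  mysqldump -uuser -ppass --opt --no-create-info --skip-add-drop-table \
    --where="id >= $lo AND id < $hi" dbname dbtable > chunk_$i.sql
  scp chunk_$i.sql user@remote.server:/tmp/
  ssh user@remote.server "mysql -uuser -ppass dbname < /tmp/chunk_$i.sql"
  rm chunk_$i.sql
done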
I poked around a bit, and MySQL suggests that CSV imports via LOAD DATA INFILE are ~20 times faster than inserts. So now I'm stuck.
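For the record, the import side I have in mind is the LOAD DATA counterpart of the export below (the path is a placeholder):

LOAD DATA INFILE '/tmp/tmpfile'
INTO TABLE dbtable
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n';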
I can figure out how to export the CSV using standard SQL syntax:
SELECT *
INTO OUTFILE '/tmp/tmpfile'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM dbtable;
but this obviously doesn't work, as it quickly chews up my already scarce disk space. So I was looking for a switch that would let mysqldump write the CSV to stdout. From what I've read, that isn't possible. The only way I can think of is to create a FIFO, point mysql's OUTFILE at it, and have a script read the FIFO at the same time and ship the data to the dest server. I'm not sure of the exact syntax for syncing with the other server, though, which brings me to my next question.
Is there a way to get mysql to write the CSV to stdout and send it over the network to dest? Then, on dest, read the csv as it arrives and feed it to mysqlimport.
In other words, something like:
mysqldump -uuser -ppass --opt dbname --tab /dev/stdout dbtable | mysqlimport -h remote.server -uuser -ppass dbname
The trouble is that mysqlimport reads named files, not stdin; so this doesn't work.
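(One thing I did find: the plain mysql client can write tab-separated rows to stdout in batch mode, so maybe mysqldump isn't needed at all. An untested sketch, with host names and paths as placeholders:)

# --quick keeps the client from buffering the whole 48m-row result set.
mysql -uuser -ppass -B -N --quick -e "SELECT * FROM dbtable" dbname \
  | ssh user@remote.server 'cat > /data/dbtable.txt'
# LOAD DATA's defaults (tab-separated fields, newline-terminated lines)
# match this output, though NULLs and embedded tabs may need extra care.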
Additional question: can I create a FIFO and have mysqlimport read from the FIFO on the dest machine? The dump is produced by mysql on src, but the import has to happen on dest, so the data still has to cross from src to dest somehow.
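Here is the sort of thing I mean (an untested sketch; mysqlimport derives the table name from the file name, so the FIFO would have to be named after the table):

# On dest: create the FIFO and start the importer reading from it.
mkfifo /tmp/dbtable.txt
mysqlimport -uuser -ppass --local dbname /tmp/dbtable.txt &
# On src: push tab-separated rows into the FIFO across the network.
mysql -uuser -ppass -B -N --quick -e "SELECT * FROM dbtable" dbname \
  | ssh user@remote.server 'cat > /tmp/dbtable.txt'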
So, bottom line: is there a way to dump mysql CSV to stdout and stream it over the network to dest (or, failing that, straight into a file on dest)?
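The closest I can come up with is this (untested; I don't know whether LOAD DATA LOCAL accepts /dev/stdin on every platform, and local_infile has to be enabled on the dest server):

# Both clients run on src; the second connects to dest and reads the pipe
# as its "local" file, so nothing ever lands on disk.
mysql -uuser -ppass -B -N --quick -e "SELECT * FROM dbtable" dbname \
  | mysql -h remote.server -uuser -ppass --local-infile=1 \
      -e "LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE dbtable" dbname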
Thanks!
Cheers,
Clarification: the table is innodb; the src machine has only about 10 GB free.
UPDATE: I sshfs-mounted a directory from dest onto src and had mysql dump the csv into it - that works well. Then I run mysqlimport on it on the dest side.
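Concretely, something like this (paths are placeholders; the OUTFILE is written by the mysqld process itself, hence allow_other on the mount):

# Mount a directory from dest onto src, then dump straight into it.
sshfs -o allow_other user@remote.server:/data/incoming /mnt/dest
mysql -uuser -ppass dbname -e "
  SELECT * INTO OUTFILE '/mnt/dest/dbtable.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '\"' ESCAPED BY '\\\\'
  LINES TERMINATED BY '\n' FROM dbtable"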
UPDATE: getting the data off src is no longer the problem - it's the INSERTs on dest that are slow now. The 9th file took 12 hours to import. Something is still wrong. Any ideas?
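For reference, these are the knobs I've seen suggested for bulk InnoDB loads, which I'm going to try around each LOAD on dest (the path is a placeholder):

-- Turn off the per-row checks that make bulk loads crawl, then restore them.
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
LOAD DATA INFILE '/data/incoming/dbtable.csv' INTO TABLE dbtable
  FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '\\'
  LINES TERMINATED BY '\n';
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;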
UPDATE: for anyone following along, there is a related discussion of the same slowdown here: http://forums.mysql.com/read.php?22,154964