Reliable backups for huge mysql databases?

I have a 200 GB / 400M-row MySQL/InnoDB database - far beyond what's reasonable, as I found out.

One of the nastier problems is restoring backups. mysqldump generates huge SQL files, and it takes about a week to import them back into a fresh database (I've tried to speed it up - larger/smaller transactions, disabling keys during import, network compression, etc. - with little success so far; importing into MyISAM seems about 2x faster, but then there are no transactions).
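For reference, the kind of import tuning I've been trying looks roughly like this (dump.sql, backup_user and newdb are just placeholders for my actual files and connection details):

# wrap the import in relaxed session settings, then restore them at the end;
# autocommit / unique_checks / foreign_key_checks are standard MySQL settings
( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
  cat dump.sql
  echo "SET foreign_key_checks=1; SET unique_checks=1; COMMIT;"
) | mysql --compress -u backup_user newdb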

To make matters worse - and this is what I hope to get some help with - a network connection that transfers >200 GB over a period of about a week has a non-trivial chance of breaking, and the SQL import process cannot be resumed in any trivial way.

What would be the best way to handle this? Right now, if I notice a broken connection, I manually try to figure out where it stopped by checking the highest primary key of the last imported table, and then I have a perl script that basically does this:

perl -nle 'BEGIN{open F, "prelude.txt"; @a=<F>; print @a; close F;}; print if $x; $x++ if /INSERT.*last-table-name.*highest-primary-key/'
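For completeness, this is roughly how the filter gets used to resume a broken import - a sketch, where resume.pl is just the one-liner above saved to a file and dump.sql / backup_user / newdb are placeholders:

# skip everything up to the last successfully imported row, re-emit the dump
# header from prelude.txt, and pipe the rest straight back into mysql
perl -nl resume.pl dump.sql | mysql --compress -u backup_user newdb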

This clearly is not the way to do it, so what would be the best way?

+3
3 answers

Where does your MySQL data live? One option would be to put it on a NAS device attached over iSCSI; then you could take snapshots on the NAS side and restore from a snapshot instead of replaying the whole dump.
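A rough sketch of the attach step with open-iscsi - the portal address and IQN below are made up, and the snapshots themselves would be taken with whatever tools your NAS provides:

# discover and log in to the iSCSI target exported by the NAS (open-iscsi)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2009-01.com.example:mysql-data -p 192.168.1.50 --login
# mount the new block device (the device name will vary) and point MySQL's datadir at it
mount /dev/sdb1 /var/lib/mysql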

+1

mysqldump really isn't suited to a database this size - at 200G it's hopeless in both directions.

The better option is to copy the raw database files at the filesystem level, e.g. with rsync - then there is no SQL to replay at all.

The catch is getting a consistent copy: you either have to shut the db down while you copy, or...

...use a filesystem snapshot (LVM, for example) so that the innodb files are consistent while you copy them.

maatkit's mk-parallel-dump and restore are faster than mysqldump, but I'm not 100% sure about their reliability.


Update: in short, stop the db (or snapshot it) + rsync the data files across; better still, do a first rsync while the server is running, then stop it (or take the snapshot) and run a second rsync, which will be quick because it only has to transfer what changed since the first pass.
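A minimal sketch of that last variant, assuming the data directory sits on an LVM volume (all paths, hosts and sizes below are placeholders):

# 1) first pass while the server is still running - moves the bulk of the data
rsync -avz /var/lib/mysql/ backuphost:/var/lib/mysql/

# 2) brief downtime: stop mysqld, snapshot the volume, start mysqld again
service mysql stop
lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql
service mysql start

# 3) second pass from the consistent snapshot - only transfers what changed
mkdir -p /mnt/mysql-snap
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -avz /mnt/mysql-snap/ backuphost:/var/lib/mysql/

# 4) clean up the snapshot
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap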

+1

Do you need everything in the database?

Could you move some of the data to an archive database, and add something to your application that lets people view records in the archive?

Obviously it depends a lot on your application and setup, but could that be a solution? Presumably your database is only going to get bigger.
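A rough sketch of what that archiving could look like, run periodically - orders, created_at, archive_db and the cutoff date are made-up example names:

# copy rows older than the cutoff into the archive database (same schema assumed),
# then delete them from the live one
mysql live_db <<'SQL'
INSERT INTO archive_db.orders
  SELECT * FROM live_db.orders WHERE created_at < '2009-01-01';
DELETE FROM live_db.orders WHERE created_at < '2009-01-01';
SQL

Doing the delete in bounded batches (by primary-key range, say) keeps the transactions small enough not to run into the same problems as the giant import.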

0

Source: https://habr.com/ru/post/1730555/

