How can I safely back up a huge database?

I need to back up a Drupal database, and it is huge: it has over 1,500 tables (don't blame me, it's a Drupal thing) and is about 10 GB in size.

I could not do this with phpMyAdmin; I just got an error message when it started to create the .sql file.

I want to make sure that I don't break anything or take down the server (or anything else) when I try to back it up.

I was going to run mysqldump on the server and then copy the file locally, but realized that this could cause unexpected problems. So my question is: is it safe to use mysqldump on so many tables at once, and even if it is safe, could such a huge file cause problems later when restoring the database?

Thanks for the input, guys.

+6
2 answers

"Is it safe to use mysqldump on so many tables at once?"

I perform daily backups with mysqldump on servers literally ten times that size: 15,000+ tables and more than 100 GB.
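For scale, here is a minimal sketch of such an invocation (the database name, user, and options are illustrative, not taken from the question):

    # Dump the whole database in one pass. --single-transaction takes a
    # consistent snapshot of InnoDB tables without locking them;
    # --routines and --triggers include stored code in the dump.
    mysqldump --single-transaction --routines --triggers \
        -u backup_user -p drupal > drupal_backup.sql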

If you have never examined the contents of a file created by mysqldump, you should, because seeing its output is the key to understanding why it is an inherently safe backup utility:

Backups are human-readable and consist entirely of the SQL statements necessary to recreate the database exactly as it was at the time of the backup.

In this form, their contents can be manipulated with ubiquitous tools such as sed, grep, and perl, which can be used, for example, to cut just one table out of the file for restoring.
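Here is a hedged sketch of exactly that, assuming the default mysqldump output in which each table's section starts with a "-- Table structure for table" comment (the table and file names are placeholders):

    # Print everything from the `node` table's header comment through the
    # header comment of the table that follows it. The last line printed
    # belongs to the next table, so delete it before replaying the result.
    sed -n '/^-- Table structure for table `node`/,/^-- Table structure for table `/p' \
        drupal_backup.sql > node_only.sql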

If a restore fails, the error indicates the line number in the file where it occurred. This is usually caused by a bug in the version of the server where the backup was taken (for example, MySQL Server 5.1 would in some situations allow you to create views even though the server itself would not accept the output of its own SHOW CREATE VIEW statement; the same server did not consider that statement a valid view definition, but this was not a defect in mysqldump or in the backup file as such).

Restoring from a backup created by mysqldump is not lightning fast, because the server has to execute all of those SQL statements, but from a safety point of view I would argue there is no safer alternative: it is the canonical backup tool, and any bugs are likely to be found and fixed quickly thanks to its large user base, if nothing else.

Do not use the --force option except in an emergency. It makes the backup skip any errors the server raises during the dump, leaving the backup silently incomplete. Instead, find and fix the underlying errors. Typical backup-time errors involve views that are no longer valid because they reference tables or columns that have since been renamed or dropped, or because the user who originally defined the view was removed from the server. Fix them by redefining the view correctly, as in the example below.
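As an illustration (the view and column names here are hypothetical, not from the question), a broken view can be redefined in place with CREATE OR REPLACE VIEW so that the next dump runs cleanly:

    # The view referenced a column that has since been renamed; redefining
    # it against the current schema makes mysqldump stop erroring on it.
    mysql drupal -e "CREATE OR REPLACE VIEW active_users AS
                     SELECT uid, name FROM users WHERE status = 1"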

Above all, test your backups by restoring them to another server. If you have not done that, you do not really have backups.
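A minimal sketch of such a test restore, with the host and database names as placeholders:

    # Replay the dump into a scratch database on a separate test server.
    mysql -h test-server -u root -p -e "CREATE DATABASE drupal_restore_test"
    mysql -h test-server -u root -p drupal_restore_test < drupal_backup.sql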

The output file can be compressed, usually with gzip/pigz, bzip2/pbzip2, xz/pixz, or zpaq. They are listed in approximate order of space saved (gzip saves the least, zpaq saves the most) and speed (gzip is the fastest, zpaq is the slowest). pigz, pbzip2, pixz, and zpaq can use multiple cores if you have them; the others use only one core at a time.
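Compressing on the fly avoids ever writing the uncompressed file to disk; here is a sketch with pigz (any of the compressors above can be substituted, and the database names are placeholders):

    # Stream the dump straight through a multi-core compressor...
    mysqldump --single-transaction drupal | pigz > drupal_backup.sql.gz
    # ...and decompress the same way when restoring.
    pigz -dc drupal_backup.sql.gz | mysql drupal_restored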

+10

Use mysqlhotcopy ; it works well with large databases, with the caveats below (a usage sketch follows the list):

  • It works only with MyISAM and ARCHIVE tables.
  • It runs only on the server where the database files are stored.
  • It is deprecated as of MySQL 5.6.20 and removed in MySQL 5.7.
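For completeness, a sketch of its usage on a server where the utility is still available (credentials and paths are placeholders); since it copies the raw table files, it must run on the database host itself:

    # Copy the database's raw table files into /var/backups.
    mysqlhotcopy --user=root --password=secret drupal /var/backups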
+1
