mysqldump error: Couldn't execute SHOW TRIGGERS LIKE, with errors such as (Errcode: 13) (6) and (1036)

Does anyone know why mysqldump sometimes performs only a partial backup of the database when started with the following statement:

"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" databaseSchema -u root --password=rootPassword > c:\backups\daily\mySchema.dump 

Sometimes a full backup is performed; at other times the backup stops after only part of the database has been dumped. The proportion that gets dumped is variable.

The database has several thousand tables totalling about 11 GB. Most of these tables are quite small, though, with at most about 1,500 rows, and many have only 150 to 200 rows. The column count of these tables can run into the hundreds, however, because of the frequency data being stored.

From what I have been told, though, the number of tables in a MySQL schema is not a problem. There are also no performance issues during normal operation.

And the alternative of using a single table is not really viable, because all of these tables have different column-name signatures.

I should add that the database is in use while the backup is running.

OK, after starting the backup with this command:

 "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" mySchema -u root --password=xxxxxxx -v --debug-check --log-error=c:\backups\daily\mySchema_error.log > c:\backups\daily\mySchema.dump 

I get this:

 mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE '\_dm\_10730\_856956\_30072013\_1375194514706\_keyword\_frequencies'': Error on delete of 'C:\Windows\TEMP\#sql67c_10_8c5.MYI' (Errcode: 13) (6) 

I think this is a permissions issue.

I also doubt it is a file-size issue, since no single table in my schema is anywhere near the 2 GB range.

I am using MySQL Server 5.5 on a 64-bit Windows 7 server with 8 GB of memory.

Any ideas?

I know that raising the number of files MySQL can open, the open_files_limit parameter, can fix this kind of problem.
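
For what it's worth, a minimal sketch of checking and raising that limit (the value 8192 is only an example; it should comfortably exceed the number of tables, and the server must be restarted after editing my.ini):

 -- From the MySQL client, check the current limit:
 SHOW VARIABLES LIKE 'open_files_limit';

 # In my.ini, under the [mysqld] section:
 open_files_limit=8192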

Another possibility is interference from antivirus products, as described here:

How To Fix Intermittent MySQL Errcode 13 Errors On Windows
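
Since the failing delete is on a temp file under C:\Windows\TEMP, one workaround in the spirit of that article (C:/mysql-tmp is a hypothetical directory of my choosing) would be to point MySQL's tmpdir at a dedicated directory that the antivirus is told to skip:

 # my.ini, under [mysqld]; the directory must exist and be
 # excluded from on-access antivirus scanning:
 tmpdir=C:/mysql-tmp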

1 answer

I have run into several possible causes of this problem, and here is my workup:

First: enable error logging, debug checks, and/or verbose output; otherwise we won't know which error is actually creating the problem:

  "c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log > c:\backup\visualRSS.dump 

Provided debugging is enabled in your build, you should now be able both to write errors to a file and to watch the output on the console. The problem is not always obvious from this, but it is the first step.

Have you looked at your error or general logs? They don't often hold useful information for this problem, but sometimes they do, and every little bit helps in tracking these problems down.
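
If the general query log is not already on, a quick sketch for enabling it temporarily (it is off by default and grows fast, so turn it back off afterwards; SUPER privilege required):

 -- Find where the error log lives:
 SHOW VARIABLES LIKE 'log_error';
 -- Enable the general query log, reproduce the failing dump, then disable it:
 SET GLOBAL general_log = 'ON';
 SET GLOBAL general_log = 'OFF';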

Also watch SHOW PROCESSLIST while the dump is running, and see whether you spot states such as WAITING FOR..LOCK / METADATA LOCK, which indicate that the operation cannot obtain a lock because of another operation.
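
For example, from a second client session while the dump is running (the State column is where lock waits such as "Waiting for table metadata lock" show up):

 SHOW FULL PROCESSLIST;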

Depending on the information gathered above, and assuming I found nothing and had to shoot blind, here is what I would do about some common cases I have run into:

  • Max allowed packet errors: If you get a max_allowed_packet error, you can add --max_allowed_packet=160M to your parameters to see whether that is big enough:

"c: \ path \ to \ mysqldump" -b yourdb -u root -pRootPasswd -v -debug-check --log-error = c: \ backup \ mysqldump_error.log - max_allowed_packet = 160M > c: \ backup \ visualRSS. dump

  • Try reducing the runtime and file size by using the --compact flag. mysqldump adds everything needed to create the schema and insert the data, along with other information; you can significantly cut execution time and file size by requiring the dump to contain only the INSERTs for your schema, skipping the schema-creation statements and other non-critical information within each insert. This can mitigate many problems where appropriate, but you will want to keep a separate dump made with --no-data to export your schema each time, so that all the empty tables etc. can be created:

 /* Dump the data, excluding add-drop-table statements, comments, locks, and key checks: */
 "c:\path\to\mysqldump" -B yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --compact > c:\backup\visualRSS.dump

 /* Create a schema-only dump, without data: */
 "c:\path\to\mysqldump" -B yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --no-data > c:\backup\visualRSS.dump

  • Locking issues: By default, mysqldump uses LOCK TABLES (unless you specify --single-transaction) to read a table while dumping it, and it wants to acquire a read lock on the table; DDL operations and your global lock type can get in the way of this. Without seeing the blocked query, you will usually see a small backup file size, as you described, and the mysqldump operation will usually sit there until you kill it or the server times the connection out. You can use the --single-transaction flag to run the dump in a REPEATABLE READ transaction, essentially taking a snapshot of the tables without blocking operations or being blocked, with the caveat that concurrent ALTER/TRUNCATE TABLE statements can break a dump running in this mode; for example:
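
A sketch reusing the paths from above (note that --single-transaction only gives a consistent snapshot for transactional engines such as InnoDB):

  "c:\path\to\mysqldump" -B yourdb -u root -pRootPasswd --single-transaction -v --debug-check --log-error=c:\backup\mysqldump_error.log > c:\backup\visualRSS.dump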

  • FileSize problems: If I misread, and this backup has NOT completed successfully before, and the 2 GB file size limit is the potential problem, you can try piping the mysqldump output directly into something like 7zip on the fly:

    mysqldump | 7z.exe a -siname_in_outfile output_path_and_filename

If you are still having problems, or there is an insurmountable issue that prevents you from using mysqldump: Percona XtraBackup is what I prefer, or there is Enterprise Backup for MySQL from Oracle. XtraBackup is open source, much more versatile than mysqldump, has a very solid group of developers working on it, and has many great features that mysqldump does not have, such as streaming/hot backups. Unfortunately, the Windows builds are out of date, unless you can compile it from source or run a local Linux virtual machine to handle it for you.
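
As a rough sketch only (the backup directory is an assumption, and on the XtraBackup 2.x series the wrapper script is called innobackupex; check the Percona documentation for your version):

 innobackupex --user=root --password=xxxxxxx /backups/daily/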

Of note: I see that you are not backing up your information_schema database; this is mentioned only in case it matters to your backup scheme.
