Can you use a MySQL query to fully create a copy of a database?

I have a LIVE version of a MySQL database with 5 tables and a TEST version.

I constantly use phpMyAdmin to copy each table from the LIVE version over to the TEST version.

Does anyone have a MySQL query statement to create a full copy of a database? The query should take into account the structure, data, auto-increment values, and anything else related to the tables that needs to be copied.

Thanks.

+6
7 answers

Well, after a lot of research, searching the Internet and reading all the comments here, I put together the following script, which I now run from the browser address bar. I tested it, and it does exactly what I need. Thanks for everyone's help.

<?php
// Note: uses the legacy mysql_* API the script was originally written
// against (removed in PHP 7; use mysqli or PDO on modern installs).
function duplicateTables($sourceDB = NULL, $targetDB = NULL) {
    $link = mysql_connect('{server}', '{username}', '{password}') or die(mysql_error());

    $result = mysql_query('SHOW TABLES FROM ' . $sourceDB) or die(mysql_error());
    while ($row = mysql_fetch_row($result)) {
        // Recreate each table in the target DB with the same structure,
        // then copy the rows across.
        mysql_query('DROP TABLE IF EXISTS `' . $targetDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('CREATE TABLE `' . $targetDB . '`.`' . $row[0] . '` LIKE `' . $sourceDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('INSERT INTO `' . $targetDB . '`.`' . $row[0] . '` SELECT * FROM `' . $sourceDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('OPTIMIZE TABLE `' . $targetDB . '`.`' . $row[0] . '`') or die(mysql_error());
    }
    mysql_free_result($result);
    mysql_close($link);
} // end duplicateTables()

duplicateTables('liveDB', 'testDB');
?>
+4

Depending on your access to the server, I suggest using the mysql and mysqldump command-line tools directly. That is what phpMyAdmin does under the hood anyway.
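As a minimal sketch of that approach (the user, password, and database names below are placeholders, not taken from the question), the script builds and echoes the clone pipeline rather than running it, so it is safe to inspect first; pipe its output into sh to execute it for real:

```shell
#!/bin/sh
# Sketch: clone liveDB into testDB with the command-line tools.
# All credentials and names are placeholders -- substitute your own.
USER=dbuser
PASS=dbpass
SRC_DB=liveDB
DST_DB=testDB

# mysqldump emits structure + data (plus stored routines/triggers with
# the flags below); mysql replays it into the target database.
DUMP_CMD="mysqldump -u$USER -p$PASS --single-transaction --routines --triggers $SRC_DB"
LOAD_CMD="mysql -u$USER -p$PASS $DST_DB"

# Echo the pipeline instead of executing it, so no server is needed
# to try this out; `sh backup-sketch.sh | sh` would run it for real.
echo "$DUMP_CMD | $LOAD_CMD"
```

Piping the dump straight into mysql avoids an intermediate file; redirect to a .sql file instead if you also want a restorable backup.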

+2

There is a PHP class for this; I have not tested it yet.

From the description:

 This class can be used to back up a MySQL database. It queries a database and generates a list of SQL statements that can be used later to restore the database **tables' structure** and their contents. 

I think this is what you need.

+1

Hi, you can use a simple bash script to back up the entire database.

 ######### SNIP BEGIN ##########
 #!/bin/bash
 # Usage:
 #   sh backup.sh DBNAME HOSTNAME USERNAME PASS | sh
 # where DBNAME is the database name, e.g. backing up mydb:
 #   sh backup.sh mydb hostname username pass | sh
 echo "#sh backup.sh mydb hostname username pass| sh"

 DB=$1
 host=$2
 user=$3
 pass=$4
 NOW=$(date +"%m-%d-%Y")
 FILE="$DB.backup.$NOW.gz"

 # Build the dump command; the script only echoes it, which is why the
 # output is piped into `sh` above to actually run the dump.
 cmd="mysqldump -h $host -u$user -p$pass $DB | gzip -9 > $FILE"
 echo $cmd
 ############ END SNIP ###########

EDIT

If you want to clone the database from the backup, just edit the dump to change the database name, and then:

  gunzip < yourdump.backup.gz | mysql -uusername -ppass 
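One way to do the renaming non-interactively is a text substitution over the dump. The snippet below demonstrates it on a tiny inline "dump" (the database names are placeholders); note that a blind replace can also hit row data that happens to contain the same string, so inspect your dump first:

```shell
#!/bin/sh
# Sketch: rename liveDB to testDB in dump text. Names are placeholders.
dump='CREATE DATABASE `liveDB`;
USE `liveDB`;'
renamed=$(printf '%s\n' "$dump" | sed 's/`liveDB`/`testDB`/g')
printf '%s\n' "$renamed"
# For a real gzipped dump, the same idea in a pipeline:
#   gunzip < yourdump.backup.gz | sed 's/`liveDB`/`testDB`/g' | mysql -uusername -ppass
```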

Regards, Arman.

0

Short of a fully formed script, you can try the CREATE TABLE ... LIKE syntax, repeated over a list of tables which you can get from SHOW TABLES .

The only problem is that foreign keys are not recreated (CREATE TABLE ... LIKE copies columns and indexes, but not foreign key definitions), so you will have to list and recreate those yourself. Then a few INSERT ... SELECT queries bring the data over.

If your schema never changes and only the data does, create a script once that replicates the table structure, and then just run the INSERT ... SELECT statements inside a transaction.
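A minimal sketch of that idea (database and table names are made up for illustration): emit the CREATE TABLE ... LIKE statements first, then wrap only the INSERT ... SELECT statements in a transaction, since DDL statements auto-commit in MySQL.

```shell
#!/bin/sh
# Sketch: generate the SQL for replicating structure and data.
# Names are hypothetical; pipe the result into mysql when ready.
SRC=liveDB
DST=testDB
TABLES="users orders products"   # in practice: from SHOW TABLES

SQL=""
# Structure first: CREATE TABLE ... LIKE copies columns and indexes.
# DDL auto-commits in MySQL, so it stays outside the transaction.
for T in $TABLES; do
  SQL="${SQL}CREATE TABLE \`$DST\`.\`$T\` LIKE \`$SRC\`.\`$T\`;
"
done
# Then the data, inside one transaction.
SQL="${SQL}START TRANSACTION;
"
for T in $TABLES; do
  SQL="${SQL}INSERT INTO \`$DST\`.\`$T\` SELECT * FROM \`$SRC\`.\`$T\`;
"
done
SQL="${SQL}COMMIT;"

echo "$SQL"
# To execute: echo "$SQL" | mysql -uUSER -pPASS
```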

Otherwise, mysqldump , as others have said, is easy enough to drive from a script. I have a daily cron job that dumps all the databases on my data-center servers, connects over FTPS to my premises and uploads all the dumps. It can be done quite efficiently. Obviously you have to make sure the tables are locked (or dumped consistently), but again, that is not too hard.


In response to the request for code:

The code is proprietary, but I will show you the critical section you need. It comes from the middle of a foreach , hence the continue statements and the $c.. prefixes (which I use to mark variables belonging to the current loop iteration). The echo calls can be whatever you want; this is a cron script, so reporting the current state was appropriate. The flush() lines are useful when running the script in a browser, since output is sent up to that point, so the browser fills in results as they happen rather than all at the end. The ftp_fput() is obviously specific to my situation of uploading a dump somewhere, reading directly from a pipe; you could instead open another process and feed the output into a mysql process for database replication. Make whatever amendments are appropriate.

 // Fragment from inside a foreach over databases; $c-prefixed variables
 // belong to the current iteration.
 $cDumpCmd = $mysqlDumpPath . ' -h' . $dbServer
     . ' -u' . escapeshellarg($cDBUser)
     . ' -p' . escapeshellarg($cDBPassword)
     . ' ' . $cDatabase
     . (!empty($dumpCommandOptions) ? ' ' . $dumpCommandOptions : '');
 $cPipeDesc = array(
     0 => array('pipe', 'r'),  // stdin
     1 => array('pipe', 'w'),  // stdout: the dump itself
     2 => array('pipe', 'w'),  // stderr
 );
 $cPipes = array();
 $cStartTime = microtime(true);
 $cDumpProc = proc_open($cDumpCmd, $cPipeDesc, $cPipes, '/tmp', array());
 if (!is_resource($cDumpProc)) {
     echo "failed.\n";
     continue;
 } else {
     echo "success.\n";
 }

 echo "DB: " . $cDatabase . " - Uploading Database...";
 flush();
 // Stream the dump straight from the pipe to the FTP server.
 $cUploadResult = ftp_fput($ftpConn, $dbFileName, $cPipes[1], FTP_BINARY);
 $cStopTime = microtime(true);
 if ($cUploadResult) {
     echo "success (" . round($cStopTime - $cStartTime, 3) . " seconds).\n";
     $databaseCount++;
 } else {
     echo "failed.\n";
     continue;
 }

 $cErrorOutput = stream_get_contents($cPipes[2]);
 foreach ($cPipes as $cFHandle) {
     fclose($cFHandle);
 }
 $cDumpStatus = proc_close($cDumpProc);
 if ($cDumpStatus != 0) {
     echo "DB: " . $cDatabase . " - Dump process caused an error:\n";
     echo $cErrorOutput . "\n";
     continue;
 }
 flush();
0

If you are using Linux or macOS, here is a one-liner to clone a database.

 mysqldump -uUSER -pPASSWORD -hsample.host --single-transaction --quick test | mysql -uUSER -pPASSWORD -hqa.sample.host --database=test 

The "advantage" here is that you get a consistent copy: --single-transaction makes the dump read a single consistent snapshot for InnoDB tables, while non-transactional tables get locked instead. It also means your production database can be tied up for the duration of the copy, which is usually not great.

Without locks or transactions, if something writes to the database while the copy is being made, you can end up with orphaned or inconsistent data in your copy.

To get a good copy without affecting performance, create a slave on another server. The slave is updated in real time, and you can run the same command against it without touching production.

0

Source: https://habr.com/ru/post/888067/

