An โ€œofficialโ€ docker backup strategy - how about consistency?

The proposed strategy for managing and backing up data in Docker looks something like this:

docker run --name mysqldata -v /var/lib/mysql busybox true
docker run --name mysql --volumes-from mysqldata mysql
docker run --volumes-from mysqldata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/mysql

However, when I back up containers this way, I won't get a consistent backup, right? I know about tools like mysqldump, but what if I need to back up, say, a folder where files are constantly being added and deleted?
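For the database itself, a consistent dump can be taken from the running container. This is a minimal sketch, not part of the original question; it assumes the container is named mysql as above, was started with MYSQL_ROOT_PASSWORD set, and uses InnoDB so that --single-transaction gives a consistent snapshot:

# Hypothetical example: dump the databases over the MySQL protocol instead of
# copying raw files from /var/lib/mysql; assumes MYSQL_ROOT_PASSWORD is set
# in the mysql container's environment.
docker exec mysql sh -c 'exec mysqldump --all-databases --single-transaction -uroot -p"$MYSQL_ROOT_PASSWORD"' > backup.sql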

1 answer

The main problem you are facing - backing up files that are constantly changing - is not specific to Docker. Use a tool like rsnapshot or dirvish to take backups inside the volume, and then use the approach described above to move those backups somewhere more durable, such as Amazon S3 or Glacier, depending on your reliability requirements.
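A minimal sketch of that idea (not from the original answer): run rsnapshot inside a container that mounts the data volume, then sync the resulting snapshots to S3. The image name, the rsnapshot.conf it contains, the bucket, and the schedule are all assumptions.

# "backup-tools" is a hypothetical image with rsnapshot installed and an
# /etc/rsnapshot.conf whose snapshot_root is /backup and which has a backup
# line covering /var/lib/mysql.
docker run --rm --volumes-from mysqldata -v $(pwd)/snapshots:/backup \
    backup-tools rsnapshot hourly

# Push the rotated snapshots somewhere more durable (requires AWS credentials).
aws s3 sync $(pwd)/snapshots s3://my-backup-bucket/mysql/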

If you mount volumes from another container (or from the VM host) using the -v switch, file changes are reflected in all containers (and on the host) more or less in real time. (There is some delay due to AUFS, which Docker layers over the host filesystem, but not much.) If the backup container runs continuously, it can keep taking backups, and the files will always reflect the latest state seen by the mysql container.
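A continuously running backup container could look roughly like this - a sketch, not from the answer; the interval, archive names, and paths are placeholders:

# Re-archive the mounted data volume once an hour; each archive is timestamped
# by the shell running inside the container.
docker run -d --name mysqlbackup \
    --volumes-from mysqldata \
    -v $(pwd)/backups:/backup \
    ubuntu sh -c 'while true; do
        tar cf /backup/mysql-$(date +%Y%m%d%H%M).tar /var/lib/mysql
        sleep 3600
    done'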

Edit: for clarity.


Source: https://habr.com/ru/post/1207457/

