Moving docker data volume containers between CoreOS hosts

For some scenarios, a clustered file system is overkill. If I understand correctly, this is a use case for the data volume container pattern. But even CoreOS needs updates from time to time, and if I want to minimize application downtime, I would have to move the data volume container together with the application container to another host while the old host is being updated.

Are there any best practices? The solution most often mentioned is to "back up" the container with docker export on the old host and docker import on the new host, but that involves scp-ing tar files between hosts. Can this be orchestrated with fleet?
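As a sketch of the manual workflow described above (container name, volume path, and hostname are placeholders, not from the original post): note that `docker export` captures a container's filesystem but not the data stored in its volumes, so the commonly documented approach is to tar the volume via `--volumes-from` instead, then copy the archive over.

```shell
# On the old host: archive the volume contents of the data container.
# `app-data` and `/data` are hypothetical names for this example.
docker run --rm --volumes-from app-data -v "$(pwd)":/backup busybox \
    tar cvf /backup/app-data.tar /data

# Copy the archive to the new host (hostname is a placeholder).
scp app-data.tar core@new-host:~/

# On the new host: recreate the data volume container and restore into it.
docker run -v /data --name app-data busybox true
docker run --rm --volumes-from app-data -v "$HOME":/backup busybox \
    tar xvf /backup/app-data.tar -C /
```

The same `docker run` invocations could be wrapped in a fleet unit's `ExecStart`, but scheduling the archive transfer between hosts would still need custom glue.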

+5
1 answer

@brejoc, I would not call this a solution, but it may help:

Alternative 1: Use another OS that has clustering, or at least does not get in its way. I am currently experimenting with CentOS.

Alternative 2: I have created a couple of tools that help in some use cases. The first tool pulls data from S3 (usually artifacts) and is one-way. The second tool, which I call a "backup volume container", has great potential but needs some feedback. It provides two-way backup/restore of data, from/to many persistent data stores, including S3 (and also Dropbox, which is very cool). As currently implemented, when you start it for the first time it restores into the container. From that point on, it watches the relevant folder in the container for changes, and after a change (and a quiet period) it backs the data up to the persistent store.

Backup volume container: https://registry.hub.docker.com/u/yaronr/backup-volume-container/
Sync files with S3: https://registry.hub.docker.com/u/yaronr/awscli/ (`docker run yaronr/awscli aws s3` etc. etc. - read the AWS docs)
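For the second image, a hypothetical one-way sync of a local artifacts folder to S3 might look like the following; the bucket name, host path, and credentials are placeholders I am assuming, not details from the answer:

```shell
# Sync a local artifacts directory to S3 using the yaronr/awscli image.
# AWS credentials are passed as environment variables; values are
# placeholders and must be filled in.
docker run --rm \
    -e AWS_ACCESS_KEY_ID=... \
    -e AWS_SECRET_ACCESS_KEY=... \
    -v /opt/artifacts:/artifacts \
    yaronr/awscli aws s3 sync /artifacts s3://my-bucket/artifacts
```

`aws s3 sync` only uploads files that are new or changed, which keeps repeated runs cheap.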

+3

Source: https://habr.com/ru/post/1202193/

