Automatically sync two Amazon S3 buckets besides s3cmd?

Is there another automatic way to sync two Amazon S3 buckets besides using s3cmd? Does Amazon itself offer this as an option? The environment is Linux, and every day I want to synchronize new and deleted files to a second bucket. I hate the idea of keeping all the eggs in one basket.

+6
4 answers

You can use the standard AWS CLI to do the synchronization. You just need to run something like:

aws s3 sync s3://bucket1/folder1 s3://bucket2/folder2 

http://aws.amazon.com/cli/
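
Since the question also asks about propagating deletions every day, here is a minimal sketch of a daily cron job (bucket names are placeholders; the CLI's --delete flag removes destination files that no longer exist in the source, so test it carefully first):

# Mirror bucket1 to bucket2 every day at 02:00, propagating deletions.
# Install with `crontab -e`; assumes the AWS CLI is on PATH and
# credentials are configured (e.g. via `aws configure`).
0 2 * * * aws s3 sync s3://bucket1 s3://bucket2 --delete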

+12

I have been looking for something similar, and there are a few options:

  • Commercial applications such as s3RSync. In addition, CloudBerry for S3 provides PowerShell extensions for Windows that you could use for scripting, but I know you are on *nix.
  • AWS API + (your favorite language) + cron. (Hear me out.) Even someone with no prior experience with the AWS libraries could, in a short time, build something that copies and compares files (using the ETag feature of S3 keys): simply provide the source/target buckets and credentials, iterate over the keys, and issue the native Copy command in AWS. I used Java. If you use Python and cron, you could make quick work of a useful tool.

I'm still looking for something pre-built, open-source, or free, but option 2 really isn't a very difficult task; a rough sketch follows.
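
For illustration, here is a rough shell sketch of option 2 using the AWS CLI rather than an SDK (bucket names are placeholders; list-objects-v2 returns at most 1,000 keys per call without pagination, and ETags only match MD5 sums for non-multipart uploads, so treat this as a starting point rather than a finished tool):

#!/bin/bash
# Hypothetical one-way sync: copy any key whose ETag differs from (or is
# missing in) the destination. SRC and DST are placeholder bucket names.
SRC=bucket1
DST=bucket2

# Emit "key<TAB>etag" for every object in the source bucket.
aws s3api list-objects-v2 --bucket "$SRC" \
    --query 'Contents[].[Key,ETag]' --output text |
while IFS=$'\t' read -r key src_etag; do
    # ETag of the same key in the destination; empty if it does not exist.
    dst_etag=$(aws s3api head-object --bucket "$DST" --key "$key" \
        --query ETag --output text 2>/dev/null || true)
    if [ "$src_etag" != "$dst_etag" ]; then
        # Server-side copy: the object data never leaves S3.
        aws s3api copy-object --copy-source "$SRC/$key" \
            --bucket "$DST" --key "$key" > /dev/null
        echo "copied: $key"
    fi
done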

EDIT: Coming back to this post, I realized that Attunity CloudBeam is now also a commercial solution for many people.

+4

You can now replicate objects between buckets in two different regions through the AWS console.

An official AWS blog post explains this feature.
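
If you'd rather script it, here is a minimal sketch with the AWS CLI (bucket names and the role ARN are placeholders; the role must be one that S3 is allowed to assume, and replication requires versioning on both buckets):

# Cross-region replication requires versioning on both buckets.
aws s3api put-bucket-versioning --bucket source-bucket \
    --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket dest-bucket \
    --versioning-configuration Status=Enabled

# Replicate every new object written to source-bucket into dest-bucket.
aws s3api put-bucket-replication --bucket source-bucket \
    --replication-configuration '{
        "Role": "arn:aws:iam::123456789012:role/replication-role",
        "Rules": [{
            "Status": "Enabled",
            "Prefix": "",
            "Destination": {"Bucket": "arn:aws:s3:::dest-bucket"}
        }]
    }'

Note that replication only applies to objects written after the configuration is in place; existing objects still need a one-time copy (for example with aws s3 sync).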

+2

S3 buckets != baskets

From their site:

Data Durability and Reliability

Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 Region. To help ensure durability, Amazon S3 PUT and COPY operations synchronously store your data across multiple facilities before returning SUCCESS. Once stored, Amazon S3 maintains the durability of your objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of data stored using checksums. If corruption is detected, it is repaired using redundant data. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.

Amazon S3's standard storage is:

  • Backed with the Amazon S3 Service Level Agreement.
  • Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.
  • Designed to sustain the concurrent loss of data in two facilities.

Amazon S3 provides further protection with Versioning. You can use Versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. This allows you to easily recover from both unintended user actions and application failures. By default, requests will retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. Storage rates apply for every version stored.

It is very durable.
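
As a concrete illustration of the versioning behavior quoted above (bucket, key, and version ID are placeholders; versioning must first be enabled on the bucket):

# List the stored versions of one object.
aws s3api list-object-versions --bucket mybucket --prefix backup.tar

# GET returns the newest version by default; fetch an older one
# explicitly using a version ID from the listing above.
aws s3api get-object --bucket mybucket --key backup.tar \
    --version-id VERSION_ID backup-old.tar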

+1
