Backup strategies for AWS S3 bucket

I'm looking for some advice or best practices for backing up an S3 bucket.
The purpose of backing up the S3 data is to prevent data loss due to the following:

  1. A failure of S3 itself
  2. Accidentally deleting the data from S3

After some investigation, I see the following options:

  1. Use versioning http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
  2. Copy from one S3 bucket to another using the AWS SDK
  3. Back up to Amazon Glacier
  4. Back up to a production server, which is itself backed up

Which option should I choose, and how safe would it be to store data only on S3? I'd like to hear your opinions.
Some useful links:


Taking into account the related link, which explains that S3 has 99.999999999% durability, I would discard your concern #1. Seriously.

Now, if #2 is a valid use case and a real concern for you, I would definitely stick with options #1 or #3. Which one of them? It really depends on some questions:

  • Do you need any of the other versioning features, or is it only to avoid accidental overwrites/deletes?
  • Is the extra cost imposed by versioning affordable?
  • Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable. Is this OK for you?

Unless your storage use is really huge, I would stick with bucket versioning. This way, you won't need any extra code or workflow to back up data to Glacier, to other buckets, or even to any other server (which is really a bad choice IMHO; please forget about it).
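
For reference, turning versioning on is a one-liner with the AWS CLI; this is a minimal sketch where the bucket name is a placeholder:

$ aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled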

You can back up your S3 data using the following methods:

  1. Schedule a backup process using AWS Data Pipeline; this can be done in the two ways mentioned below:

    a. Using the CopyActivity of Data Pipeline, which lets you copy from one S3 bucket to another S3 bucket.

    b. Using the ShellCommandActivity of Data Pipeline together with "S3DistCp" to recursively copy S3 folders from one bucket to another (in parallel).

  2. Use versioning inside the S3 bucket to maintain different versions of the data.

  3. Use Glacier to back up your data. Use it when you don't need to restore the backup quickly to the original buckets (it takes some time to get data back from Glacier, as it is stored in a compressed format), or when you want to save some cost by avoiding another S3 bucket for backups. This option can easily be set up using a lifecycle rule on the S3 bucket you want to back up (a minimal CLI sketch follows this list).
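
As a rough sketch of that lifecycle approach (this assumes the aws CLI is configured; the bucket name and rule ID below are placeholders), a rule transitioning objects to Glacier after 30 days could look like this:

lifecycle.json:

{
  "Rules": [
    {
      "ID": "archive-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}

$ aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json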

Option 1 can give you more security, say in case you accidentally delete your original S3 bucket, and another benefit is that you can store your backups in date-wise folders in another S3 bucket; this way, you know what data you had on a particular date and can restore a backup from a specific date. It all depends on your use case.

You'd think there would be an easier way by now to just hold some sort of incremental backups in a different region.

All the suggestions above are not really simple or elegant solutions. I don't really consider Glacier an option, as I think that's more of an archival solution than a backup solution. When I think backup, I think disaster recovery from a junior developer recursively deleting a bucket, or perhaps an exploit or bug in your app that deletes stuff from S3.

To me, the best solution would be a script that just backs up one bucket to another region, one daily and one weekly, so that if something terrible happens you can just switch regions. I don't have a setup like this; I've looked into it but haven't gotten around to doing it, because it would take a bit of effort, which is why I wish there were some stock solution to use. A rough sketch of what I mean follows.
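
Something along these lines is what I have in mind; it's an untested sketch, and the bucket names, region suffix, and paths are placeholders:

#!/bin/sh
# Hypothetical cross-region backup: mirror the production bucket into a
# backup bucket created in another region (bucket names are examples).
SRC="s3://my-prod-bucket"
DST="s3://my-backup-bucket-eu-west-1"

# sync only copies new/changed objects; --delete is deliberately omitted
# so deletions in the source never propagate to the backup
aws s3 sync "$SRC" "$DST"

Scheduled from cron, a daily and a weekly entry would cover the rotation I described:

# m h dom mon dow command
0 2 * * * /home/ubuntu/s3/cross-region-daily.sh >> /var/log/s3-backup.log 2>&1
0 3 * * 0 /home/ubuntu/s3/cross-region-weekly.sh >> /var/log/s3-backup.log 2>&1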

Originally posted on my blog: http://eladnava.com/backing-up-your-amazon-s3-buckets-to-ec2/

Sync Your S3 Bucket to an EC2 Server Periodically

This can be easily achieved by utilizing multiple command line utilities that make it possible to sync a remote S3 bucket to the local filesystem.

s3cmd
At first, s3cmd looked extremely promising. However, after trying it on my enormous S3 bucket -- it failed to scale, erroring out with a Segmentation fault. It did work fine on small buckets, though. Since it did not work for huge buckets, I set out to find an alternative.

s4cmd
The newer, multi-threaded alternative to s3cmd. Looked even more promising, however, I noticed that it kept re-downloading files that were already present on the local filesystem. That is not the kind of behavior I was expecting from the sync command. It should check whether the remote file already exists locally (hash/filesize checking would be neat) and skip it in the next sync run on the same target directory. I opened an issue (bloomreach/s4cmd/#46) to report this strange behavior. In the meantime, I set out to find another alternative.

awscli
And then I found awscli. This is Amazon's official command line interface for interacting with their different cloud services, S3 included.


It provides a useful sync command that quickly and easily downloads the remote bucket files to your local filesystem.

$ aws s3 sync s3://your-bucket-name /home/ubuntu/s3/your-bucket-name/

Benefits:

  • Scalable - supports huge S3 buckets
  • Multi-threaded - syncs the files faster by utilizing multiple threads
  • Smart - only syncs new or updated files
  • Fast - thanks to its multi-threaded nature and smart sync algorithm

Accidental Deletion

Conveniently, the sync command won't delete files in the destination folder (local filesystem) if they are missing from the source (S3 bucket), and vice-versa. This is perfect for backing up S3 -- in case files get deleted from the bucket, re-syncing it will not delete them locally. And in case you delete a local file, it won't be deleted from the source bucket either.
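
If you ever do want a true mirror that propagates deletions, sync accepts a --delete flag; for the backup use case described here, you should leave it off:

# WARNING: --delete removes local files that are gone from the bucket
$ aws s3 sync s3://your-bucket-name /home/ubuntu/s3/your-bucket-name/ --delete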

Setting up awscli on Ubuntu 14.04 LTS

Let's begin by installing awscli. There are several ways to do this, however, I found it easiest to install it via apt-get.

$ sudo apt-get install awscli

Configuration

Next, we need to configure awscli with our Access Key ID & Secret Key, which you must obtain from IAM by creating a user and attaching the AmazonS3ReadOnlyAccess policy. Using a read-only policy will also prevent you, or anyone who gains access to these credentials, from deleting your S3 files. Make sure to enter your S3 region, such as us-east-1.

$ aws configure

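If you prefer to do the IAM part from the command line as well, a sketch like this should work when run from an already-authenticated admin session (the user name s3-backup is an example; the policy ARN is the AWS-managed one):

$ aws iam create-user --user-name s3-backup
$ aws iam attach-user-policy --user-name s3-backup --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
$ aws iam create-access-key --user-name s3-backup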

Preparation

Let's prepare the local S3 backup directory, preferably in /home/ubuntu/s3/{BUCKET_NAME}. Make sure to replace {BUCKET_NAME} with your actual bucket name.

$ mkdir -p /home/ubuntu/s3/{BUCKET_NAME}

Initial Sync

Let's go ahead and sync the bucket for the first time with the following command:

$ aws s3 sync s3://{BUCKET_NAME} /home/ubuntu/s3/{BUCKET_NAME}/

Assuming the bucket exists, the AWS credentials and region are correct, and the destination folder is valid, awscli will start to download the entire bucket to the local filesystem.

Depending on the size of the bucket and your Internet connection, it could take anywhere from a few seconds to hours. When that's done, we'll go ahead and set up an automatic cron job to keep the local copy of the bucket up to date.

Setting up a Cron Job

Go ahead and create a sync.sh file in /home/ubuntu/s3:

$ nano /home/ubuntu/s3/sync.sh

Copy and paste the following code into sync.sh:

#!/bin/sh

# Echo the current date and time
echo '-----------------------------'
date
echo '-----------------------------'
echo ''

# Echo script initialization
echo 'Syncing remote S3 bucket...'

# Actually run the sync command (replace {BUCKET_NAME} with your S3 bucket name)
/usr/bin/aws s3 sync s3://{BUCKET_NAME} /home/ubuntu/s3/{BUCKET_NAME}/

# Echo script completion
echo 'Sync complete'

Make sure to replace {BUCKET_NAME} with your S3 bucket name, twice throughout the script.

Pro tip: You should use /usr/bin/aws to link to the aws binary, as crontab executes commands in a limited shell environment and won't be able to find the executable on its own.
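
You can confirm where the binary actually lives on your system with:

$ which aws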

Next, make sure to chmod the script so it can be executed by crontab.

$ sudo chmod +x /home/ubuntu/s3/sync.sh

Let's try running the script to make sure it actually works:

$ /home/ubuntu/s3/sync.sh

The output should be similar to this:

[screenshot: sync.sh output]

Next, let's edit the current user's crontab by executing the following command:

$ crontab -e

If this is your first time executing crontab -e, you'll need to select a preferred editor. I'd recommend selecting nano as it's the easiest for beginners to work with.

Sync Frequency

We need to tell crontab how often to run our script and where the script resides on the local filesystem by writing a command. The format for this command is as follows:

m h  dom mon dow   command

The following command configures crontab to run the sync.sh script every hour (specified via the minute:0 and hour:* parameters) and to have it pipe the script's output to a sync.log file in our s3 directory:

0 * * * * /home/ubuntu/s3/sync.sh > /home/ubuntu/s3/sync.log

You should add this line to the bottom of the crontab file you are editing. Then, go ahead and save the file to disk by pressing Ctrl + O and then Enter. You can then exit nano by pressing Ctrl + X. crontab will now run the sync task every hour.

Pro tip: You can verify that the hourly cron job is being executed successfully by inspecting /home/ubuntu/s3/sync.log, checking its contents for the execution date & time, and inspecting the logs to see which new files have been synced.
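
For example, to check the most recent run (the path matches the cron line above):

$ tail /home/ubuntu/s3/sync.log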

All set! Your S3 bucket will now get synced to your EC2 server every hour automatically, and you should be good to go. Do note that over time, as your S3 bucket gets bigger, you may have to increase your EC2 server's EBS volume size to accommodate new files. You can always increase your EBS volume size by following this guide.

How about using the readily available Cross-Region Replication feature on the S3 bucket itself? Here are some useful articles about the feature:
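
For a concrete starting point, replication can also be enabled from the CLI. This is only a sketch: both buckets must already exist with versioning enabled, and the account ID, role, and bucket names below are placeholders:

replication.json:

{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::my-backup-bucket" }
    }
  ]
}

$ aws s3api put-bucket-replication --bucket my-source-bucket --replication-configuration file://replication.json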

While this question was posted some time ago, I thought it important to mention MFA delete protection with the other solutions. The OP is trying to solve for the accidental deletion of data. Multi-factor authentication (MFA) comes into play in two different scenarios here:

  1. Permanently deleting object versions - enable MFA Delete on the bucket's versioning configuration.

  2. Accidentally deleting the bucket itself - set up a bucket policy denying delete without MFA authentication (CLI sketches for both follow this list).
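
As a rough sketch of both (the bucket name, account ID, and MFA device ARN are placeholders; note that MFA Delete can only be enabled using the root account's MFA device):

# 1. Enable versioning with MFA Delete on the bucket
$ aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"

# 2. Deny bucket deletion unless the caller authenticated with MFA
bucket-policy.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteBucket",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": { "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" } }
    }
  ]
}

$ aws s3api put-bucket-policy --bucket my-bucket --policy file://bucket-policy.json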

Couple these with cross-region replication and versioning to reduce the risk of data loss and improve the recovery scenarios.

Here is a blog post on this topic with more detail.

If you have a lot of data, the first sync of an existing bucket will take a long time. In my case, I had 400 GB, and the first sync took about 3 hours. So I think making a replica is a good solution for S3 bucket backup.

As this topic was created a long time ago and is still quite relevant, here is some updated news:

External backup

Nothing has changed; you can still use the CLI, or any other tool, to schedule a copy somewhere else (inside or outside of AWS).

There are tools to do that, and the previous answers were very specific.

"Inside" backup

S3 now supports versioning with lifecycle management for previous versions. This means that you can create and use a bucket normally and let S3 manage the lifecycle within the same bucket.

An example of a possible configuration, if you delete a file, would be:

  1. The file is marked as deleted (still available, but "invisible" to normal operations)
  2. The file is moved to Glacier after 7 days
  3. The file is removed after 30 days

You first need to activate versioning, then go to the Lifecycle configuration. Pretty straightforward: previous versions only, and deletion is what you want.

[screenshot: S3 Lifecycle panel]

Then, define your policy. You can add as many actions as you want (but each transition costs you). You can't store objects in Glacier for less than 30 days. A CLI sketch matching the example above follows.

[screenshot: S3 Lifecycle actions panel]
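
For those who prefer the CLI to the console, here is a sketch of the same policy matching the 7-day/30-day example above; the bucket name and rule ID are placeholders, and step 1 of the example (the delete marker) comes from versioning itself rather than from this rule:

versions-lifecycle.json:

{
  "Rules": [
    {
      "ID": "previous-versions-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "NoncurrentVersionTransitions": [
        { "NoncurrentDays": 7, "StorageClass": "GLACIER" }
      ],
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }
  ]
}

$ aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://versions-lifecycle.json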