How to save an S3 object to a file using boto3

I'm trying to do a "hello world" with the new boto3 client for AWS.

My use case is fairly simple: get an object from S3 and save it to a file.

In boto 2.X I would do it like this:

import boto
key = boto.connect_s3().get_bucket('foo').get_key('foo')
key.get_contents_to_filename('/tmp/foo')

In boto3, I can't find a clean way to do the same thing, so I'm manually iterating over the "Streaming" object:

import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'wb') as f:
    chunk = key['Body'].read(1024*8)
    while chunk:
        f.write(chunk)
        chunk = key['Body'].read(1024*8)

or

import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'wb') as f:
    for chunk in iter(lambda: key['Body'].read(4096), b''):
        f.write(chunk)

And it works fine. I was wondering if there is any "native" boto3 function that will do the same task?


There's a customization that went into boto3 recently which helps with this (among other things). It is currently exposed on the low-level S3 client, and can be used like this:

import boto3

s3_client = boto3.client('s3')
open('hello.txt', 'w').write('Hello, world!')


# Upload the file to S3
s3_client.upload_file('hello.txt', 'MyBucket', 'hello-remote.txt')


# Download the file from S3
s3_client.download_file('MyBucket', 'hello-remote.txt', 'hello2.txt')
print(open('hello2.txt').read())

These functions will automatically handle reading/writing files in chunks for large files, as well as doing multipart uploads in parallel.

Note that s3_client.download_file won't create a directory; you can create it with pathlib.Path('/path/to/file.txt').parent.mkdir(parents=True, exist_ok=True).
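
Putting those two notes together, here is a minimal sketch (the bucket and key are the ones from above, the local path is just a placeholder) that creates the target directory first and, optionally, tunes the transfer through boto3.s3.transfer.TransferConfig:

import pathlib

import boto3
from boto3.s3.transfer import TransferConfig

s3_client = boto3.client('s3')

# download_file will not create the directory, so make sure it exists first
destination = pathlib.Path('/tmp/downloads/hello-remote.txt')
destination.parent.mkdir(parents=True, exist_ok=True)

# optional: control when multipart kicks in and how many threads are used
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # switch to multipart above 8 MB
    multipart_chunksize=8 * 1024 * 1024,
    max_concurrency=10,
)

s3_client.download_file('MyBucket', 'hello-remote.txt', str(destination), Config=config)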

boto3 now has a nicer interface than the client:

resource = boto3.resource('s3')
my_bucket = resource.Bucket('MyBucket')
my_bucket.download_file(key, local_filename)

This isn't immensely better than the client in the accepted answer (although the docs say it does a better job retrying uploads and downloads on failure), but considering that resources are generally more ergonomic (for example, the S3 bucket and object resources are nicer than the client methods), this does allow you to stay at the resource layer without having to drop down.

Resources can generally be created the same way as clients; they take all or most of the same arguments and just forward them to their internal clients.
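
For example (a sketch; the region and profile names are placeholders, not requirements):

import boto3

# a resource accepts the same construction arguments as a client
# and forwards them to the client it wraps
resource = boto3.resource('s3', region_name='us-east-1')

# equivalently, build it from a session if you use named profiles
session = boto3.Session(profile_name='default')
resource = session.resource('s3')

# the wrapped low-level client is still reachable if you need it
client = resource.meta.client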

For those of you who would like to simulate the set_contents_from_string boto2 method, you can try

import boto3
from cStringIO import StringIO


s3c = boto3.client('s3')
contents = 'My string to save to S3 object'
target_bucket = 'hello-world.by.vor'
target_file = 'data/hello.txt'
fake_handle = StringIO(contents)


# notice if you do fake_handle.read() it reads like a file handle
s3c.put_object(Bucket=target_bucket, Key=target_file, Body=fake_handle.read())

For Python 3:

In Python 3, both the StringIO and cStringIO modules are gone. Use the StringIO import like:

from io import StringIO

To support both versions:

try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO
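
With either import in place, a Python 3 equivalent of the snippet above (same bucket and key; the body just has to end up as bytes) could look like this sketch:

import io

import boto3

s3c = boto3.client('s3')
contents = 'My string to save to S3 object'

# in Python 3 the object body must be bytes, so either encode the string directly...
s3c.put_object(Bucket='hello-world.by.vor', Key='data/hello.txt',
               Body=contents.encode('utf-8'))

# ...or wrap it in an in-memory binary file handle
fake_handle = io.BytesIO(contents.encode('utf-8'))
s3c.put_object(Bucket='hello-world.by.vor', Key='data/hello.txt', Body=fake_handle)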
# Preface: File is json with contents: {'name': 'Android', 'status': 'ERROR'}


import io
import json

import boto3


s3 = boto3.resource('s3')


obj = s3.Object('my-bucket', 'key-to-file.json')
data = io.BytesIO()
obj.download_fileobj(data)


# data now holds the bytes of the object; convert them to a dict:
new_dict = json.loads(data.getvalue().decode("utf-8"))


print(new_dict['status'])
# Should print "ERROR"

Note: I'm assuming you have configured authentication separately. The code below is to download a single object from an S3 bucket.

import boto3


# initiate the S3 resource
s3 = boto3.resource('s3')


# download the object to a local file
s3.Bucket('mybucket').download_file('hello.txt', '/tmp/hello.txt')
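
The upload direction works the same way on the resource; a sketch reusing the bucket name from above:

import boto3

s3 = boto3.resource('s3')

# upload a local file to the bucket under the key 'hello.txt'
s3.Bucket('mybucket').upload_file('/tmp/hello.txt', 'hello.txt')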

When you want to read a file with a different configuration than the default one, feel free to use either mpu.aws.s3_download(s3path, destination) directly or the copy-pasted code:

import os

import boto3


def s3_download(source, destination,
                exists_strategy='raise',
                profile_name=None):
    """
    Copy a file from an S3 source to a local destination.

    Parameters
    ----------
    source : str
        Path starting with s3://, e.g. 's3://bucket-name/key/foo.bar'
    destination : str
    exists_strategy : {'raise', 'replace', 'abort'}
        What is done when the destination already exists?
    profile_name : str, optional
        AWS profile

    Raises
    ------
    botocore.exceptions.NoCredentialsError
        Botocore is not able to find your credentials. Either specify
        profile_name or add the environment variables AWS_ACCESS_KEY_ID,
        AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
        See https://boto3.readthedocs.io/en/latest/guide/configuration.html
    """
    exists_strategies = ['raise', 'replace', 'abort']
    if exists_strategy not in exists_strategies:
        raise ValueError('exists_strategy \'{}\' is not in {}'
                         .format(exists_strategy, exists_strategies))
    session = boto3.Session(profile_name=profile_name)
    s3 = session.resource('s3')
    bucket_name, key = _s3_path_split(source)
    if os.path.isfile(destination):
        if exists_strategy == 'raise':
            raise RuntimeError('File \'{}\' already exists.'
                               .format(destination))
        elif exists_strategy == 'abort':
            return
    s3.Bucket(bucket_name).download_file(key, destination)


from collections import namedtuple


S3Path = namedtuple("S3Path", ["bucket_name", "key"])


def _s3_path_split(s3_path):
    """
    Split an S3 path into bucket and key.

    Parameters
    ----------
    s3_path : str

    Returns
    -------
    splitted : (str, str)
        (bucket, key)

    Examples
    --------
    >>> _s3_path_split('s3://my-bucket/foo/bar.jpg')
    S3Path(bucket_name='my-bucket', key='foo/bar.jpg')
    """
    if not s3_path.startswith("s3://"):
        raise ValueError(
            "s3_path is expected to start with 's3://', "
            "but was {}".format(s3_path)
        )
    bucket_key = s3_path[len("s3://"):]
    bucket_name, key = bucket_key.split("/", 1)
    return S3Path(bucket_name, key)
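
Usage could then look like this (bucket, key, destination and profile name are placeholders):

# overwrite the local file if it already exists, using the 'dev' AWS profile
s3_download('s3://my-bucket/foo/bar.jpg', '/tmp/bar.jpg',
            exists_strategy='replace', profile_name='dev')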

If you wish to download a version of a file, you need to use get_object:

import boto3


bucket = 'bucketName'
prefix = 'path/to/file/'
filename = 'fileName.ext'


s3c = boto3.client('s3')
s3r = boto3.resource('s3')


if __name__ == '__main__':
    for version in s3r.Bucket(bucket).object_versions.filter(Prefix=prefix + filename):
        file = version.get()
        version_id = file.get('VersionId')
        obj = s3c.get_object(
            Bucket=bucket,
            Key=prefix + filename,
            VersionId=version_id,
        )
        with open(f"{filename}.{version_id}", 'wb') as f:
            for chunk in obj['Body'].iter_chunks(chunk_size=4096):
                f.write(chunk)

Reference: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/response.html