Listing contents of a bucket with boto3

How can I see what's inside a bucket in S3 with boto3? (i.e. do an "ls")?

Doing the following:

import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some/path/')

returns:

s3.Bucket(name='some/path/')

How do I see its contents?


One way to see the contents is:

for my_bucket_object in my_bucket.objects.all():
    print(my_bucket_object)

This is similar to an 'ls', but it does not take into account the prefix folder convention and will list all objects in the bucket. It is left to the reader to filter out prefixes that are part of the key name.
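
If you only want the keys under a particular "folder", the resource API can also filter server-side by prefix (a minimal sketch; the prefix below is an assumed example):

for obj in my_bucket.objects.filter(Prefix='some/prefix/'):
    print(obj.key)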

In Python 2:

from boto.s3.connection import S3Connection

conn = S3Connection()  # assumes boto.cfg setup
bucket = conn.get_bucket('bucket_name')
for obj in bucket.get_all_keys():
    print(obj.key)

In Python 3:

from boto3 import client

conn = client('s3')  # again assumes boto.cfg setup, assume AWS S3
for key in conn.list_objects(Bucket='bucket_name')['Contents']:
    print(key['Key'])
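
Note that list_objects returns at most 1000 keys per call; for larger buckets, use list_objects_v2 with continuation tokens or a paginator, as shown in later answers.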

I'm assuming you have configured authentication separately.

import boto3

s3 = boto3.resource('s3')
my_bucket = s3.Bucket('bucket_name')

for file in my_bucket.objects.all():
    print(file.key)

If you want to pass the ACCESS and SECRET keys (which you should not do, because it is not secure):

from boto3.session import Session

ACCESS_KEY = 'your_access_key'
SECRET_KEY = 'your_secret_key'

session = Session(aws_access_key_id=ACCESS_KEY,
                  aws_secret_access_key=SECRET_KEY)
s3 = session.resource('s3')
your_bucket = s3.Bucket('your_bucket')

for s3_file in your_bucket.objects.all():
    print(s3_file.key)

A more parsimonious way, rather than iterating through a for loop, is to just print the raw object containing all files inside your S3 bucket:

session = Session(aws_access_key_id=aws_access_key_id,
                  aws_secret_access_key=aws_secret_access_key)
s3 = session.resource('s3')
bucket = s3.Bucket('bucket_name')

files_in_s3 = bucket.objects.all()
# you can print this iterable with print(list(files_in_s3))
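
If you'd rather not hard-code credentials at all, the same Session API accepts a named profile from ~/.aws/credentials (a sketch; 'my-profile' is an assumed placeholder):

import boto3

# credentials are resolved from ~/.aws/credentials instead of the source code
session = boto3.Session(profile_name='my-profile')
s3 = session.resource('s3')
for s3_file in s3.Bucket('your_bucket').objects.all():
    print(s3_file.key)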

This is how I did it, including the authentication method:

import boto3

s3_client = boto3.client(
    's3',
    aws_access_key_id='access_key',
    aws_secret_access_key='access_key_secret',
    config=boto3.session.Config(signature_version='s3v4'),
    region_name='region'
)


# wrapped in a function so the key to check can be passed in
def key_exists(key):
    response = s3_client.list_objects(Bucket='bucket_name', Prefix=key)
    if 'Contents' in response:
        # Object / key exists!
        return True
    else:
        # Object / key DOES NOT exist!
        return False
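
Note that a Prefix match succeeds if any key starts with the given prefix. To test a single exact key instead, head_object can be used (a sketch; the bucket and key names are placeholders):

import boto3
from botocore.exceptions import ClientError

s3_client = boto3.client('s3')
try:
    # head_object fetches metadata for one exact key, raising on a miss
    s3_client.head_object(Bucket='bucket_name', Key='exact/key.txt')
    exists = True
except ClientError:
    exists = False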

ObjectSummary:

There are two identifiers that are attached to the ObjectSummary:

  • bucket_name
  • key

boto3 S3: ObjectSummary

More on object keys from the AWS S3 documentation:

Object keys:

When you create an object, you specify the key name, which uniquely identifies the object in the bucket. For example, in the Amazon S3 console (see AWS Management Console), when you highlight a bucket, a list of objects in your bucket appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.

The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of folders. Suppose that your bucket (admin-created) has four objects with the following object keys:

Development/Projects1.xls

Finance/statement1.pdf

Private/taxdocument.pdf

s3-dg.pdf

Reference:

AWS S3: Object Keys
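
To see how a delimiter rolls keys up into that logical hierarchy, a sketch (assuming a bucket containing the four keys above):

import boto3

client = boto3.client('s3')
# keys sharing a prefix up to the delimiter are grouped into CommonPrefixes
resp = client.list_objects_v2(Bucket='admin-created', Delimiter='/')
for cp in resp.get('CommonPrefixes', []):
    print(cp['Prefix'])   # Development/, Finance/, Private/
for obj in resp.get('Contents', []):
    print(obj['Key'])     # s3-dg.pdf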

Here is some example code that demonstrates how to get the bucket name and the object key.

Example:

import boto3


def main():

    def enumerate_s3():
        s3 = boto3.resource('s3')
        for bucket in s3.buckets.all():
            print("Name: {}".format(bucket.name))
            print("Creation Date: {}".format(bucket.creation_date))
            for object in bucket.objects.all():
                print("Object: {}".format(object))
                print("Object bucket_name: {}".format(object.bucket_name))
                print("Object key: {}".format(object.key))

    enumerate_s3()


if __name__ == '__main__':
    main()

In order to handle large key listings (i.e. when the directory list is greater than 1000 items), I used the following code to accumulate key values (i.e. filenames) across multiple lists (thanks to Amelio above for the first line). Code is for Python 3:

from boto3 import client


# collected into a function so the early returns work as written
def get_file_list():
    bucket_name = "my_bucket"
    prefix = "my_key/sub_key/lots_o_files"

    s3_conn = client('s3')  # type: BaseClient  ## again assumes boto.cfg setup, assume AWS S3
    s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/")

    if 'Contents' not in s3_result:
        # print(s3_result)
        return []

    file_list = []
    for key in s3_result['Contents']:
        file_list.append(key['Key'])
    print(f"List count = {len(file_list)}")

    while s3_result['IsTruncated']:
        continuation_key = s3_result['NextContinuationToken']
        s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/",
                                            ContinuationToken=continuation_key)
        for key in s3_result['Contents']:
            file_list.append(key['Key'])
        print(f"List count = {len(file_list)}")
    return file_list

My s3 keys utility function is essentially an optimized version of @Hephaestus's answer:

import boto3


s3_paginator = boto3.client('s3').get_paginator('list_objects_v2')


def keys(bucket_name, prefix='/', delimiter='/', start_after=''):
    prefix = prefix.lstrip(delimiter)
    start_after = (start_after or prefix) if prefix.endswith(delimiter) else start_after
    for page in s3_paginator.paginate(Bucket=bucket_name, Prefix=prefix, StartAfter=start_after):
        for content in page.get('Contents', ()):
            yield content['Key']

In my tests (boto3 1.9.84), it's significantly faster than the equivalent (but simpler) code:

import boto3


def keys(bucket_name, prefix='/', delimiter='/'):
    prefix = prefix.lstrip(delimiter)
    bucket = boto3.resource('s3').Bucket(bucket_name)
    return (_.key for _ in bucket.objects.filter(Prefix=prefix))

As S3 guarantees UTF-8 binary sorted results, a start_after optimization has been added to the first function.
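
Usage is just iteration over the generator (the bucket and prefix are assumed examples):

for key in keys('my-bucket', prefix='some/prefix/'):
    print(key)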

# To print all filenames in a bucket
import boto3

s3 = boto3.client('s3')


def get_s3_keys(bucket):
    """Get a list of keys in an S3 bucket."""
    files = []
    resp = s3.list_objects_v2(Bucket=bucket)
    for obj in resp['Contents']:
        files.append(obj['Key'])
    return files


filenames = get_s3_keys('your_bucket_name')
print(filenames)


# To print all filenames in a certain directory in a bucket
import boto3

s3 = boto3.client('s3')


def get_s3_keys(bucket, prefix):
    """Get a list of keys in an S3 bucket under a given prefix."""
    files = []
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in resp['Contents']:
        files.append(obj['Key'])
        print(obj['Key'])
    return files


filenames = get_s3_keys('your_bucket_name', 'folder_name/sub_folder_name/')
print(filenames)
Update: The easiest way is to use awswrangler:

import awswrangler as wr
wr.s3.list_objects('s3://bucket_name')
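
The same call also accepts a prefix inside the bucket (an assumed example path):

wr.s3.list_objects('s3://bucket_name/some/prefix/')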

With a slight modification to @Hephaestus's code in an above comment, I wrote the method below to list folders and objects (files) in a given path. It works similar to the s3 ls command.

import boto3


def s3_ls(profile=None, bucket_name=None, folder_path=None):
    folders = []
    files = []
    result = dict()
    # default to the bucket root when no folder_path is given
    prefix = folder_path if folder_path else ''
    session = boto3.Session(profile_name=profile)
    s3_conn = session.client('s3')
    s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Delimiter="/", Prefix=prefix)
    if 'Contents' not in s3_result and 'CommonPrefixes' not in s3_result:
        return []

    if s3_result.get('CommonPrefixes'):
        for folder in s3_result['CommonPrefixes']:
            folders.append(folder.get('Prefix'))

    if s3_result.get('Contents'):
        for key in s3_result['Contents']:
            files.append(key['Key'])

    while s3_result['IsTruncated']:
        continuation_key = s3_result['NextContinuationToken']
        s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Delimiter="/",
                                            ContinuationToken=continuation_key, Prefix=prefix)
        if s3_result.get('CommonPrefixes'):
            for folder in s3_result['CommonPrefixes']:
                folders.append(folder.get('Prefix'))
        if s3_result.get('Contents'):
            for key in s3_result['Contents']:
                files.append(key['Key'])

    if folders:
        result['folders'] = sorted(folders)
    if files:
        result['files'] = sorted(files)
    return result

This lists all objects / folders in a given path. folder_path can be left as None, and the method will list the immediate contents of the root of the bucket.
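
For example (a sketch; the profile, bucket, and path are placeholders):

result = s3_ls(profile='default', bucket_name='my-bucket', folder_path='some/prefix/')
print(result)   # {'folders': [...], 'files': [...]}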

Here is the solution:

import boto3

s3 = boto3.resource('s3')
BUCKET_NAME = 'Your S3 Bucket Name'
allFiles = s3.Bucket(BUCKET_NAME).objects.all()
for file in allFiles:
    print(file.key)

It can also be done as follows:

s3 = boto3.client('s3')
csv_files = s3.list_objects_v2(Bucket=s3_bucket_path)
for obj in csv_files['Contents']:
    key = obj['Key']

So you're asking for the equivalent of aws s3 ls in boto3. That would be listing all the top-level folders and files. This is the closest I could get; it only lists all the top-level folders. Surprising how difficult such a simple operation is.

import boto3


def s3_ls():
    s3 = boto3.resource('s3')
    bucket = s3.Bucket('example-bucket')
    result = bucket.meta.client.list_objects(Bucket=bucket.name,
                                             Delimiter='/')
    for o in result.get('CommonPrefixes', []):
        print(o.get('Prefix'))

Here is a simple function that returns the filenames of all files, or files with certain types such as 'json' or 'jpg':

def get_file_list_s3(bucket, prefix="", file_extension=None):
    """Return the list of all file paths (prefix + file name) with certain type or all
    Parameters
    ----------
    bucket: str
        The name of the bucket. For example, if your bucket is "s3://my_bucket" then it should be "my_bucket"
    prefix: str
        The full path to the 'folder' of the files (objects). For example, if your files are in
        s3://my_bucket/recipes/deserts then it should be "recipes/deserts". Default: ""
    file_extension: str
        The type of the files. If you want all, just leave it None. If you only want "json" files then it
        should be "json". Default: None
    Return
    ------
    file_names: list
        The list of file names including the prefix
    """
    import boto3
    s3 = boto3.resource('s3')
    my_bucket = s3.Bucket(bucket)
    file_objs = my_bucket.objects.filter(Prefix=prefix).all()
    # keep every key when file_extension is None, otherwise match the extension
    file_names = [file_obj.key for file_obj in file_objs
                  if file_extension is None or file_obj.key.split(".")[-1] == file_extension]
    return file_names
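
For example, to fetch only the .json keys under the docstring's example prefix (names are placeholders):

json_files = get_file_list_s3('my_bucket', prefix='recipes/deserts', file_extension='json')
print(json_files)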

Here is how I used to do it:

import boto3
s3 = boto3.resource('s3')
bucket=s3.Bucket("bucket_name")
contents = [_.key for _ in bucket.objects.all() if "subfolders/ifany/" in _.key]

Using cloudpathlib

cloudpathlib provides a convenience wrapper so that you can use the simple pathlib API to interact with AWS S3 (and Azure blob storage, GCS, etc.). You can install it with pip install "cloudpathlib[s3]".

pathlib一样,你可以使用globiterdir来列出目录的内容。

Here is an example with a public AWS S3 bucket that you can copy and paste to run.

from cloudpathlib import CloudPath

s3_path = CloudPath("s3://ladi/Images/FEMA_CAP/2020/70349")

# list items with glob
list(s3_path.glob("*"))[:3]
#> [ S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0001_5a63d42e-27c6-448a-84f1-bfc632125b8e.jpg'),
#>   S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0002_a89f1b79-786f-4dac-9dcc-609fb1a977b1.jpg'),
#>   S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0003_02c30af6-911e-4e01-8c24-7644da2b8672.jpg')]

# list items with iterdir
list(s3_path.iterdir())[:3]
#> [ S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0001_5a63d42e-27c6-448a-84f1-bfc632125b8e.jpg'),
#>   S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0002_a89f1b79-786f-4dac-9dcc-609fb1a977b1.jpg'),
#>   S3Path('s3://ladi/Images/FEMA_CAP/2020/70349/DSC_0003_02c30af6-911e-4e01-8c24-7644da2b8672.jpg')]

Created on 2021-05-21 20:38:47 PDT by reprexlite v0.4.2

import boto3

s3 = boto3.resource('s3')

## Bucket to use
my_bucket = s3.Bucket('city-bucket')

## List objects within a given prefix
for obj in my_bucket.objects.filter(Delimiter='/', Prefix='city/'):
    print(obj.key)

Output:

city/pune.csv
city/goa.csv

Running the aws cli command from a lambda function is also a good option:

import subprocess
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def run_command(command):
    command_list = command.split(' ')

    try:
        logger.info("Running shell command: \"{}\"".format(command))
        result = subprocess.run(command_list, stdout=subprocess.PIPE)
        logger.info("Command output:\n---\n{}\n---".format(result.stdout.decode('UTF-8')))
    except Exception as e:
        logger.error("Exception: {}".format(e))
        return False

    return True


def lambda_handler(event, context):
    run_command('/opt/aws s3 ls s3://bucket-name')

I spent a whole night on this issue because I just wanted to get the number of files under a subfolder, but it was also returning one extra file in the contents: the subfolder itself.

After researching, I found that this is just how S3 works. But I had a scenario where I unloaded data from Redshift into the following directory:

s3://bucket_name/subfolder/<10 number of files>

And when I used:

paginator.paginate(Bucket=price_signal_bucket_name, Prefix=new_files_folder_path+"/")

it would only return the 10 files. But when I created the folder in the S3 bucket itself, it would also return the subfolder.

Conclusion

  1. If the whole folder is uploaded to S3, then listing will only return the files under the prefix
  2. But if the folder was created in the S3 bucket itself, then listing it using a boto3 client will also return the subfolder and the files
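
If that extra folder-marker object gets in the way, one workaround (a sketch; the bucket and prefix are placeholders) is to skip keys that end with the delimiter:

import boto3

s3_client = boto3.client('s3')
resp = s3_client.list_objects_v2(Bucket='bucket_name', Prefix='subfolder/')
# console-created "folders" appear as zero-byte keys ending in '/'; drop them
real_files = [obj['Key'] for obj in resp.get('Contents', [])
              if not obj['Key'].endswith('/')]
print(real_files)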