How can I see what is inside an S3 bucket with boto3? (i.e., do an "ls")?
Doing the following:
import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some/path/')
returns:
s3.Bucket(name='some/path/')
How do I see its contents?
Current answer
This is similar to an 'ls', but it does not take the prefix/folder convention into account and will list every object in the bucket. It is left to the reader to filter out prefixes that are part of the key name.
In Python 2:
from boto.s3.connection import S3Connection
conn = S3Connection() # assumes boto.cfg setup
bucket = conn.get_bucket('bucket_name')
for obj in bucket.get_all_keys():
    print(obj.key)
In Python 3:
from boto3 import client
conn = client('s3') # again assumes boto.cfg setup, assume AWS S3
for key in conn.list_objects(Bucket='bucket_name')['Contents']:
    print(key['Key'])
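To get an 'ls'-like view that treats the '/' prefix convention as folders, a Delimiter can be passed so that "sub-folders" come back under CommonPrefixes. A minimal sketch, assuming a placeholder bucket name and prefix:
import boto3

client = boto3.client('s3')
resp = client.list_objects_v2(Bucket='bucket_name', Prefix='some/path/', Delimiter='/')

# "Directories" directly under the prefix
for common_prefix in resp.get('CommonPrefixes', []):
    print(common_prefix['Prefix'])

# Objects directly under the prefix
for obj in resp.get('Contents', []):
    print(obj['Key'])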
Other answers
I am assuming you have configured authentication separately.
import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('bucket_name')
for file in my_bucket.objects.all():
    print(file.key)
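If you only want the keys under a particular "folder", the same resource interface can filter server-side by prefix. A minimal sketch, with placeholder bucket and prefix names:
import boto3

s3 = boto3.resource('s3')
my_bucket = s3.Bucket('bucket_name')

# Only keys starting with the given prefix are returned
for obj in my_bucket.objects.filter(Prefix='folder_name/'):
    print(obj.key)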
#To print all filenames in a bucket
import boto3

s3 = boto3.client('s3')

def get_s3_keys(bucket):
    """Get a list of keys in an S3 bucket."""
    keys = []
    resp = s3.list_objects_v2(Bucket=bucket)
    for obj in resp['Contents']:
        keys.append(obj['Key'])
    return keys

filenames = get_s3_keys('your_bucket_name')
print(filenames)
#To print all filenames in a certain directory in a bucket
import boto3

s3 = boto3.client('s3')

def get_s3_keys(bucket, prefix):
    """Get a list of keys in an S3 bucket under the given prefix."""
    keys = []
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in resp['Contents']:
        print(obj['Key'])
        keys.append(obj['Key'])
    return keys

filenames = get_s3_keys('your_bucket_name', 'folder_name/sub_folder_name/')
print(filenames)
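Note that a single list_objects_v2 call returns at most 1,000 keys. For larger buckets the response has to be followed via its continuation token. A rough sketch of that loop, with a placeholder bucket name:
import boto3

s3 = boto3.client('s3')

def get_all_s3_keys(bucket, prefix=''):
    """Collect every key under a prefix, following continuation tokens."""
    keys = []
    kwargs = {'Bucket': bucket, 'Prefix': prefix}
    while True:
        resp = s3.list_objects_v2(**kwargs)
        keys.extend(obj['Key'] for obj in resp.get('Contents', []))
        if not resp.get('IsTruncated'):
            break
        kwargs['ContinuationToken'] = resp['NextContinuationToken']
    return keys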
Update: the simplest way is to use awswrangler
import awswrangler as wr
wr.s3.list_objects('s3://bucket_name')
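It returns the full s3:// paths, and a prefix can be appended to narrow the listing. A small usage sketch (bucket and folder names are placeholders):
import awswrangler as wr

# Lists only objects under the given prefix
for path in wr.s3.list_objects('s3://bucket_name/folder_name/'):
    print(path)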
If you want to pass the ACCESS and SECRET keys explicitly (which you should not do, because it is insecure):
from boto3.session import Session
ACCESS_KEY='your_access_key'
SECRET_KEY='your_secret_key'
session = Session(aws_access_key_id=ACCESS_KEY,
                  aws_secret_access_key=SECRET_KEY)
s3 = session.resource('s3')
your_bucket = s3.Bucket('your_bucket')
for s3_file in your_bucket.objects.all():
    print(s3_file.key)
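A safer alternative is to point the session at a named profile from ~/.aws/credentials instead of embedding keys in source. A minimal sketch, assuming a profile called 'my_profile' exists:
from boto3.session import Session

# Credentials are resolved from the named profile, not hard-coded
session = Session(profile_name='my_profile')
s3 = session.resource('s3')
for s3_file in s3.Bucket('your_bucket').objects.all():
    print(s3_file.key)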
My S3 keys utility function is essentially an optimized version of @Hephaestus's answer:
import boto3
s3_paginator = boto3.client('s3').get_paginator('list_objects_v2')
def keys(bucket_name, prefix='/', delimiter='/', start_after=''):
    prefix = prefix.lstrip(delimiter)
    start_after = (start_after or prefix) if prefix.endswith(delimiter) else start_after
    for page in s3_paginator.paginate(Bucket=bucket_name, Prefix=prefix, StartAfter=start_after):
        for content in page.get('Contents', ()):
            yield content['Key']
In my tests (boto3 1.9.84), it was significantly faster than the equivalent (but simpler) code:
import boto3
def keys(bucket_name, prefix='/', delimiter='/'):
    prefix = prefix.lstrip(delimiter)
    bucket = boto3.resource('s3').Bucket(bucket_name)
    return (_.key for _ in bucket.objects.filter(Prefix=prefix))
Since S3 guarantees results in UTF-8 binary sort order, the start_after optimization was added to the first function.
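Given those assumptions, the generator can be consumed like a lazy 'ls', e.g. with placeholder bucket and prefix names:
for key in keys('bucket_name', prefix='folder_name/'):
    print(key)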
I used to do it like this:
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket("bucket_name")
contents = [_.key for _ in bucket.objects.all() if "subfolders/ifany/" in _.key]
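Filtering with Prefix on the server side avoids listing the whole bucket and then discarding most keys client-side; a sketch of the same idea (the prefix is a placeholder):
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket("bucket_name")
# Only keys under the prefix are transferred
contents = [obj.key for obj in bucket.objects.filter(Prefix="subfolders/ifany/")]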