Is there a way to rename files and folders in Amazon S3? Any related suggestions are welcome.
Current answer
You can rename files and folders in an AWS S3 bucket using the AWS CLI or s3cmd commands.
With s3cmd, use the following syntax to rename a folder:
s3cmd --recursive mv s3://<s3_bucketname>/<old_foldername>/ s3://<s3_bucketname>/<new_folder_name>/
With the AWS CLI, use the following syntax to rename a folder:
aws s3 mv s3://<s3_bucketname>/<old_foldername>/ s3://<s3_bucketname>/<new_folder_name>/ --recursive
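S3 itself has no rename operation; both s3cmd and the AWS CLI implement mv as a copy followed by a delete for each object. A minimal boto3 sketch of the same idea for a single object, using placeholder bucket and key names:
import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'             # placeholder bucket name
old_key = 'old_folder/file.txt'  # placeholder source key
new_key = 'new_folder/file.txt'  # placeholder destination key

# "Rename" = copy the object to the new key, then delete the old key.
s3.copy_object(Bucket=bucket, Key=new_key,
               CopySource={'Bucket': bucket, 'Key': old_key})
s3.delete_object(Bucket=bucket, Key=old_key)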
Other answers
There is a piece of software that can perform different kinds of operations on S3 buckets.
Software name: S3 Browser
S3 Browser is a freeware Windows client for Amazon S3 and Amazon CloudFront. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. Amazon CloudFront is a content delivery network (CDN). It can be used to deliver your files using a global network of edge locations.
If it's only a one-off, you can do these operations from the command line:
(1) Rename a folder within the same bucket:
s3cmd --access_key={access_key} --secret_key={secret_key} mv s3://bucket/folder1/* s3://bucket/folder2/
(2) Rename a bucket (move its contents to another bucket):
s3cmd --access_key={access_key} --secret_key={secret_key} mv s3://bucket1/folder/* s3://bucket2/folder/
where:
{access_key} = a valid access key for your S3 client
{secret_key} = a valid secret key for your S3 client
It works fine without any issues.
Thanks
Renaming all *.csv.err files in the <<bucket>>/landing dir into *.csv files with s3cmd:
export aws_profile='foo-bar-aws-profile'
while read -r f ; do tgt_fle=$(echo $f|perl -ne 's/^(.*).csv.err/$1.csv/g;print'); \
echo s3cmd -c ~/.aws/s3cmd/$aws_profile.s3cfg mv $f $tgt_fle; \
done < <(s3cmd -r -c ~/.aws/s3cmd/$aws_profile.s3cfg ls --acl-public --guess-mime-type \
s3://$bucket | grep -i landing | grep csv.err | cut -d" " -f5)
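The same bulk rename can also be expressed with boto3 instead of the s3cmd pipeline; a rough sketch, assuming placeholder bucket and prefix names:
import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'   # placeholder; substitute your bucket
prefix = 'landing/'    # placeholder; the directory to scan

# Rename every *.csv.err key under the prefix to *.csv
# (copy + delete, since S3 has no in-place rename).
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get('Contents', []):
        key = obj['Key']
        if key.endswith('.csv.err'):
            new_key = key[:-len('.err')]
            s3.copy_object(Bucket=bucket, Key=new_key,
                           CopySource={'Bucket': bucket, 'Key': key})
            s3.delete_object(Bucket=bucket, Key=key)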
Folder structure in S3 comes with a lot of "gotchas", since the storage is effectively flat.
I have a Django project where I needed to rename a folder while still keeping the directory structure intact, meaning that empty folders also need to be copied and stored in the renamed directory.
The aws cli is great, but neither cp nor sync nor mv copies empty folders (i.e. keys ending in '/') to the new folder location, so I use a mix of boto3 and the aws cli to get the job done.
More or less, I find all the folders in the directory being renamed, then use boto3 to put them in the new location, then I cp the data with the aws cli and finally remove it.
import threading
import os
from django.conf import settings
from django.contrib import messages
from django.core.files.storage import default_storage
from django.shortcuts import redirect
from django.urls import reverse
def rename_folder(request, client_url):
    """
    :param request:
    :param client_url:
    :return:
    """
    current_property = request.session.get('property')
    if request.POST:
        # name the change
        new_name = request.POST['name']
        # old full path with www.[].com?
        old_path = request.POST['old_path']
        # remove the query string
        old_path = ''.join(old_path.split('?')[0])
        # remove the .com prefix item so we have the path in the storage
        old_path = ''.join(old_path.split('.com/')[-1])
        # remove empty values, this will happen at end due to these being folders
        old_path_list = [x for x in old_path.split('/') if x != '']
        # remove the last folder element with split()
        base_path = '/'.join(old_path_list[:-1])
        # now build the new path
        new_path = base_path + f'/{new_name}/'
        # remove empty variables
        # print(old_path_list[:-1], old_path.split('/'), old_path, base_path, new_path)
        endpoint = settings.AWS_S3_ENDPOINT_URL
        # recursively add the files
        copy_command = f"aws s3 --endpoint={endpoint} cp s3://{old_path} s3://{new_path} --recursive"
        remove_command = f"aws s3 --endpoint={endpoint} rm s3://{old_path} --recursive"
        # get_creds() is nothing special it simply returns the elements needed via boto3
        client, resource, bucket, resource_bucket = get_creds()
        path_viewing = f'{"/".join(old_path.split("/")[1:])}'
        directory_content = default_storage.listdir(path_viewing)
        # loop over folders and add them by default, aws cli does not copy empty ones
        # so this is used to accommodate
        folders, files = directory_content
        for folder in folders:
            new_key = new_path + folder + '/'
            # we must remove bucket name for this to work
            new_key = new_key.split(f"{bucket}/")[-1]
            # push this to new thread
            threading.Thread(target=put_object, args=(client, bucket, new_key,)).start()
            print(f'{new_key} added')
        # run command, which will copy all data
        os.system(copy_command)
        print('Copy Done...')
        os.system(remove_command)
        print('Remove Done...')
        # print(bucket)
        print('Folder renamed.')
        messages.success(request, f'Folder Renamed to: {new_name}')
    return redirect(request.META.get('HTTP_REFERER', f"{reverse('home', args=[client_url])}"))
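Note that get_creds() and put_object() above are project-specific helpers that are not shown. For illustration only, put_object() could be as small as a wrapper that creates a zero-byte key so the empty "folder" appears at the new location (a hypothetical sketch using the boto3 client returned by get_creds()):
def put_object(client, bucket, key):
    # Hypothetical stand-in for the helper used above: create a zero-byte
    # object whose key ends in '/' so S3 lists it as an (empty) folder.
    client.put_object(Bucket=bucket, Key=key, Body=b'')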
As Naaz answered, renaming directly in S3 is not possible.
I've attached a code snippet that will copy all the contents.
The code works; just add your AWS access key and secret key.
Here's what the code does:
-> Copy the source folder's contents (nested files and subfolders) and paste them into the destination folder
-> Once the copy is complete, delete the source folder
package com.bighalf.doc.amazon;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.List;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class Test {

    public static boolean renameAwsFolder(String bucketName, String keyName, String newName) {
        boolean result = false;
        try {
            AmazonS3 s3client = getAmazonS3ClientObject();
            List<S3ObjectSummary> fileList = s3client.listObjects(bucketName, keyName).getObjectSummaries();
            // some metadata to create empty folders - start
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentLength(0);
            InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
            // some metadata to create empty folders - end
            // final location is where the child contents of the existing folder should go
            String finalLocation = keyName.substring(0, keyName.lastIndexOf('/') + 1) + newName;
            for (S3ObjectSummary file : fileList) {
                String key = file.getKey();
                // update the child folder location with the new location
                String destinationKeyName = key.replace(keyName, finalLocation);
                if (key.charAt(key.length() - 1) == '/') {
                    // if the name ends with a '/' suffix it is a folder
                    PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, destinationKeyName, emptyContent, metadata);
                    s3client.putObject(putObjectRequest);
                } else {
                    // if the name does not end with a '/' suffix it is a file
                    CopyObjectRequest copyObjRequest = new CopyObjectRequest(bucketName,
                            file.getKey(), bucketName, destinationKeyName);
                    s3client.copyObject(copyObjRequest);
                }
            }
            boolean isFolderDeleted = deleteFolderFromAws(bucketName, keyName);
            return isFolderDeleted;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

    public static boolean deleteFolderFromAws(String bucketName, String keyName) {
        boolean result = false;
        try {
            AmazonS3 s3client = getAmazonS3ClientObject();
            // delete the folder's children
            List<S3ObjectSummary> fileList = s3client.listObjects(bucketName, keyName).getObjectSummaries();
            for (S3ObjectSummary file : fileList) {
                s3client.deleteObject(bucketName, file.getKey());
            }
            // delete the passed folder itself
            s3client.deleteObject(bucketName, keyName);
            result = true;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }

    public static void main(String[] args) {
        initializeAmazonObjects();
        boolean result = renameAwsFolder(bucketName, keyName, newName);
        System.out.println(result);
    }

    private static AWSCredentials credentials = null;
    private static AmazonS3 amazonS3Client = null;
    private static final String ACCESS_KEY = "";
    private static final String SECRET_ACCESS_KEY = "";
    private static final String bucketName = "";
    private static final String keyName = "";
    // renaming folder c to x from the key name
    private static final String newName = "";

    public static void initializeAmazonObjects() {
        credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_ACCESS_KEY);
        amazonS3Client = new AmazonS3Client(credentials);
    }

    public static AmazonS3 getAmazonS3ClientObject() {
        return amazonS3Client;
    }
}