I'm running a Docker container on Amazon EC2. Currently I've added my AWS credentials to the Dockerfile. Could you please let me know the best way to do this?
Current answer
The following one-liner works for me even when my credentials are set up by aws-okta or saml2aws:
$ docker run -v $HOME/.aws:/root/.aws:ro \
-e AWS_ACCESS_KEY_ID \
-e AWS_CA_BUNDLE \
-e AWS_CLI_FILE_ENCODING \
-e AWS_CONFIG_FILE \
-e AWS_DEFAULT_OUTPUT \
-e AWS_DEFAULT_REGION \
-e AWS_PAGER \
-e AWS_PROFILE \
-e AWS_ROLE_SESSION_NAME \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
-e AWS_SHARED_CREDENTIALS_FILE \
-e AWS_STS_REGIONAL_ENDPOINTS \
amazon/aws-cli s3 ls
Note that for advanced use cases you might need to allow rw (read-write) access, in which case you would omit the ro (read-only) limitation when mounting the .aws volume in -v $HOME/.aws:/root/.aws:ro.
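As a quick sanity check that the credentials actually reach the container, you can run an STS call through the same image (sts get-caller-identity is a standard AWS CLI command; the mount and pass-through variables match the one-liner above):
$ docker run -v $HOME/.aws:/root/.aws:ro \
    -e AWS_PROFILE -e AWS_SESSION_TOKEN \
    -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
    amazon/aws-cli sts get-caller-identity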
Other answers
The best way is to use an IAM Role and not deal with credentials at all. (See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
Credentials can be retrieved from http://169.254.169.254..... Since this is a link-local address, it can only be reached from within the EC2 instance itself.
All modern AWS client libraries "know" how to fetch, refresh, and use credentials from there, so in most cases you don't even need to know about it. Just run the EC2 instance with the correct IAM role and you're good to go.
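To see what the SDKs see, you can query the instance metadata service from the instance (the role name below is hypothetical, and on instances enforcing IMDSv2 you would additionally need a session token):
# List the IAM role attached to this instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Fetch the temporary credentials for that role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-ec2-role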
As an option, you can pass them as environment variables at run time (i.e. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage).
You can access these environment variables by running printenv in the container's terminal.
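For instance, to confirm the variables are visible inside the container (the image name and key values are placeholders):
docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage printenv | grep ^AWS_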
If someone still faces the same issue after following the instructions in the accepted answer, make sure you are not passing environment variables from two different sources. In my case I was passing environment variables to docker run both via a file and via command-line parameters, which caused the variables passed as parameters to have no effect.
So the following command did not work for me:
docker run --env-file ./env.list -e AWS_ACCESS_KEY_ID=ABCD -e AWS_SECRET_ACCESS_KEY=PQRST IMAGE_NAME:v1.0.1
Moving the AWS credentials into the aforementioned env.list file fixed it.
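For reference, the env.list file is a plain KEY=value file, one variable per line; in this case it would contain something like (values are placeholders matching the command above):
AWS_ACCESS_KEY_ID=ABCD
AWS_SECRET_ACCESS_KEY=PQRST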
Another approach is to pass the keys from the host machine to the docker container. You can add the following lines to your docker-compose file:
services:
  web:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
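With this setup, the variables just need to be exported in the shell that runs compose, e.g.:
export AWS_ACCESS_KEY_ID=xyz
export AWS_SECRET_ACCESS_KEY=aaa
export AWS_DEFAULT_REGION=us-east-1
docker-compose up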
A lot has changed in Docker since this question was asked, so here's an attempt at an updated answer.
First, specifically with AWS credentials on containers already running inside the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then add one more plus one to his answer and skip the rest of this.
Once you start running things outside of the cloud, or have a different type of secret, there are two key places where I recommend against storing secrets:
- Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and most importantly, they appear in clear text when you inspect the container.
- In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like tar and the secret can be found from the step where it was first added to the image.
So what other options are there for secrets in Docker containers?
Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, and then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.
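A minimal sketch of that pattern, assuming the secret is only needed to pull a private artifact during the build (the bucket, paths, and credentials file are hypothetical):
# Build stage: the credentials only ever exist in this stage's layers
FROM python:3 AS build
RUN pip install awscli
COPY credentials /root/.aws/credentials
RUN aws s3 cp s3://example-bucket/app.tar.gz /tmp/app.tar.gz \
 && mkdir -p /opt/app && tar -xzf /tmp/app.tar.gz -C /opt/app

# Release stage: only the fetched artifact is copied over, never the credentials
FROM python:3
COPY --from=build /opt/app /opt/app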
Option B: Also during the build, if you can use BuildKit, which was released in 18.09, there are currently experimental features that allow injecting a secret as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during the build without worrying about it being pushed to a public registry server. The resulting Dockerfile looks like:
# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And you build it with a command in 18.09 or newer like:
DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read-only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of all of those in every scenario. This does require that you copy your credentials to the docker host, separate from the deploy of the container. (Note, anyone with the ability to run containers on that host can view your credential, since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)
For docker run, this looks like:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
Or for a compose file, you'd have:
version: '3'
services:
  app:
    image: your_image
    volumes:
      - $HOME/.aws/credentials:/home/app/.aws/credentials:ro
Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container, however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has this access to the API. From compose, this secret injection looks like:
version: '3.7'
secrets:
  aws_creds:
    external: true
services:
  app:
    image: your_image
    secrets:
      - source: aws_creds
        target: /home/user/.aws/credentials
        uid: '1000'
        gid: '1000'
        mode: 0700
You turn on swarm mode with docker swarm init for a single node, then follow the instructions to add additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.
I often manage my secrets with a script from https://github.com/sudo-bmitch/docker-config-update.
Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give them the ability to request those time limited secrets for as long as they can reach the vault server. That reduces the risk if a secret is ever taken out of your network since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
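As a rough sketch of that workflow with Vault's AWS secrets engine (the role name and policy file are hypothetical, and this assumes a reachable, unsealed Vault server you are already authenticated against):
# Enable the AWS secrets engine and give it credentials it can use to mint keys
vault secrets enable aws
vault write aws/config/root access_key=... secret_key=... region=us-east-1
# Define a role that maps to an IAM policy, then request short-lived credentials
vault write aws/roles/my-role credential_type=iam_user policy_document=@policy.json
vault read aws/creds/my-role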