I'm running a Docker container on Amazon EC2. Currently I've added my AWS credentials to the Dockerfile. Could you tell me the best way to do this?
Current answer
If anyone still faces the same issue after following the instructions in the accepted answer, make sure you are not passing environment variables from two different sources. In my case, I was passing environment variables to docker run via both a file and arguments, which caused the variables passed as arguments to have no effect.
So the following command did not work for me:
docker run --env-file ./env.list -e AWS_ACCESS_KEY_ID=ABCD -e AWS_SECRET_ACCESS_KEY=PQRST IMAGE_NAME:v1.0.1
Moving the AWS credentials into the aforementioned env.list file helped.
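For reference, a minimal sketch of the setup that did work, with all variables consolidated into env.list (the values are the same placeholders as in the failing command above):

# env.list
AWS_ACCESS_KEY_ID=ABCD
AWS_SECRET_ACCESS_KEY=PQRST

docker run --env-file ./env.list IMAGE_NAME:v1.0.1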
Other answers
Docker has changed a lot since this question was asked, so here's an updated answer.
First, specifically with AWS credentials on containers already running inside the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then add one more plus one to his answer and skip the rest of this.
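As a rough illustration of why that works (a sketch, not part of the original answer): with an IAM role attached to the EC2 instance, the AWS CLI or SDK inside the container resolves temporary credentials from the instance metadata service on its own, so nothing needs to be baked in or passed:

# assumes an IAM role is attached to the EC2 instance;
# no keys are mounted or passed to the container
docker run --rm amazon/aws-cli sts get-caller-identity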
Once you start running things outside of the cloud, or have a different type of secret, there are two key places that I recommend against storing secrets:
- Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and most importantly, they appear in clear text when you inspect the container (a quick demonstration follows this list).
- In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like tar and the secret can be found from the step where it was first added to the image.
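To make the first point concrete, environment variables defined on a container show up in clear text under docker inspect (the container name and secret value here are made up for illustration):

docker run -d --name demo -e AWS_SECRET_ACCESS_KEY=not-so-secret alpine sleep 300
# prints the full environment, secret included, in clear text:
docker inspect demo --format '{{json .Config.Env}}'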
So what other options are there for secrets in Docker containers?
Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, and then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.
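A minimal sketch of what Option A can look like (the bucket and artifact names, example-bucket and artifact.tar.gz, are hypothetical):

FROM python:3 AS builder
RUN pip install awscli
# the secret only ever exists in this builder stage
COPY credentials /root/.aws/credentials
RUN aws s3 cp s3://example-bucket/artifact.tar.gz /artifact.tar.gz

FROM python:3
# only the build output is copied across; the credentials never enter
# the layers of this release stage, and only this stage gets pushed
COPY --from=builder /artifact.tar.gz /artifact.tar.gz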
Option B: Also during the build, if you can use BuildKit, which was released in 18.09, there are currently experimental features that allow injecting a secret as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during the build without worrying that it will be pushed to a public registry server. The resulting Dockerfile looks like:
# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And you build that with a command in 18.09 or newer like:
DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
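If you want to convince yourself the secret stayed out of the image, one possible check (a sketch, not from the original answer) is:

# the credentials file only exists during that single RUN step,
# so it should be absent from the final image:
docker run --rm your_image ls /root/.aws/credentials   # expect "No such file or directory"
docker history your_image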
Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of that in every scenario. This does require that you copy your credentials on the docker host, separate from the deploy of the container. (Note, anyone with the ability to run containers on that host can view your credential since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)
For docker run, this looks like:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
Or for a compose file, you would have:
version: '3'
services:
  app:
    image: your_image
    volumes:
    - $HOME/.aws/credentials:/home/app/.aws/credentials:ro
Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container, however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has this access to the API. From compose, this secret injection looks like:
version: '3.7'
secrets:
  aws_creds:
    external: true
services:
  app:
    image: your_image
    secrets:
    - source: aws_creds
      target: /home/user/.aws/credentials
      uid: '1000'
      gid: '1000'
      mode: 0700
You turn on swarm mode with docker swarm init for a single node, then follow the directions for adding additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.
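Putting the commands from the paragraph above together, the whole setup is roughly:

docker swarm init
docker secret create aws_creds $HOME/.aws/credentials
docker stack deploy -c docker-compose.yml stack_name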
I often manage my secrets using a script from https://github.com/sudo-bmitch/docker-config-update
Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give them the ability to request those time limited secrets for as long as they can reach the vault server. That reduces the risk if a secret is ever taken out of your network since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
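As a rough sketch of that flow (the role name my-role and policy.json are hypothetical), Vault's AWS secrets engine hands out short-lived IAM credentials on demand:

# enable the AWS secrets engine and define a role once
vault secrets enable aws
vault write aws/roles/my-role \
    credential_type=iam_user \
    policy_document=@policy.json
# each application then requests its own short-lived credentials
vault read aws/creds/my-role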
You can create ~/aws_env_creds containing:
touch ~/aws_env_creds
chmod 600 ~/aws_env_creds
vi ~/aws_env_creds
Add these values (replacing with your own keys):
AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C
Press Esc, then type :wq to save the file and exit vi.
Run and test the container:
my_service:
  build: .
  image: my_image
  env_file:
    - ~/aws_env_creds
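A quick smoke test, assuming the my_service definition above is part of your compose file, to confirm the variables actually arrive inside the container:

docker-compose up -d my_service
docker-compose exec my_service env | grep AWS_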
Based on some of the previous answers, I built my own as follows. My project structure:
├── Dockerfile
├── code
│ └── main.py
├── credentials
├── docker-compose.yml
└── requirements.txt
My docker-compose.yml file:
version: "3"
services:
app:
build:
context: .
volumes:
- ./credentials:/root/.aws/credentials
- ./code:/home/app
My Dockerfile:
FROM python:3.8-alpine
RUN pip3 --no-cache-dir install --upgrade awscli
RUN mkdir -p /home/app
WORKDIR /home/app
CMD python main.py
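Assuming the compose file and Dockerfile above, one way to verify that the mounted credentials are picked up (sts get-caller-identity is just a convenient read-only call; a region may still need to be supplied if your credentials file does not set one):

docker-compose build
docker-compose run --rm -e AWS_DEFAULT_REGION=us-east-1 app aws sts get-caller-identity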
The following one-liner works for me even when my credentials are set up by aws-okta or saml2aws:
$ docker run -v$HOME/.aws:/root/.aws:ro \
-e AWS_ACCESS_KEY_ID \
-e AWS_CA_BUNDLE \
-e AWS_CLI_FILE_ENCODING \
-e AWS_CONFIG_FILE \
-e AWS_DEFAULT_OUTPUT \
-e AWS_DEFAULT_REGION \
-e AWS_PAGER \
-e AWS_PROFILE \
-e AWS_ROLE_SESSION_NAME \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
-e AWS_SHARED_CREDENTIALS_FILE \
-e AWS_STS_REGIONAL_ENDPOINTS \
amazon/aws-cli s3 ls
Note that for advanced use cases you may need rw (read-write) permissions, so omit the ro (read-only) limitation when mounting the .aws volume in -v $HOME/.aws:/root/.aws:ro.
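For example, the same mount without the read-only flag, for cases where the CLI needs to write refreshed session tokens back into $HOME/.aws:

docker run --rm -v "$HOME/.aws:/root/.aws" amazon/aws-cli s3 ls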
For PHP Apache Docker, the following command works:
docker run --rm -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html -v ~/.aws:/.aws --env AWS_PROFILE=mfa php:7.2-apache