I am running a docker container on Amazon EC2. Currently I have added the AWS credentials to the Dockerfile. Could you tell me the best way to do this?


The best way is to use an IAM Role and not deal with credentials at all. (See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)

Credentials can be retrieved from http://169.254.169.254..... Since that is a private IP address, it can only be accessed from the EC2 instance itself.

All modern AWS client libraries "know" how to fetch, refresh, and use credentials from there, so in most cases you don't even need to know about it. Just run the EC2 instance with the correct IAM role and you're good to go.
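For illustration, the lookup order those libraries follow can be sketched roughly like this (a simplified sketch of my own; real SDKs consult more sources, such as config files and container credential endpoints):

```python
# Rough sketch of the default credential lookup order AWS SDKs use
# (simplified; real SDKs also check config files, SSO, container
# endpoints, etc.; the instance metadata service is the fallback).
def resolve_credential_source(environ, has_credentials_file, on_ec2):
    if "AWS_ACCESS_KEY_ID" in environ and "AWS_SECRET_ACCESS_KEY" in environ:
        return "environment"              # explicit env vars win
    if has_credentials_file:
        return "shared-credentials-file"  # ~/.aws/credentials
    if on_ec2:
        return "instance-metadata"        # http://169.254.169.254
    return None

print(resolve_credential_source({}, False, True))  # instance-metadata
```

With an IAM role attached, a container that has neither env vars nor a credentials file still ends up with working credentials via the last branch.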

As an option, you can pass them at runtime as environment variables (i.e. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage)

You can access these environment variables by running printenv in the terminal.


Another approach is to pass the keys from the host machine to the docker container. You can add the following lines to your docker-compose file.

services:
  web:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}

Yet another approach is to create a temporary read-only volume in docker-compose.yaml. The AWS CLI and SDKs (such as boto3 or the AWS SDK for Java) look for the default profile in the ~/.aws/credentials file.

If you want to use another profile, you also need to export the AWS_PROFILE variable before running the docker-compose command.

export AWS_PROFILE=some_other_profile_name

version: '3'

services:
  service-name:
    image: docker-image-name:latest
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
    volumes:
      - ~/.aws/:/root/.aws:ro

In this example I am using the root user inside docker. If you are using another user, just change /root/.aws to that user's home directory.

:ro - a read-only docker volume

This is very helpful when you have multiple profiles in the ~/.aws/credentials file and you are also using MFA. It also helps if you want to test a docker container locally before deploying it to ECS, where you have IAM roles, but locally you do not.
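If you want to check what the CLI/SDK will read from that mounted file, the INI format is easy to inspect yourself; a minimal sketch with Python's configparser (the profile names and keys below are invented sample data):

```python
import configparser

# ~/.aws/credentials is INI-formatted; each [section] is a named profile.
# The file content below is invented sample data.
sample = """\
[default]
aws_access_key_id = AKIA_EXAMPLE_DEFAULT
aws_secret_access_key = default_secret

[some_other_profile_name]
aws_access_key_id = AKIA_EXAMPLE_OTHER
aws_secret_access_key = other_secret
"""

config = configparser.ConfigParser()
config.read_string(sample)

# AWS_PROFILE selects which section the CLI/SDKs read.
profile = "some_other_profile_name"
print(config[profile]["aws_access_key_id"])  # AKIA_EXAMPLE_OTHER
```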


A lot has changed in Docker since this question was asked, so here's an attempt at an updated answer.

First, specifically with AWS credentials on containers already running inside the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then add one more plus one to his answer and skip the rest of this.


Once you start running things outside of the cloud, or have a different type of secret, there are two key places that I recommend against storing secrets:

Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and most importantly, they appear in clear text when you inspect the container.

In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like tar and the secret can be found from the step where it was first added to the image.
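The environment-variable point is easy to demonstrate: anything in a process's environment is inherited by every child it spawns, so one careless subprocess or log line can leak it. A small sketch (the variable name is made up):

```python
import os
import subprocess
import sys

# Any secret placed in the environment is inherited by every child
# process the container runs, which is why env vars are a risky place
# for long-lived secrets.
os.environ["FAKE_AWS_SECRET"] = "not-so-secret"

child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['FAKE_AWS_SECRET'])"],
    capture_output=True,
    text=True,
)
print(child.stdout.strip())  # not-so-secret
```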


So what other options are there for secrets in Docker containers?

Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, and then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.

Option B: Also during the build, if you can use BuildKit, which was released in 18.09, there are currently experimental features to allow injecting a secret as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during the build without worrying about it being pushed to a public registry server. The resulting Dockerfile looks like:

# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...

And you build that with a command on 18.09 or newer like:

DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .

Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it's no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of that in every scenario. This does require that you copy your credentials on the docker host, separate from the deploy of the container. (Note, anyone with the ability to run containers on that host can view your credential since access to the docker API is root on the host and root can view the files of any user. If you don't trust users with root on the host, then don't give them docker API access.)

For docker run, this looks like:

docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image

Or for a compose file, you'd have:

version: '3'
services:
  app:
    image: your_image
    volumes:
    - $HOME/.aws/credentials:/home/app/.aws/credentials:ro

Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that's better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container, however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has this access to the API. From compose, this secret injection looks like:

version: '3.7'

secrets:
  aws_creds:
    external: true

services:
  app:
    image: your_image
    secrets:
    - source: aws_creds
      target: /home/user/.aws/credentials
      uid: '1000'
      gid: '1000'
      mode: 0700

You turn on swarm mode with docker swarm init for a single node, then follow the instructions for adding additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.

I often manage my secrets with scripts from https://github.com/sudo-bmitch/docker-config-update

Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give them the ability to request those time limited secrets for as long as they can reach the vault server. That reduces the risk if a secret is ever taken out of your network since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
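The expiry idea can be sketched in a few lines (a toy class of my own; Vault's real leases are managed server-side over its HTTP API, this only illustrates the concept):

```python
import time

# Toy illustration of a time-limited secret, in the spirit of Vault
# leases; the class name, values, and TTLs are invented for the example.
class LeasedSecret:
    def __init__(self, value, ttl_seconds):
        self.value = value
        self.expires_at = time.monotonic() + ttl_seconds

    def get(self):
        # Once the lease has expired, the secret is no longer usable.
        if time.monotonic() >= self.expires_at:
            raise RuntimeError("secret lease expired")
        return self.value

secret = LeasedSecret("AKIA_FAKE_KEY", ttl_seconds=0.05)
print(secret.get())  # usable while the lease is live
time.sleep(0.1)      # after the TTL, get() raises RuntimeError
```

A stolen value of this kind stops being useful shortly after it leaves your network, which is the risk reduction described above.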


You can create ~/aws_env_creds containing:

touch ~/aws_env_creds
chmod 600 ~/aws_env_creds
vi ~/aws_env_creds

Add these values (replace with your own keys):

AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY
AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C

Save the file (in vi: press Esc, then type :wq and press Enter).

Run and test the container:

my_service:
  build: .
  image: my_image
  env_file:
    - ~/aws_env_creds
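For reference, that env file is plain KEY=VALUE lines; here's a minimal sketch of parsing that format (my own function, not docker's actual parser):

```python
# Minimal sketch of parsing a KEY=VALUE env file such as ~/aws_env_creds;
# this mimics the format docker's env_file/--env-file options expect,
# not docker's actual implementation.
def parse_env_file(text):
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value
    return env

sample = (
    "AWS_ACCESS_KEY_ID=AK_FAKE_KEY_88RD3PNY\n"
    "AWS_SECRET_ACCESS_KEY=BividQsWW_FAKE_KEY_MuB5VAAsQNJtSxQQyDY2C\n"
)
print(parse_env_file(sample)["AWS_ACCESS_KEY_ID"])  # AK_FAKE_KEY_88RD3PNY
```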

The following one-liner works for me even when my credentials are set up by aws-okta or saml2aws:

$ docker run -v$HOME/.aws:/root/.aws:ro \
            -e AWS_ACCESS_KEY_ID \
            -e AWS_CA_BUNDLE \
            -e AWS_CLI_FILE_ENCODING \
            -e AWS_CONFIG_FILE \
            -e AWS_DEFAULT_OUTPUT \
            -e AWS_DEFAULT_REGION \
            -e AWS_PAGER \
            -e AWS_PROFILE \
            -e AWS_ROLE_SESSION_NAME \
            -e AWS_SECRET_ACCESS_KEY \
            -e AWS_SESSION_TOKEN \
            -e AWS_SHARED_CREDENTIALS_FILE \
            -e AWS_STS_REGIONAL_ENDPOINTS \
            amazon/aws-cli s3 ls 

Note that for advanced use cases you might need to allow rw (read-write) permissions, so omit the ro (read-only) limitation when mounting the .aws volume in -v$HOME/.aws:/root/.aws:ro
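Also note that the bare -e NAME flags (with no =value) only forward a variable that is actually set in the host environment; a simplified sketch of that selection logic (mine, not docker's code):

```python
# `docker run -e NAME` with no value copies NAME from the host
# environment only if it is set there; host_env below is a stand-in
# for os.environ, with invented sample values.
def forwarded_env(names, host_env):
    return {name: host_env[name] for name in names if name in host_env}

host_env = {"AWS_PROFILE": "mfa", "AWS_SESSION_TOKEN": "FAKE_TOKEN"}
names = ["AWS_PROFILE", "AWS_SESSION_TOKEN", "AWS_PAGER"]
print(sorted(forwarded_env(names, host_env)))  # ['AWS_PROFILE', 'AWS_SESSION_TOKEN']
```

That's why the long list of -e flags above is harmless: unset variables are simply not passed into the container.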


If someone still faces the same issue after following the instructions mentioned in the accepted answer, then make sure you are not passing environment variables from two different sources. In my case I was passing environment variables to docker run via both a file and parameters, which caused the variables passed as parameters to show no effect.

So the following command did not work for me:

docker run --env-file ./env.list -e AWS_ACCESS_KEY_ID=ABCD -e AWS_SECRET_ACCESS_KEY=PQRST IMAGE_NAME:v1.0.1

Moving the aws credentials into the mentioned env.list file helped.


Volume mounting is noted in this thread, but as of docker-compose v3.2+ you can bind mount.

For example, if you have a file named .aws_creds in the root of your project:

In the service for your compose file, do this for the volumes:

volumes:
  # normal volume mount, already shown in thread
  - ./.aws_creds:/root/.aws/credentials
  # way 2; note this requires docker-compose v3.2+
  - type: bind
    source: .aws_creds              # from local
    target: /root/.aws/credentials  # to the container location

Using this idea, you can publicly store your docker images on docker-hub because your aws credentials will not physically be in the image… To have them associated, you must have the correct local directory structure wherever the container is started (i.e. when pulling from Git).


Building on some of the previous answers, I put together my own setup as follows. My project structure:

├── Dockerfile
├── code
│   └── main.py
├── credentials
├── docker-compose.yml
└── requirements.txt

My docker-compose.yml file:

version: "3"

services:
  app:
    build:
      context: .
    volumes:
      - ./credentials:/root/.aws/credentials
      - ./code:/home/app

My Dockerfile:

FROM python:3.8-alpine

RUN pip3 --no-cache-dir install --upgrade awscli

RUN mkdir -p /home/app
WORKDIR /home/app

CMD python main.py

For a PHP apache docker, the following command works:

docker run --rm -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html -v ~/.aws:/.aws --env AWS_PROFILE=mfa php:7.2-apache