How do people deal with persistent storage for Docker containers?

This is the approach I am currently using: build the image, e.g. a PostgreSQL one, and then start the container with
docker run --volumes-from c0dbc34fd631 -d app_name/postgres
IMHO, that has the drawback that I must not ever (accidentally) delete container "c0dbc34fd631".

Another idea would be to mount host volumes "-v" into the container; however, the user ID within the container does not necessarily match the user ID on the host, and permissions might then get messed up.
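The workaround I can think of is to look up the UID the service runs as and chown the host directory up front. A rough sketch, assuming the official postgres:9.4 image (whose postgres user appears to map to UID 999; check your image) and an example host path /srv/pgdata:

# Check which UID/GID the container's postgres user maps to
docker run --rm postgres:9.4 id postgres
# e.g. uid=999(postgres) gid=999(postgres) groups=999(postgres)
# Pre-create the host directory, hand it to that UID, then mount it
sudo mkdir -p /srv/pgdata
sudo chown 999:999 /srv/pgdata
docker run -d -v /srv/pgdata:/var/lib/postgresql/data postgres:9.4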
Note: Instead of --volumes-from 'cryptic_id' you can also use --volumes-from my-data-container, where my-data-container is a name you assigned to a data-only container, e.g. docker run --name my-data-container ... (see the accepted answer).
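For example (a sketch reusing the image name from this question; the volume path assumes PostgreSQL's default data directory):

# Create (but never start) a data-only container that owns the volume
docker create --name my-data-container -v /var/lib/postgresql/data app_name/postgres /bin/true
# Refer to the data container by name instead of a cryptic ID
docker run --volumes-from my-data-container -d app_name/postgres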
As of Docker Compose 1.6, there is now improved support for data volumes in Docker Compose. The following Compose file will create a data volume that persists between restarts (and even removal) of the parent containers:

Here is the blog post announcement: Compose 1.6: New Compose file for defining networks and volumes

Here's an example Compose file:
version: "2"

services:
  db:
    restart: on-failure:10
    image: postgres:9.4
    volumes:
      - "db-data:/var/lib/postgresql/data"
  web:
    restart: on-failure:10
    build: .
    command: gunicorn mypythonapp.wsgi:application -b :8000 --reload
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - db

volumes:
  db-data:
As far as I can understand: this will create a data volume (db-data) that persists between restarts.

If you run docker volume ls you should see your volume listed:
local mypthonapp_db-data
...
You can get more detail about the data volume with:
docker volume inspect mypthonapp_db-data
[
{
"Name": "mypthonapp_db-data",
"Driver": "local",
"Mountpoint": "/mnt/sda1/var/lib/docker/volumes/mypthonapp_db-data/_data"
}
]
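As an aside, if you only need a single field such as the mount point (say, in a backup script), inspect accepts a Go template via --format:

docker volume inspect --format '{{ .Mountpoint }}' mypthonapp_db-data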
Some testing:
# Start the containers
docker-compose up -d
# .. input some data into the database
docker-compose run --rm web python manage.py migrate
docker-compose run --rm web python manage.py createsuperuser
...
# Stop and remove the containers:
docker-compose stop
docker-compose rm -f
# Start it back up again
docker-compose up -d
# Verify the data is still there
...
(it is)
# Stop and remove with the -v (volumes) flag:
docker-compose stop
docker-compose rm -f -v
# Up again ..
docker-compose up -d
# Check the data is still there:
...
(it is).
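The tests above show the data even survives docker-compose rm -f -v, but docker volume rm mypthonapp_db-data would still delete it for good, so an out-of-Docker backup is worth having. A minimal sketch using a throwaway busybox container; the archive name and host path are examples:

# Archive the named volume's contents into the current host directory
docker run --rm \
  -v mypthonapp_db-data:/volume \
  -v "$(pwd)":/backup \
  busybox tar czf /backup/db-data.tar.gz -C /volume .

Restoring is the mirror image: mount the (empty) volume the same way and extract the archive into it with tar xzf.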
Notes:

You can also specify various drivers in the volumes block. For example, you could specify the Flocker driver for db-data:

volumes:
  db-data:
    driver: flocker
As the integration between Docker Swarm and Docker Compose gets better (and Flocker possibly becomes part of the Docker ecosystem; I've heard a rumor that Docker has bought Flocker), I think this approach should become increasingly powerful.

Disclaimer: the approach is promising, and I'm using it successfully in a development environment. I would be apprehensive about using it in production just yet!
Depending on your needs, there are several levels of managing persistent data:

Store it on your host
- Use the flag -v host-path:container-path to persist container directory data to a host directory.
- Backups/restores happen by running a backup/restore container (such as tutumcloud/dockup) mounted to the same directory.

Create a data container and mount its volumes into your application container
- Create a container that exports a data volume, then use --volumes-from to mount that data into your application container.
- Backup/restore the same as the above solution.

Use a Docker volume plugin that backs an external/third-party service
- Docker volume plugins allow your data source to come from anywhere: NFS, AWS (S3, EFS, and EBS), and so on.
- Depending on the plugin/service, you can attach a single container or multiple containers to a single volume.
- Depending on the service, backups/restores may be automated for you.
- While this can be cumbersome to do manually, some orchestration solutions, such as Rancher, have it baked in and simple to use.
- Convoy is the easiest solution for doing this manually (a sketch of the manual route follows this list).
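To make the plugin route concrete, here is a rough sketch; it assumes the Convoy plugin is already installed and running on the host, and uses Docker 1.9-era CLI syntax with example names:

# Create a volume backed by the Convoy plugin driver
docker volume create --driver convoy --name my-data
# Attach a container to it like any other named volume
docker run -d -v my-data:/var/lib/postgresql/data postgres:9.4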