I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.

For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the GPU, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having multiple users simultaneously train on the GPU.

The issue with TensorFlow is that, by default, it allocates the full amount of available GPU memory when it is launched. Even for a small two-layer neural network, I see that all 12 GB of the GPU memory is used up.

Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that this is enough for a given model?


Current answer

You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument:

# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)

sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.
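
If you think in gigabytes rather than fractions, a minimal sketch (the desired_gb and total_gb names are illustrative, not part of the API; it assumes the 12 GB Titan X from the question) is to derive the fraction from the desired cap:

# Derive the memory fraction from a desired cap in GB.
# Assumes a 12 GB GPU, as in the question; adjust total_gb for your card.
desired_gb = 4.0
total_gb = 12.0
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=desired_gb / total_gb)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))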

Other answers

Here is an excerpt from the book Deep Learning with TensorFlow:

In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as it is needed by the process. TensorFlow provides two configuration options on the session to control this. The first is the allow_growth option, which attempts to allocate only as much GPU memory as needed based on runtime allocations: it starts out allocating very little memory, and as sessions get run and more GPU memory is needed, we extend the GPU memory region needed by the TensorFlow process.

1) Allow growth (more flexible):

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)

The second method is the per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. Note: no release of memory is needed afterwards; releasing can even worsen memory fragmentation.

2) Allocate fixed memory:

To allocate only 40% of the total memory of each GPU:

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)

Note: this is only useful, though, if you truly want to bind the amount of GPU memory available to the TensorFlow process.
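
The two options can also be combined. A sketch (not from the book, just joining the two snippets above) that caps the process at 40% of each GPU while still letting the allocation grow lazily up to that cap:

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                    # grow lazily ...
config.gpu_options.per_process_gpu_memory_fraction = 0.4  # ... up to a 40% cap
session = tf.Session(config=config)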

Tensorflow 2.0 Beta and (probably) beyond

The API changed again. It can now be found in:

tf.config.experimental.set_memory_growth(
    device,
    enable
)
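
A minimal usage sketch, following the linked guide: enable growth on every physical GPU before any of them has been initialized (the call raises a RuntimeError otherwise):

import tensorflow as tf

# Must run before any tensors are placed on the GPU.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)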

Aliases:

tf.compat.v1.config.experimental.set_memory_growth
tf.compat.v2.config.experimental.set_memory_growth

References:

https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/config/experimental/set_memory_growth
https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth

See also: Tensorflow - Use a GPU: https://www.tensorflow.org/guide/gpu

For Tensorflow 2.0 Alpha, see: this answer

All the answers above refer either to setting the memory to a certain extent in TensorFlow 1.X versions or to allowing memory growth in TensorFlow 2.X.

The method tf.config.experimental.set_memory_growth indeed works for allowing dynamic growth during allocation/preprocessing. Nevertheless, one may want to allocate a specific upper bound of GPU memory from the start.

The logic behind allocating a specific amount of GPU memory is also to prevent OOM errors during training sessions. For example, if one trains while keeping open Chrome tabs or any other process that consumes video memory, calling tf.config.experimental.set_memory_growth(gpu, True) could result in OOM errors being thrown, hence the necessity of allocating more memory from the start in certain cases.

The recommended and correct way to allocate memory per GPU in TensorFlow 2.X is done in the following manner:

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  # Restrict TensorFlow to only allocate 1GB of memory on the first GPU
  try:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
  except RuntimeError as e:
    # Virtual devices must be set before GPUs have been initialized
    print(e)
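
As a sanity check (in the spirit of the same TF GPU guide), the 1 GB virtual device shows up as a logical GPU:

# The capped virtual device appears in the list of logical devices.
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")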

You can use

TF_FORCE_GPU_ALLOW_GROWTH=true

in your environment variables.
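
A sketch of setting it from inside a Python script instead of the shell; the safest place is before importing tensorflow, so the variable is set before any GPU is initialized:

import os
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'  # read by the GPU allocator at startup

import tensorflow as tf  # import after setting the variable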

In the TensorFlow code:

bool GPUBFCAllocator::GetAllowGrowthValue(const GPUOptions& gpu_options) {
  const char* force_allow_growth_string =
      std::getenv("TF_FORCE_GPU_ALLOW_GROWTH");
  if (force_allow_growth_string == nullptr) {
    return gpu_options.allow_growth();
  }
  // ... (the rest of the function parses the "true"/"false" value of the variable)
}

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

https://github.com/tensorflow/tensorflow/issues/1578