I work in an environment where computing resources are shared, i.e., we have a few server machines each equipped with a few Nvidia Titan X GPUs.

For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the GPU, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having multiple users simultaneously train on the GPU.

The issue with TensorFlow is that, by default, it allocates the full amount of available GPU memory when it launches. Even for a small two-layer neural network, I see that all 12 GB of GPU memory are used up.

Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if we know that this is enough for a given model?


Current answer

# Let TensorFlow grow its GPU allocation on demand instead of grabbing
# all available memory at startup (memory is not released back once grown)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

https://github.com/tensorflow/tensorflow/issues/1578

Other answers

# Allocate at most 60% of GPU memory for this process
from keras.backend.tensorflow_backend import set_session
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.6
# Install the session before any Keras model is built
set_session(tf.Session(config=config))

You can set the fraction of GPU memory to be allocated when you construct a tf.Session, by passing a tf.GPUOptions object as part of the optional config argument:

# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)

sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that the process will use on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set it on a per-GPU basis.
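
While the fraction itself cannot vary per GPU, you can control which GPUs the process sees at all. A minimal sketch, assuming the TF1-era GPUOptions API; visible_device_list restricts the process to the listed device indices (setting the CUDA_VISIBLE_DEVICES environment variable achieves the same thing):

# Restrict this process to GPU 0 only, and cap its memory usage there;
# visible_device_list takes a comma-separated string of device indices
gpu_options = tf.GPUOptions(
    per_process_gpu_memory_fraction=0.333,
    visible_device_list='0')
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))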

If you are using TensorFlow 2, try the following:

# TF2: the same allow_growth option via the v1 compatibility API
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)
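
In TF2 the more idiomatic approach is the tf.config API rather than compat sessions. A sketch assuming TF 2.x: set_memory_growth is the equivalent of allow_growth, and a virtual device with a memory_limit gives the hard ~4 GB cap asked about above (these names moved out of tf.config.experimental in later 2.x releases):

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Equivalent of allow_growth: allocate GPU memory on demand
    tf.config.experimental.set_memory_growth(gpus[0], True)
    # Or cap the process at ~4 GB on the first GPU instead (cannot be
    # combined with memory growth on the same device):
    # tf.config.experimental.set_virtual_device_configuration(
    #     gpus[0],
    #     [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])

Note that this configuration must run before the GPUs are initialized, i.e., before any tensors are placed on them.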

I tried to train a U-Net on the VOC dataset, but because of the huge image sizes I ran out of memory. I tried all the tips above, and even tried batch size == 1, with no improvement. Sometimes the TensorFlow version itself also causes memory issues. Try using

pip install tensorflow-gpu==1.8.0
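
Before downgrading, it may be worth confirming which version you are on and whether TensorFlow sees the GPU at all. A quick check, assuming a TF1-era install:

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)                   # installed TensorFlow version
print(device_lib.list_local_devices())  # lists the CPU/GPU devices TF can use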