I have a plan to use distributed TensorFlow, and I saw that TensorFlow can use GPUs for training and testing. In a cluster environment, each machine could have 0, 1, or more GPUs, and I want to run my TensorFlow graph on as many machines as possible.
I found that when running tf.Session(), TensorFlow gives information about the GPUs in the log messages like below:
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
My question is how do I get information about the currently available GPUs from TensorFlow? I can get the loaded GPU information from the log, but I want to do it in a more sophisticated, programmatic way.
I could also restrict the GPUs intentionally with the CUDA_VISIBLE_DEVICES environment variable, so I don't want a way of getting GPU information from the OS kernel.
In short, I want a function like tf.get_available_gpu() that returns ['/gpu:0', '/gpu:1'] if there are two GPUs available on the machine. How can I implement this?
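You can build such a helper with the device_lib module; a minimal sketch (note that tensorflow.python is TF's private namespace, so this may change between releases, and on older versions listing devices may itself allocate GPU memory):
from tensorflow.python.client import device_lib

def get_available_gpus():
    """Return the names of visible GPU devices, e.g. ['/device:GPU:0']."""
    local_device_protos = device_lib.list_local_devices()
    return [d.name for d in local_device_protos if d.device_type == 'GPU']
Older TF releases report names in the '/gpu:0' form instead of '/device:GPU:0'.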
Check all the pieces with this approach:
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
version = tf.__version__
executing_eagerly = tf.executing_eagerly()
hub_version = hub.__version__
available = tf.config.experimental.list_physical_devices("GPU")
print("Version: ", version)
print("Eager mode: ", executing_eagerly)
print("Hub Version: ", h_version)
print("GPU is", "available" if avai else "NOT AVAILABLE")
As of TensorFlow 2.1, you can use tf.config.list_physical_devices('GPU'):
import tensorflow as tf
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    print("Name:", gpu.name, " Type:", gpu.device_type)
If you have two GPUs installed, it outputs:
Name: /physical_device:GPU:0 Type: GPU
Name: /physical_device:GPU:1 Type: GPU
In TF 2.0, you must add experimental:
gpus = tf.config.experimental.list_physical_devices('GPU')
See:
Guide page
Current API
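If you specifically want the list-of-names shape from the question, a small wrapper over the same API does it (get_available_gpus is a hypothetical name, not part of TensorFlow):
import tensorflow as tf

def get_available_gpus():
    # Logical devices carry fully qualified names such as '/device:GPU:0'
    return [d.name for d in tf.config.list_logical_devices('GPU')]

print(get_available_gpus())  # e.g. ['/device:GPU:0', '/device:GPU:1']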
I'm working with TF-2.1 and torch, so I don't want to pin this auto-selection to any one ML framework. I just use plain nvidia-smi and os to find a vacant GPU.
import os
import subprocess

def auto_gpu_selection(usage_max=0.01, mem_max=0.05):
    """Auto-set CUDA_VISIBLE_DEVICES to the first vacant GPU.

    :param usage_max: max GPU utilization, as a fraction (0.01 -> 1%)
    :param mem_max: max fraction of GPU memory already in use
    :return: None
    """
    os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
    # Drop the 6 header lines of nvidia-smi's table and the trailing line
    log = subprocess.check_output("nvidia-smi", shell=True).decode().split("\n")[6:-1]
    gpu = 0
    # Check up to 8 GPUs, which covers most machines
    for i in range(8):
        idx = i * 3 + 2  # each GPU occupies 3 table rows; the stats are on the 3rd
        if idx > len(log) - 1:
            break
        inf = log[idx].split("|")
        if len(inf) < 3:
            break
        usage = int(inf[3].split("%")[0].strip())
        mem_now = int(inf[2].split("/")[0].strip()[:-3])
        mem_all = int(inf[2].split("/")[1].strip()[:-3])
        # print("GPU-%d : Usage:[%d%%]" % (gpu, usage))
        if usage < 100 * usage_max and mem_now < mem_max * mem_all:
            os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu)
            print("\nAuto choosing vacant GPU-%d : Memory:[%dMiB/%dMiB] , GPU-Util:[%d%%]\n" %
                  (gpu, mem_now, mem_all, usage))
            return
        print("GPU-%d is busy: Memory:[%dMiB/%dMiB] , GPU-Util:[%d%%]" %
              (gpu, mem_now, mem_all, usage))
        gpu += 1
    print("\nNo vacant GPU, use CPU instead\n")
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
If it finds a vacant GPU, it sets CUDA_VISIBLE_DEVICES to that GPU's index (GPUs are enumerated in PCI bus order because of CUDA_DEVICE_ORDER):
GPU-0 is busy: Memory:[5738MiB/11019MiB] , GPU-Util:[60%]
GPU-1 is busy: Memory:[9688MiB/11019MiB] , GPU-Util:[78%]
Auto choosing vacant GPU-2 : Memory:[1MiB/11019MiB] , GPU-Util:[0%]
Otherwise, it sets it to -1 so the CPU is used:
GPU-0 is busy: Memory:[8900MiB/11019MiB] , GPU-Util:[95%]
GPU-1 is busy: Memory:[4674MiB/11019MiB] , GPU-Util:[35%]
GPU-2 is busy: Memory:[9784MiB/11016MiB] , GPU-Util:[74%]
No vacant GPU, use CPU instead
Note: call this function before importing any ML framework that needs the GPU, and it will pick a GPU automatically. It also makes it easy to run several tasks side by side.
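Parsing nvidia-smi's human-readable table is brittle across driver versions; a less fragile sketch uses its CSV query interface instead (pick_vacant_gpu is a hypothetical helper, and the thresholds mirror the defaults above):
import os
import subprocess

def pick_vacant_gpu(usage_max=0.01, mem_max=0.05):
    # Query index, utilization (%), and memory (MiB) as bare CSV values
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True)
    for row in out.strip().splitlines():
        idx, util, used, total = (int(v) for v in row.split(","))
        if util < 100 * usage_max and used < mem_max * total:
            os.environ["CUDA_VISIBLE_DEVICES"] = str(idx)
            return idx
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # no vacant GPU: fall back to CPU
    return None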