I have installed tensorflow on my ubuntu 16.04 using ubuntu's built-in apt cuda installation.
Now my question is, how can I test whether tensorflow is really using the gpu? I have a gtx 960m gpu. When I import tensorflow, this is the output:
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
Is this output enough to check whether tensorflow is using the gpu?
This is the line I have been using to list the devices available to tf.Session directly from bash:
python -c "import os; os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'; import tensorflow as tf; sess = tf.Session(); [print(x) for x in sess.list_devices()]; print(tf.__version__);"
It will print the available devices and the tensorflow version, for example:
_DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 268435456, 10588614393916958794)
_DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_GPU:0, XLA_GPU, 17179869184, 12320120782636586575)
_DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 13378821206986992411)
_DeviceAttributes(/job:localhost/replica:0/task:0/device:GPU:0, GPU, 32039954023, 12481654498215526877)
1.14.0
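If you also want to confirm that operations are actually being placed on the GPU (not just that the device is visible), enabling device placement logging is a common check with the TF 1.x Session API used above. A minimal sketch; the op names in the comment are illustrative:

import tensorflow as tf

# log_device_placement=True prints, on stderr, which device each op runs on.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([1.0, 2.0, 3.0], name='a')
    b = tf.constant([4.0, 5.0, 6.0], name='b')
    print(sess.run(a + b))

# If the GPU is being used, you should see placement lines such as
# "add: (Add): /job:localhost/replica:0/task:0/device:GPU:0".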
The following snippet should give you all the devices available to tensorflow.
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
Sample output:
[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 4402277519343584096,
name: "/gpu:0"
device_type: "GPU"
memory_limit: 6772842168
locality {
  bus_id: 1
}
incarnation: 7471795903849088328
physical_device_desc: "device: 0, name: GeForce GTX 1070, pci bus id: 0000:05:00.0"
]
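If all you need is a yes/no answer on GPU availability, a small helper on top of device_lib can filter that list down. A sketch, assuming the same TF 1.x import as above; get_available_gpus is just an illustrative name:

from tensorflow.python.client import device_lib

def get_available_gpus():
    # Keep only devices whose type is GPU, e.g. "/device:GPU:0".
    local_devices = device_lib.list_local_devices()
    return [d.name for d in local_devices if d.device_type == 'GPU']

print(get_available_gpus())  # e.g. ['/device:GPU:0'], or [] on a CPU-only setup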