How to find out whether TensorFlow uses CUDA and cuDNN?

I am using Ubuntu 16.04. Here is the TensorFlow information:

    >>> pip show tensorflow-gpu
    Name: tensorflow-gpu
    Version: 1.2.0
    Summary: TensorFlow helps the tensors flow
    Home-page: http://tensorflow.org/
    Author: Google Inc.
    Author-email: opensource@google.com
    License: Apache 2.0
    Location: /home/xxxx/anaconda3/envs/tensorflow/lib/python3.5/site-packages
    Requires: markdown, backports.weakref, wheel, bleach, html5lib, protobuf, numpy, six, werkzeug

Information about CUDA:

    $ nvcc --version
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2015 NVIDIA Corporation
    Built on Tue_Aug_11_14:27:32_CDT_2015
    Cuda compilation tools, release 7.5, V7.5.17

When I import TensorFlow in Python from an Ubuntu terminal, I get the following library-loading information:

    >>> import tensorflow
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally

When I run the Python program from the terminal, I get additional information:

    2017-06-20 16:08:18.075709: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    2017-06-20 16:08:18.075733: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    2017-06-20 16:08:18.075740: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    2017-06-20 16:08:18.075744: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
    2017-06-20 16:08:18.075750: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
    2017-06-20 16:08:18.260629: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2017-06-20 16:08:18.261462: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
    name: Quadro K620M
    major: 5 minor: 0 memoryClockRate (GHz) 1.124
    pciBusID 0000:08:00.0
    Total memory: 1.96GiB
    Free memory: 1.58GiB
    2017-06-20 16:08:18.261514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
    2017-06-20 16:08:18.261524: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y
    2017-06-20 16:08:18.261550: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Quadro K620M, pci bus id: 0000:08:00.0)
    2

How do I know whether TensorFlow is using CUDA and cuDNN? What other information do I need to provide?

2 answers

You can check with nvidia-smi whether the GPU is being used by the python/tensorflow process. If no process is using the GPU, TensorFlow is not using CUDA and cuDNN.
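The check above can be scripted. A minimal sketch, assuming `nvidia-smi` is on the PATH (the function name `gpu_in_use` is my own, not from any library):

```python
import subprocess

def gpu_in_use():
    """Return True if nvidia-smi reports at least one compute process on the GPU."""
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-compute-apps=pid,used_memory",
             "--format=csv,noheader"],
            universal_newlines=True)
    except (OSError, subprocess.CalledProcessError):
        # nvidia-smi is missing or the driver is not loaded:
        # no NVIDIA GPU is in use as far as we can tell.
        return False
    # One CSV line per process currently holding GPU memory
    return any(line.strip() for line in out.splitlines())

print(gpu_in_use())
```

Run this while your TensorFlow program is training: if it prints `True` and the listed PID matches your Python process, TensorFlow is using the GPU.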


I had a similar question on Windows: I wanted the GPU to be used and could not figure out how to install the nvidia-smi utility.

The most convincing way I've found to check whether it is using the GPU is to run the tutorial:

https://www.tensorflow.org/tutorials/layers

The main change required:

    # Create the Estimator
    config = tf.ConfigProto(log_device_placement=True)
    config.gpu_options.allow_growth = True
    run_config = tf.estimator.RunConfig().replace(session_config=config)
    mnist_classifier = tf.estimator.Estimator(
        model_fn=cnn_model_fn,
        model_dir="./mnist_convnet_model2",
        config=run_config)

The log then shows where each operation is placed, e.g. gpu:0 (you should see this in the console).

allow_growth stops the CUDA crashes I was getting on my machine, which were caused by TensorFlow immediately allocating all the GPU memory. It took quite a while to work out how to apply this through the Estimator; the docs could be improved a little for new users, I feel!
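Outside the Estimator API, the same two settings can be verified with a plain session. A sketch using the TensorFlow 1.x API (it needs a TF 1.x install with GPU support, so treat it as a configuration illustration rather than something to copy verbatim):

```python
import tensorflow as tf  # TensorFlow 1.x API

# A tiny graph; with log_device_placement=True, TensorFlow prints
# which device (e.g. /gpu:0) each of these ops is assigned to.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name="a")
b = tf.constant([[1.0, 1.0], [0.0, 1.0]], name="b")
c = tf.matmul(a, b, name="c")

config = tf.ConfigProto(log_device_placement=True)
config.gpu_options.allow_growth = True  # grab GPU memory on demand, not all at once

with tf.Session(config=config) as sess:
    print(sess.run(c))
```

If the console shows the ops mapped to /gpu:0 rather than /cpu:0, TensorFlow is using CUDA.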

As soon as I ran it, it not only ran noticeably faster than the CPU-only version, but I could also see GPU usage at 70-80% in Task Manager!


Source: https://habr.com/ru/post/1269045/
