Low GPU utilization with TensorFlow

I'm having trouble getting TensorFlow to make efficient use of the NVIDIA GeForce GTX 1080 GPU in my system. I reduced my code to the very simple version shown below: only the sess.run() call, which should run on the GPU, is executed inside the loop, and the data is retrieved once and reused, so this code should be using the GPU almost exclusively.

    input_training_data = self.val_data[:batch_size, :]
    input_training_label = self.val_label[:batch_size, :]
    feed_dict = self.get_feed_dict(input_training_data, input_training_label)

    for i in range(1000):
        acc = sess.run(cost, feed_dict)

I noticed that for batch_size = 16 I get mostly stable GPU usage of about 8%. When I increase batch_size to 32, the maximum GPU usage rises to 9-12%, but usage stays mostly at 0% and only occasionally jumps to 15-25% before immediately dropping back to 0%. This pattern continues for larger batch sizes: basically any batch size greater than 16 increases the maximum usage, but usage remains mostly at 0% and only spikes occasionally. What am I missing here?

1 answer

I had the same problem. In my case the calculations were performed partly on the GPU and partly on the CPU, so there was a lot of communication between the two devices, which lowered GPU utilization. Here is what I read in another thread:

  • Do not build feed_dict dictionaries inside loops; move the data into the graph as TensorFlow variables or constants instead (see the first sketch below)
  • There can be a problem with the float64 data type (some operations fall back to the CPU) → use the float32 data type if possible
  • Use the profiler suggested by Olivier Moindrot to check whether something is explicitly running on your CPU, then try to move everything onto the GPU (see the timeline sketch below)
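
To make the first two points concrete, here is a minimal sketch (not the answerer's original code) of feeding the batch as an in-graph float32 constant instead of a per-step feed_dict. val_data, val_label and batch_size stand in for the question's data, and build_cost() is a hypothetical model-building function:

    import numpy as np
    import tensorflow as tf  # TF 1.x style, matching the question

    # Cast once to float32 and bake the batch into the graph, so the inputs
    # are not copied from host to device on every sess.run() call.
    batch_data = np.asarray(val_data[:batch_size, :], dtype=np.float32)
    batch_label = np.asarray(val_label[:batch_size, :], dtype=np.float32)
    input_data = tf.constant(batch_data)
    input_label = tf.constant(batch_label)

    cost = build_cost(input_data, input_label)  # hypothetical: builds the model on these tensors

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(1000):
            acc = sess.run(cost)  # no feed_dict, so no per-step input transfer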

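The profiler mentioned in the last point is, as far as I know, the Chrome timeline trace; here is a minimal sketch of capturing one in TF 1.x, reusing sess, cost and feed_dict from the question:

    import tensorflow as tf
    from tensorflow.python.client import timeline

    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()
    acc = sess.run(cost, feed_dict=feed_dict,
                   options=run_options, run_metadata=run_metadata)

    # Open timeline.json in chrome://tracing to see which ops ran on /gpu:0
    # and which stayed on /cpu:0.
    tl = timeline.Timeline(run_metadata.step_stats)
    with open('timeline.json', 'w') as f:
        f.write(tl.generate_chrome_trace_format())

A simpler first check is to create the session with tf.ConfigProto(log_device_placement=True), which prints the device each op was assigned to.
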
By the way, a hint regarding your code: your default graph will keep growing on each iteration until you eventually hit an OutOfMemory exception → I close the session every x iterations and reset the default graph ...
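
A minimal sketch of that workaround, under the assumption that something inside the loop keeps adding nodes to the default graph; reset_every and build_graph() are hypothetical names:

    import tensorflow as tf

    reset_every = 100  # the "x" from above; the value is an assumption
    build_graph()      # hypothetical: builds the model's ops and variables
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())

    for i in range(1000):
        # ... training / evaluation step that (perhaps inadvertently)
        # adds nodes to the default graph ...
        if (i + 1) % reset_every == 0:
            sess.close()               # release the old session
            tf.reset_default_graph()   # drop the accumulated graph
            build_graph()              # rebuild the ops and variables
            sess = tf.Session()
            sess.run(tf.global_variables_initializer())
    sess.close()

As an aside, calling tf.get_default_graph().finalize() after building the model makes TensorFlow raise an error as soon as anything tries to add new nodes, which is an easy way to catch this kind of accidental graph growth.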

