I'm having trouble getting TensorFlow to efficiently use the NVIDIA GeForce GTX 1080 GPU on my system. I reduced my code to the very simple version shown below: I only perform the session.run() operation, which should use the GPU, and the data is retrieved once and reused inside the loop, so this code should only be using the GPU.
# Fetch one batch of validation data once, then reuse it for every run() call.
input_training_data = self.val_data[:batch_size, :]
input_training_label = self.val_label[:batch_size, :]
feed_dict = self.get_feed_dict(input_training_data, input_training_label)

# Only the forward pass (cost) is evaluated; no new data is fed per iteration.
for i in range(1000):
    acc = sess.run(cost, feed_dict)
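For context, here is a minimal self-contained sketch of the kind of setup the snippet above runs inside (TF 1.x). The small two-layer network, the random stand-in data, and the dimensions are placeholders for illustration only; my real model and self.val_data are larger, but the loop structure is the same.

import numpy as np
import tensorflow as tf

batch_size = 32
feature_dim = 1024
num_classes = 10

# Stand-in "validation" data; in my real code this is loaded once into self.val_data.
val_data = np.random.rand(batch_size, feature_dim).astype(np.float32)
val_label = np.eye(num_classes)[np.random.randint(0, num_classes, batch_size)].astype(np.float32)

x = tf.placeholder(tf.float32, [None, feature_dim])
y = tf.placeholder(tf.float32, [None, num_classes])

# Small stand-in network; the real one is larger.
hidden = tf.layers.dense(x, 256, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, num_classes)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed_dict = {x: val_data, y: val_label}
    # Same pattern as above: feed_dict is built once and only cost is evaluated.
    for i in range(1000):
        acc = sess.run(cost, feed_dict)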
I noticed that for batch_size = 16 I get mostly stable GPU usage of about 8%. When I increase batch_size to 32, the maximum GPU usage rises to 9-12%, but usage stays mostly at 0%, occasionally jumping to 15-25% and then immediately dropping back to 0%. The same pattern continues for larger batch sizes: basically, any batch size greater than 16 raises the maximum usage, but usage remains mostly at 0% and only spikes occasionally. What am I missing here?