Use shared GPU memory with TensorFlow?

I installed the GPU version of TensorFlow on a Windows 10 computer with a GeForce GTX 980 graphics card.

Admittedly, I know very little about video cards, but according to dxdiag it has:

4060 MB dedicated memory (VRAM), and

8163 MB shared memory,

for a total of about 12224 MB.

However, I noticed that this "shared" memory seems almost useless. When I start training a model, the VRAM fills up, and if the memory demand exceeds those 4 GB, TensorFlow fails with a "resource exhausted" error.

I can, of course, avoid this by choosing a small enough batch size, but I really wonder whether there is a way to use that "extra" 8 GB of RAM, or whether TensorFlow simply requires dedicated memory.

1 answer

Shared memory is an area of main memory reserved for graphics. References:

https://en.wikipedia.org/wiki/Shared_graphics_memory

https://www.makeuseof.com/tag/can-shared-graphics-finally-compete-with-a-dedicated-graphics-card/

https://youtube.com/watch?v=E5WyJY1zwcQ

This type of memory is what integrated graphics typically use, such as the Intel HD series.

This memory is not on your NVIDIA GPU, and CUDA cannot use it. TensorFlow cannot use it when running on the GPU, because CUDA cannot use it, and it also cannot use it when running on the CPU, because it is reserved for graphics.

Even if CUDA could somehow use it, it would not be of much use: system RAM bandwidth is roughly 10x lower than GPU memory bandwidth, and the data would also have to travel to and from the GPU over the slow (and high-latency) PCIe bus.

Bandwidth numbers for reference: GeForce GTX 980 memory: 224 GB/s; desktop DDR4: about 25 GB/s; PCIe 16x: 16 GB/s.

And that does not even account for latency. In practice, running a GPU compute task on data that is too big to fit in GPU memory, so that it has to be transferred over PCIe every time it is accessed, is so slow for most kinds of computation that doing the same calculation on the CPU would be faster.
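To make the bandwidth argument concrete, here is a back-of-the-envelope calculation in plain Python. The bandwidth figures are the peak numbers quoted above; the 1 GB chunk size is an arbitrary illustration, not anything specific to TensorFlow:

```python
# Rough transfer-time comparison for a 1 GB chunk of tensor data.
# Bandwidth figures are the peak numbers quoted above (GB/s).
GPU_VRAM_BW = 224.0   # GeForce GTX 980 memory bandwidth
DDR4_BW = 25.0        # desktop DDR4 system memory
PCIE_16X_BW = 16.0    # PCIe 16x link

chunk_gb = 1.0  # hypothetical 1 GB batch of weights/activations

t_vram = chunk_gb / GPU_VRAM_BW   # read from dedicated VRAM
t_pcie = chunk_gb / PCIE_16X_BW   # shuttle over PCIe from system RAM

print(f"VRAM read: {t_vram * 1000:.1f} ms")
print(f"PCIe copy: {t_pcie * 1000:.1f} ms")
print(f"PCIe is ~{t_pcie / t_vram:.0f}x slower, before counting latency")
```

Every access that spills out of VRAM pays that 14x penalty (plus latency), which is why the CPU often ends up faster than a PCIe-bound GPU.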

So why do you see that kind of memory allocated when you have an NVIDIA card? Good question. A couple of possibilities:

(a) Both the NVIDIA and Intel graphics drivers are active (e.g. when driving displays from both). Uninstall the Intel drivers and/or disable the Intel HD graphics in the BIOS, and the shared memory should disappear.

(b) NVIDIA is using it, e.g. as extra texture memory. It might also not be real memory at all, but merely a memory-mapped area that corresponds to GPU memory. Check the advanced settings of the NVIDIA driver for an option that controls this.

In any case, no, there is nothing there that TensorFlow can use.
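What you can do is manage the 4 GB of dedicated memory you have. A minimal sketch, assuming the TensorFlow 1.x API that was current for this question (in TF 1.x, TensorFlow grabs nearly all VRAM up front by default; these options make it allocate on demand and cap the total):

```python
import tensorflow as tf

config = tf.ConfigProto()
# Allocate GPU memory on demand instead of grabbing it all at startup.
config.gpu_options.allow_growth = True
# Optionally cap TensorFlow at a fraction of the 4 GB of VRAM.
config.gpu_options.per_process_gpu_memory_fraction = 0.9

sess = tf.Session(config=config)
```

This does not add memory, but it gives other programs (and the Windows display driver) room and makes "resource exhausted" errors reflect your model's real footprint; lowering the batch size remains the actual fix when the model does not fit.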


Source: https://habr.com/ru/post/1690938/
