CUDA: sharing data between multiple devices?

The CUDA C Programming Guide says:

... by design, a host thread can only execute device code on one device at any given time. As a result, it takes several host threads to execute device code on multiple devices. In addition, any CUDA resources created through the runtime in one host thread cannot be used by the runtime from another host thread ...

What I wanted to do was have two graphics cards share data on the host (mapped memory),
but the guide seems to say that this is not possible.
Is there a solution for this?

+3

Yes, allocate the host memory with cudaHostAlloc(), passing the cudaHostAllocPortable flag. That makes the pinned allocation usable from every CUDA context, not just the one it was allocated in.
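A minimal sketch of the pattern, assuming at least two CUDA devices and omitting error checks. Note that with the modern runtime a single host thread may switch devices with cudaSetDevice(); under the CUDA 3.x runtime this thread discusses, each device would need its own host thread instead of the loop shown here.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    float *h_buf;
    size_t bytes = 1 << 20;

    // cudaHostAllocPortable makes the pinned allocation valid in
    // every CUDA context, not just the allocating one.
    cudaHostAlloc((void **)&h_buf, bytes, cudaHostAllocPortable);

    // Each device (or, on CUDA 3.x, each per-device host thread)
    // can now use h_buf for fast asynchronous copies.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        float *d_buf;
        cudaMalloc((void **)&d_buf, bytes);
        cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, 0);
        cudaDeviceSynchronize();
        cudaFree(d_buf);
    }

    cudaFreeHost(h_buf);
    return 0;
}
```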

+3

Take a look at GMAC. It is a library that runs on top of CUDA and presents the illusion of a shared address space between the GPU and the CPU, handling data movement for you. Keep in mind that it is a research project, so expect some rough edges.

http://code.google.com/p/adsm/

+1

As noted, pass the cudaHostAllocPortable flag to cudaHostAlloc(). The memory is then portable: every CUDA context sees it as pinned, so any host thread can use it for transfers with its own device. If the memory is also mapped, each host thread must additionally call cudaHostGetDevicePointer() to obtain its own device pointer (the pointer can differ from device to device).

See Section 3.2.5.3 of the CUDA C Programming Guide (v3.2):

For a block of page-locked memory allocated as both portable (see Section 3.2.5.1) and mapped, cudaHostGetDevicePointer() must be called for each device, since the device pointer to the same host block is generally different for each device.
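A sketch of the portable-plus-mapped case, assuming two devices that support mapped pinned memory; error checks are omitted. The cudaDeviceMapHost flag must be set before each device's context is created, which is why it comes first.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    size_t bytes = 1 << 20;
    float *h_buf;
    float *d_ptr[2];

    // Mapped memory requires cudaDeviceMapHost to be set before
    // the context for each device is created.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaSetDeviceFlags(cudaDeviceMapHost);
    }

    cudaSetDevice(0);
    cudaHostAlloc((void **)&h_buf, bytes,
                  cudaHostAllocPortable | cudaHostAllocMapped);

    // The device pointer to the same host block can differ per
    // device, so query it once for each device.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaHostGetDevicePointer((void **)&d_ptr[dev], h_buf, 0);
        printf("device %d sees the buffer at %p\n", dev, (void *)d_ptr[dev]);
    }

    cudaFreeHost(h_buf);
    return 0;
}
```

On the CUDA 3.x runtime, the per-device calls above would live in separate host threads, one per device, rather than in a loop.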

0

When NVIDIA says you cannot drive multiple GPUs from one host thread, they mean that a thread can hold only one device context at a time; controlling several GPUs therefore takes one host thread per device (at least before CUDA 4.0). The "multi-GPU CUDA" SDK samples show this pattern, spawning one host thread per device (for example with OpenMP or MPI), each of which creates its own context on its own GPU.

In other words, each device's memory is private to it, and any data exchange between devices has to go through host memory and explicit copies.

Thus, you cannot access GPU 1's memory from GPU 2 (not even with SLI, which surprised me, since SLI turns out to have nothing to do with CUDA at all). However, you can have GPU 1 write its results to one region of host memory and GPU 2 write to another, and let the host threads controlling each device copy the necessary data over to the right GPU.
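The one-thread-per-GPU staging pattern described above can be sketched as follows, using POSIX threads; buffer sizes, the kernel work, and the final host-side exchange are placeholders, and error handling is omitted.

```cuda
#include <cuda_runtime.h>
#include <pthread.h>
#include <string.h>

#define N (1 << 20)

static float stage0[N], stage1[N];   // per-GPU staging areas on the host

static void *worker(void *arg)
{
    int dev = (int)(size_t)arg;
    float *stage = (dev == 0) ? stage0 : stage1;
    float *d_buf;

    cudaSetDevice(dev);              // this thread owns this device
    cudaMalloc((void **)&d_buf, sizeof(stage0));

    // Move data to this GPU, do the work, then land the results in
    // host memory where the other thread can pick them up.
    cudaMemcpy(d_buf, stage, sizeof(stage0), cudaMemcpyHostToDevice);
    // ... launch kernels on d_buf ...
    cudaMemcpy(stage, d_buf, sizeof(stage0), cudaMemcpyDeviceToHost);

    cudaFree(d_buf);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    for (int dev = 0; dev < 2; ++dev)
        pthread_create(&t[dev], NULL, worker, (void *)(size_t)dev);
    for (int dev = 0; dev < 2; ++dev)
        pthread_join(t[dev], NULL);

    // The host can now shuffle data between the staging areas
    // before another round of GPU work.
    memcpy(stage1, stage0, sizeof(stage0));
    return 0;
}
```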

0

Source: https://habr.com/ru/post/1773998/

