Read already allocated memory / vector in Thrust

I load a simple variable into the GPU memory using Mathematica:

mem = CUDAMemoryLoad[{1, 2, 3}] 

And get the following result:

 CUDAMemory["<135826556>", "Integer32"] 

Now, using this data in the GPU's memory, I want to access it from a separate .cu program (outside of Mathematica) using Thrust.

Is there any way to do this? If so, can someone explain how?

1 answer

No, there is no way to do this. CUDA contexts are private, and the standard APIs provide no mechanism for one process to access memory allocated in another process's context.

During the CUDA 4 release cycle, a new cudaIpc API was introduced. It allows two processes with CUDA contexts running on the same host to export and exchange handles to GPU memory allocations. The API is only supported on Linux hosts with unified addressing support. As far as I know, Mathematica does not currently expose this.
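For illustration, here is a minimal sketch of how the cudaIpc mechanism works and how an opened pointer could be handed to Thrust. This is an assumption-laden example, not Mathematica's method: it shows both sides of the exchange in one file, with the importing side commented out because an IPC handle can only be opened in a *different* process (the handle would normally travel over a pipe, socket, or shared file).

```cuda
// Hypothetical sketch of CUDA IPC + Thrust interop.
// Assumes Linux, unified addressing, CUDA 4.1 or later.
#include <cuda_runtime.h>
#include <thrust/device_ptr.h>
#include <thrust/reduce.h>
#include <cstdio>

int main() {
    // --- Exporting process: allocate and fill device memory ---
    int h_data[3] = {1, 2, 3};
    int *d_data;
    cudaMalloc(&d_data, sizeof(h_data));
    cudaMemcpy(d_data, h_data, sizeof(h_data), cudaMemcpyHostToDevice);

    // Export an IPC handle describing the allocation. The raw bytes of
    // `handle` are what gets sent to the other process.
    cudaIpcMemHandle_t handle;
    cudaIpcGetMemHandle(&handle, d_data);

    // --- Importing process (commented out: opening a handle in the
    // same process that created it is not allowed) ---
    // int *d_remote;
    // cudaIpcOpenMemHandle((void **)&d_remote, handle,
    //                      cudaIpcMemLazyEnablePeerAccess);
    // thrust::device_ptr<int> p(d_remote);       // wrap raw pointer
    // int sum = thrust::reduce(p, p + 3);        // Thrust sees the data
    // cudaIpcCloseMemHandle(d_remote);

    // Within a single process, wrapping the raw pointer for Thrust
    // works the same way:
    thrust::device_ptr<int> p(d_data);
    printf("sum = %d\n", thrust::reduce(p, p + 3));

    cudaFree(d_data);
    return 0;
}
```

The key point for the Thrust side is `thrust::device_ptr`: once you have a valid device pointer, however it was obtained, wrapping it this way lets Thrust algorithms operate on it directly.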


Source: https://habr.com/ru/post/1445877/
