Sparse matrix-vector multiplication: GPU or CPU?

What do you think will be faster, and by how much: multiplying a sparse matrix (in CSR format) by a vector on a GPU, or on a CPU with multithreading?
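For concreteness, here is a minimal sketch of the operation being asked about: a CSR sparse matrix-vector product (SpMV), parallelized across rows on the CPU with OpenMP. The function name and the tiny test matrix are illustrative assumptions, not from the question.

```cpp
// Minimal CSR SpMV on the CPU, parallelized with OpenMP (hypothetical sketch).
// row_ptr has n_rows + 1 entries; col_idx/vals hold the nonzeros.
#include <vector>

void spmv_csr_cpu(int n_rows,
                  const std::vector<int>& row_ptr,
                  const std::vector<int>& col_idx,
                  const std::vector<double>& vals,
                  const std::vector<double>& x,
                  std::vector<double>& y) {
    // Each output row is independent, so rows can be split across threads.
    #pragma omp parallel for schedule(dynamic, 256)
    for (int i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += vals[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

int main() {
    // 2x3 matrix [[1,0,2],[0,3,0]] times x = [1,1,1] gives y = [3,3].
    std::vector<int>    row_ptr = {0, 2, 3};
    std::vector<int>    col_idx = {0, 2, 1};
    std::vector<double> vals    = {1.0, 2.0, 3.0};
    std::vector<double> x = {1.0, 1.0, 1.0}, y(2);
    spmv_csr_cpu(2, row_ptr, col_idx, vals, x, y);
}
```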

+3
3 answers

It depends on the size of the matrix and the number of iterations you need to perform. The main reason is that you have to copy the matrix data from CPU memory to GPU memory, and then copy the results back from the GPU to the CPU. If you are only going to perform a single iteration over the matrix, it is almost always better to do it on the CPU. The GPU also suffers from startup (kernel launch) time. So if you have many iterations to run, go with the GPU; otherwise my choice would be the CPU. Matrix size likewise affects performance, because of the cost of copying the data.
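To illustrate the point this answer makes about copy overhead, here is a hypothetical CUDA sketch of the same SpMV: the matrix crosses the bus to the GPU once, and each further iteration is only a kernel launch, so more iterations amortize the one-time transfer cost. The one-thread-per-row mapping, function names, and the square-matrix assumption are all illustrative, not from the question.

```cuda
// Hypothetical CUDA CSR SpMV: one thread computes one output row.
#include <cuda_runtime.h>

__global__ void spmv_csr_kernel(int n_rows, const int* row_ptr,
                                const int* col_idx, const double* vals,
                                const double* x, double* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_rows) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += vals[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

// Host side (error checks omitted). Assumes a square matrix, so x and y
// can be swapped between iterations, as in power iteration.
void spmv_iterate(int n_rows, int nnz, const int* h_row_ptr,
                  const int* h_col_idx, const double* h_vals,
                  const double* h_x, double* h_y, int iterations) {
    int *d_row_ptr, *d_col_idx;
    double *d_vals, *d_x, *d_y;
    cudaMalloc(&d_row_ptr, (n_rows + 1) * sizeof(int));
    cudaMalloc(&d_col_idx, nnz * sizeof(int));
    cudaMalloc(&d_vals, nnz * sizeof(double));
    cudaMalloc(&d_x, n_rows * sizeof(double));
    cudaMalloc(&d_y, n_rows * sizeof(double));

    // One-time host-to-device copy: this is the overhead the answer says
    // a single iteration cannot amortize.
    cudaMemcpy(d_row_ptr, h_row_ptr, (n_rows + 1) * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_col_idx, h_col_idx, nnz * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_vals, h_vals, nnz * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, h_x, n_rows * sizeof(double), cudaMemcpyHostToDevice);

    int threads = 256, blocks = (n_rows + threads - 1) / threads;
    for (int it = 0; it < iterations; ++it) {
        spmv_csr_kernel<<<blocks, threads>>>(n_rows, d_row_ptr, d_col_idx,
                                             d_vals, d_x, d_y);
        double* tmp = d_x; d_x = d_y; d_y = tmp;  // feed y back in as x
    }
    // After the swap, the latest result lives in d_x.
    cudaMemcpy(h_y, d_x, n_rows * sizeof(double), cudaMemcpyDeviceToHost);

    cudaFree(d_row_ptr); cudaFree(d_col_idx); cudaFree(d_vals);
    cudaFree(d_x); cudaFree(d_y);
}
```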

+4

My guess is that there will be no big win from a GPU implementation, since you do not have a homogeneous data structure that lends itself to parallel processing.

0

I think Veda hits the nail on the head. I am by no means an expert on this, but I believe there is overhead in getting the GPU going, and if the computation is small, the GPU's processing gains are lost to that overhead. If, however, you have something like character skinning, where many matrix multiplications are performed, that is much better suited to the GPU. I am currently studying these things for my own project as well.

0
