CUDA - simple matrix addition

It should be very simple, but I could not find a definitive answer:

I need to compute A + B = C, where A and B are two matrices of unknown size (anywhere from 2x2 up to 20,000x20,000 at the largest).

Should I use CUBLAS with the Sgemm function for this?

I need maximum speed, and I thought of the CUBLAS library, which should be well optimized.

+4
3 answers

For any technical computing, you should always use optimized libraries when they are available. An existing library used by hundreds of other people will be better tested and better optimized than anything you write yourself, and the time you do not spend writing (and debugging, and optimizing) that code yourself can instead go into the real, higher-level problem you want to solve, rather than into rediscovering things other people have already implemented. This is simply division of labor: focus on the computational problem you want to solve, and let the people who spend their days professionally writing GPGPU routines do that part for you.

Only when you are sure that existing libraries do not do what you need (perhaps they solve too general a problem, or make assumptions that do not hold in your case) should you roll your own.

I agree with the others that in this particular case the operation is simple enough that doing it yourself is feasible; but if you are going to do anything else with these matrices once you have added them, you are better off using the optimized BLAS routines for whatever platform you are on.

+3

What you want to do would be trivial to implement in CUDA, and it would be bandwidth-bound: every element is read once and written once, with almost no arithmetic in between, so memory throughput, not compute, sets the speed limit.
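To illustrate how little code this takes, here is a minimal sketch of such an element-wise addition kernel. The names (`matAdd`, `dA`, `dB`, `dC`) are illustrative, and the launch configuration is just one reasonable choice; real code would also check CUDA error codes.

```cuda
// Element-wise matrix addition: C = A + B.
// Each matrix is stored as a flat device array of n = rows * cols floats;
// since the operation is per-element, the 2-D shape does not matter here.
__global__ void matAdd(const float *A, const float *B, float *C, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                    // guard: the last block may be partial
        C[i] = A[i] + B[i];
}

// Host-side launch, assuming dA, dB, dC are device pointers
// already filled via cudaMalloc/cudaMemcpy:
//
//   int threads = 256;
//   int blocks  = (n + threads - 1) / threads;  // round up to cover all n
//   matAdd<<<blocks, threads>>>(dA, dB, dC, n);
//   cudaDeviceSynchronize();
```

Each thread handles exactly one element, so the kernel scales from 2x2 up to 20,000x20,000 without changes; only the grid size grows.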

+1

And since CUBLAS 5.0, cublas<t>geam (e.g. cublasSgeam for single precision) can be used for this. It computes the weighted sum of two optionally transposed matrices: C = alpha*op(A) + beta*op(B).
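A minimal sketch of plain addition via cublasSgeam, assuming the handle has been created with cublasCreate and the matrices already live on the device (the wrapper name and pointer names are illustrative):

```cuda
#include <cublas_v2.h>

// C = 1*A + 1*B for m x n column-major matrices on the device.
// With alpha = beta = 1 and no transposes, geam reduces to a plain sum.
void addWithGeam(cublasHandle_t handle, int m, int n,
                 const float *dA, const float *dB, float *dC)
{
    const float alpha = 1.0f;
    const float beta  = 1.0f;
    cublasSgeam(handle,
                CUBLAS_OP_N, CUBLAS_OP_N,  // op(A) = A, op(B) = B
                m, n,
                &alpha, dA, m,             // lda = m (column-major, no padding)
                &beta,  dB, m,             // ldb = m
                dC, m);                    // ldc = m
}
```

Note that cuBLAS uses column-major storage; for a pure element-wise sum the layout does not change the result, but the leading dimensions must still match the actual storage.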

+1

Source: https://habr.com/ru/post/1345118/

