I developed a training AI for an iPad game, but the prototype is written in Matlab. I need to perform several operations such as the following (capital letters denote matrices, lowercase c is a scalar):
A = B > c; A = B * C; A = B' * C; A = B .* C; A = B - C;
Some of my matrices are fairly large (2601 x 100), so performing these operations on the GPU could significantly improve performance. Is there a linear algebra library better suited to this than BLAS? The hardware-accelerated matrix types I have found only go up to 4x4.