I have an M × N matrix, and for each element M[i][j] I need to compute:
the integer that appears most often in the submatrix from (i-k, j-k) to (i+k, j+k).
So the result is a matrix in which each cell holds the dominant value in the neighbourhood of [i, j] in the original matrix.
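For reference, here is a naive CPU version of the operation I mean (a sketch with hypothetical names; windows are clamped at the matrix edges, and ties go to whichever value reaches the highest count first):

```cpp
#include <algorithm>
#include <unordered_map>
#include <vector>

// windowedMode: for each cell (i, j), return the most frequent value in the
// (2k+1) x (2k+1) window centred at (i, j), clamped at the matrix borders.
std::vector<std::vector<int>> windowedMode(const std::vector<std::vector<int>>& m, int k) {
    int rows = static_cast<int>(m.size());
    int cols = static_cast<int>(m[0].size());
    std::vector<std::vector<int>> out(rows, std::vector<int>(cols));
    for (int i = 0; i < rows; ++i) {
        for (int j = 0; j < cols; ++j) {
            std::unordered_map<int, int> counts;  // value -> occurrences in window
            int best = m[i][j], bestCount = 0;
            for (int r = std::max(0, i - k); r <= std::min(rows - 1, i + k); ++r) {
                for (int c = std::max(0, j - k); c <= std::min(cols - 1, j + k); ++c) {
                    int n = ++counts[m[r][c]];
                    if (n > bestCount) { bestCount = n; best = m[r][c]; }
                }
            }
            out[i][j] = best;
        }
    }
    return out;
}
```

This is O(M · N · k²) work in total, but every output cell is independent of the others, which is why I'm asking about the GPU.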
The matrix can be very large, and I need to perform this operation in a tight loop, so I want to minimize the runtime, ideally with parallel computation.
I know the GPU is good at matrix multiplication, but this doesn't seem to reduce to a simple matrix multiplication (or does it?).
Is it possible to compute each cell in parallel on the GPU? And if so, since I want to implement this on iOS, which API should I use: Metal or OpenGL ES?
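To make the question concrete, here is roughly what I imagine a Metal compute kernel would look like, with one thread per output cell. This is an unverified sketch: the kernel name, buffer layout, and the assumption that values fit in a small known range [0, 256) (so a local counting array can replace a hash map) are all mine, not from any working code.

```metal
#include <metal_stdlib>
using namespace metal;

constant int K [[function_constant(0)]];  // window half-size, set at pipeline creation

kernel void windowedMode(device const int *input  [[buffer(0)]],
                         device int       *output [[buffer(1)]],
                         constant uint2   &size   [[buffer(2)]],  // (cols, rows)
                         uint2 gid [[thread_position_in_grid]])
{
    if (gid.x >= size.x || gid.y >= size.y) return;

    const int MAX_VAL = 256;       // assumed value range
    int counts[MAX_VAL] = {0};
    int best = 0, bestCount = 0;

    // Clamp the window at the matrix borders, then count occurrences.
    int rowLo = max(int(gid.y) - K, 0), rowHi = min(int(gid.y) + K, int(size.y) - 1);
    int colLo = max(int(gid.x) - K, 0), colHi = min(int(gid.x) + K, int(size.x) - 1);
    for (int r = rowLo; r <= rowHi; ++r) {
        for (int c = colLo; c <= colHi; ++c) {
            int v = input[r * size.x + c];
            if (++counts[v] > bestCount) { bestCount = counts[v]; best = v; }
        }
    }
    output[gid.y * size.x + gid.x] = best;
}
```

Each thread does independent read-only gathers and one write, so there is no synchronization between cells; is this the right general shape for Metal, or is there a better-suited approach (e.g. OpenGL ES fragment shaders)?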