Can I do similar parallel computing on a GPU?

I have an M × N matrix, and for each element M[i][j] I need to calculate:

The integer that appears most often in the submatrix from (i−k, j−k) to (i+k, j+k).

Thus, the result is a matrix in which each cell holds the most frequent value in the window around [i][j] in the original matrix.

The matrix can be very large, and I need to perform this operation in a tight loop, so I want to minimize the runtime with parallel computing.
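For reference, here is a minimal CPU sketch of the operation (the function name `windowMode` and the border handling by clamping the window are my own choices, not from any library). Note that each output cell depends only on its (2k+1) × (2k+1) neighborhood, which is exactly what makes the problem a good fit for one GPU thread per cell:

```cpp
#include <algorithm>
#include <unordered_map>
#include <vector>

// Reference (sequential) implementation of the windowed-mode operation.
// Each output cell depends only on its (2k+1) x (2k+1) neighborhood,
// so on a GPU every cell could be computed by an independent thread.
std::vector<std::vector<int>> windowMode(const std::vector<std::vector<int>>& m, int k) {
    int rows = (int)m.size(), cols = (int)m[0].size();
    std::vector<std::vector<int>> out(rows, std::vector<int>(cols));
    for (int i = 0; i < rows; ++i) {
        for (int j = 0; j < cols; ++j) {
            std::unordered_map<int, int> counts;
            int best = m[i][j], bestCount = 0;
            // Clamp the window to the matrix borders.
            for (int r = std::max(0, i - k); r <= std::min(rows - 1, i + k); ++r) {
                for (int c = std::max(0, j - k); c <= std::min(cols - 1, j + k); ++c) {
                    int n = ++counts[m[r][c]];
                    if (n > bestCount) { bestCount = n; best = m[r][c]; }
                }
            }
            out[i][j] = best;
        }
    }
    return out;
}
```

On the GPU, the two outer loops disappear: each thread gets one (i, j) and runs only the inner window scan.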

I know that the GPU is good at matrix multiplication, but this doesn't seem to reduce to simple matrix multiplication (or does it?).

Is it possible to calculate each cell in parallel on the GPU? And if so, since I want to implement it on iOS, which programming interface should I use: Metal? OpenGL?

1 answer

Yes, you can do this calculation on the GPU.

Metal is designed for both graphics and general-purpose (compute) workloads, so it should fit your needs (here is an article on the topic: http://memkite.com/blog/2014/12/15/data-parallel-programming-with-metal-and-swift-for-iphoneipad-gpu/).

Accelerate may also meet your needs.

I hope this helps.


Source: https://habr.com/ru/post/1671849/
