One possible reason for this view is that the GPU was not originally designed for general-purpose computing. GPU programming is also less conventional and closer to the hardware, and is therefore more likely to be perceived as a hack.
The objection that “you are transforming the problem into a matrix” does not hold up. Whatever problem you solve when writing code, you choose data structures suited to it. On a GPU, matrices and flat arrays are usually the most sensible data structures, so using them is not a hack but simply the natural choice.
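To make that concrete, here is a minimal CUDA sketch (assuming a CUDA-capable device and nvcc; the kernel and sizes are purely illustrative): a SAXPY computation where the flat array layout falls directly out of the problem rather than being forced onto it.

```c
#include <stdio.h>
#include <cuda_runtime.h>

// Each thread handles one array element. The data-parallel layout
// is the natural representation of the problem, not a workaround.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *x, *y;
    cudaMallocManaged(&x, bytes);  // unified memory keeps the sketch short
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch one thread per element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Nothing here is contorted to fit the hardware; the array is the same structure a CPU version would use, just processed by many threads at once.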
However, I believe it is only a matter of time before GPGPU becomes widespread; people just have to get used to the idea. After all, who cares which part of the computer runs their program?