I'm creating an internal, optimized matrix wrapper in Java (using JNI). I don't know whether I need to state all of this, but can you give some tips on optimizing matrices? Here is what I'm going to implement:
A matrix can be represented as four sets of buffers/arrays: one for horizontal access, one for vertical access, one for diagonal access, and a command buffer for calculating matrix elements only when necessary. Here is an illustration.
Matrix signature:

    0 1 2 3
    4 5 6 7
    8 9 1 3
    3 5 2 9

First (horizontal) set:

    horSet[0] = {0,1,2,3}
    horSet[1] = {4,5,6,7}
    horSet[2] = {8,9,1,3}
    horSet[3] = {3,5,2,9}

Second (vertical) set:

    verSet[0] = {0,4,8,3}
    verSet[1] = {1,5,9,5}
    verSet[2] = {2,6,1,2}
    verSet[3] = {3,7,3,9}

Third (optional) diagonal set:

    diagS = {0,5,1,9}  // just in case some calculation needs this

Fourth set (calculation list, in a "one calculation, one data" fashion):

    calc = {0,2,1,3,2,5}
So, as you can see above, when you need to access the elements of a column, the second set provides contiguous access with no jumps. The first set achieves the same thing for horizontal (row) access.
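A minimal sketch of this dual layout (the class name `DualMatrix` is hypothetical; `horSet`/`verSet` follow the naming above). Every element is stored twice, once row-major and once column-major, so both row reads and column reads are contiguous:

```java
// Hypothetical sketch: each element lives in both a row-major copy (horSet)
// and a column-major copy (verSet), kept in sync on every write.
public class DualMatrix {
    final int n;
    final float[][] horSet; // horSet[r] is row r, contiguous in memory
    final float[][] verSet; // verSet[c] is column c, contiguous in memory

    DualMatrix(int n) {
        this.n = n;
        horSet = new float[n][n];
        verSet = new float[n][n];
    }

    void set(int r, int c, float v) {
        horSet[r][c] = v; // write-through to both copies
        verSet[c][r] = v;
    }

    float[] row(int r)    { return horSet[r]; } // contiguous row access
    float[] column(int c) { return verSet[c]; } // contiguous column access
}
```

The write-through `set` is what doubles both the memory requirement and the initialization cost mentioned below.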
This should make some things easier, and some things more complicated:
Easier:

** Matrix transposition: just swapping the pointers `horSet[x]` and `verSet[x]` is enough.

** Matrix * matrix multiplication: one matrix supplies its horizontal set and the other supplies its vertical set. The dot product of these should be highly parallelizable for intrinsics/multithreading. If the multiplication order is reversed, horizontal and vertical are switched.

** Matrix * vector multiplication: same as above; a vector can be taken as horizontal or vertical freely.

Harder:

** Doubling the memory requirement is bad for many cases.

** Initializing a matrix takes longer.

** When a matrix is multiplied from the left, it needs a vertical-->horizontal update if it's going to be multiplied from the right afterwards (and the same in the opposite case). (If a transposition is taken in between, this does not count.)

Neutral:

** The same matrix can be multiplied with two other matrices to get two different results, e.g. A=A*B (saved in the horizontal sets) and A=C*A (saved in the vertical sets); then A=A*A gives A*B*C*A (in horizontal) and C*A*A*B (in vertical) without copying A.

** If a matrix is always multiplied from the left, or always from the right, no access or multiplication needs an update, and everything stays contiguous in RAM.

** Using only horizontals before transposing, and only verticals after, should not break any rules.
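The two "easier" operations above can be sketched like this (a sequential Java sketch, not your JNI kernel; the `DualOps` wrapper and the three-dimensional return type of `transpose` are illustration choices):

```java
// Sketch of the two "easier" operations under the dual layout.
public class DualOps {
    // Transpose: swap the two set references; no element is moved.
    // Returns the new {horSet, verSet} pair.
    static float[][][] transpose(float[][] horSet, float[][] verSet) {
        return new float[][][] { verSet, horSet };
    }

    // C = A * B: row i of A (contiguous) dotted with column j of B (contiguous).
    static float[][] multiply(float[][] aHor, float[][] bVer) {
        int n = aHor.length;
        float[][] c = new float[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                float s = 0f;
                float[] row = aHor[i], col = bVer[j];
                for (int k = 0; k < n; k++)
                    s += row[k] * col[k]; // both streams read sequentially
                c[i][j] = s;
            }
        return c;
    }
}
```

Note that `multiply` produces `c` in only one layout; syncing the result back into the other set is exactly the update cost listed under "Harder".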
The main purpose is to have a matrix whose dimensions are multiples of 8 and apply AVX intrinsics with multiple threads (each thread operating on multiple elements at the same time).
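For the threading side, the dual layout makes each output row an independent task, so rows can be farmed out to threads. A sketch using a parallel stream (plain Java standing in for the threaded JNI/AVX kernel; the AVX part itself would live on the native side):

```java
import java.util.stream.IntStream;

// Sketch: C = A * B with A row-major (horSet) and B column-major (verSet).
// Each output row is computed independently, so rows are split across threads.
public class ParallelMultiply {
    static float[][] multiply(float[][] aHor, float[][] bVer) {
        int n = aHor.length;
        float[][] c = new float[n][n];
        IntStream.range(0, n).parallel().forEach(i -> {
            float[] row = aHor[i]; // contiguous row of A
            for (int j = 0; j < n; j++) {
                float[] col = bVer[j]; // contiguous column of B
                float s = 0f;
                for (int k = 0; k < n; k++)
                    s += row[k] * col[k];
                c[i][j] = s; // each thread writes only its own rows: no races
            }
        });
        return c;
    }
}
```

Because each thread writes disjoint rows of `c`, no synchronization is needed; the bandwidth ceiling you measured will still apply, since every thread streams from the same DRAM.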
So far I have only achieved a vectorized dot product. I will build on this, so if you know this kind of programming, please give me some direction.
The dot product I wrote (with intrinsic functions) is 6 times faster than the loop-unrolled version (which is itself twice as fast as multiplying one element at a time), and it becomes memory-bandwidth limited when multithreading is enabled (with 8 threads it uses almost 20 GB/s, which is close to my DDR3 limit). I already tried OpenCL; it is slow on the CPU but great on the GPU.
Thanks.
Edit: How would a "block matrix" buffer fit into this? When multiplying large matrices, small tiles are multiplied in a special way so that the cache reduces accesses to main memory. But this would require even more updates between matrix multiplications, between the vertical, horizontal, and diagonal sets and this block set.
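For reference, cache blocking itself is independent of the dual layout: both operands here are kept row-major and the tiling alone improves locality. A sketch (the block size `B = 8` is an arbitrary example value, matching the multiple-of-8 dimension requirement):

```java
// Sketch of cache blocking (tiling): multiply in B x B tiles so each tile
// of the operands stays in cache while it is reused.
public class BlockedMultiply {
    static final int B = 8; // example tile size; tune to cache in practice

    // C = A * Bm, with n assumed to be a multiple of B.
    static float[][] multiply(float[][] a, float[][] b) {
        int n = a.length;
        float[][] c = new float[n][n];
        for (int ii = 0; ii < n; ii += B)
            for (int kk = 0; kk < n; kk += B)
                for (int jj = 0; jj < n; jj += B)
                    // one B x B tile pair; both tiles fit in cache
                    for (int i = ii; i < ii + B; i++)
                        for (int k = kk; k < kk + B; k++) {
                            float aik = a[i][k];
                            for (int j = jj; j < jj + B; j++)
                                c[i][j] += aik * b[k][j];
                        }
        return c;
    }
}
```

Since this ikj-ordered kernel reads `b` row-by-row anyway, a blocked multiply might need only the horizontal sets of both operands, which would reduce (not increase) the cross-set update pressure you are worried about.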