Fast way to invert or dot kxnxn matrices

Is there a fast way to calculate the inverse of a kxnxn array of matrices using numpy (inverting each of the k slices)? In other words, is there a way to vectorize the following code:

>>> from numpy.linalg import inv
>>> from numpy.random import random
>>> a = random(4*2*2).reshape(4, 2, 2)
>>> b = a.copy()
>>> for k in range(len(a)):
...     b[k, :, :] = inv(a[k, :, :])
1 answer

First, about getting the inverse. I looked at both np.linalg.tensorinv and np.linalg.tensorsolve .

I think that, unfortunately, tensorinv will not give you what you want. It requires the array to be "square". This rules out what you want to do, because its definition of square is that np.prod(a.shape[:ind]) == np.prod(a.shape[ind:]) , where ind (the third argument of tensorinv ) picks the axis at which the array is split. This means that for a general array of M NxN matrices you would need, e.g. (for ind = 1), M == N*N, which does not hold in general. (It does happen to hold in your example, where M = 4 and N = 2, but it does not give the correct answer anyway.)
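To make that concrete, here is a minimal sketch; the output shape follows tensorinv 's documented convention that the result has shape a.shape[ind:] + a.shape[:ind] , so even when the size condition happens to hold, you get one inverted 4x4 operator rather than a stack of 2x2 inverses:

    import numpy as np

    a = np.random.random((4, 2, 2))    # M = 4, N = 2; np.prod((4,)) == np.prod((2, 2)) happens to hold
    t = np.linalg.tensorinv(a, ind=1)  # accepted, but inverts a as one flattened 4x4 operator
    print(t.shape)                     # (2, 2, 4) -- not a (4, 2, 2) stack of per-matrix inverses
    b = np.array([np.linalg.inv(m) for m in a])
    print(b.shape)                     # (4, 2, 2)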

Now, perhaps something is possible with tensorsolve . This would, however, involve some heavy construction work on the matrix array a before it is passed as the first argument to tensorsolve . Since we want b to be the solution of the "matrix-matrix equation" a*b = 1 (where 1 is an array of identity matrices), and 1 would have the same shape as a and b , we cannot simply pass the a defined above as the first argument. Rather, it would have to be an array of shape (M, N, N, M, N, N), (M, N, N, N, M, N) or (M, N, N, N, N, M). This is necessary because tensorsolve multiplies with b over these last three axes and also sums over them, so that the result (the second argument of the function) again has shape (M, N, N).
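Incidentally, if your NumPy is 1.8 or newer, np.linalg.inv itself broadcasts over any leading axes, so the inverse part can be done in one call without tensorinv or tensorsolve at all. A minimal sketch:

    import numpy as np

    a = np.random.random((4, 2, 2))
    b = np.linalg.inv(a)  # inverts each 2x2 slice in one call (NumPy >= 1.8)
    # each product a[k] . b[k] should be (close to) the 2x2 identity:
    print(np.allclose(np.einsum('ijk,ikl->ijl', a, b), np.eye(2)))  # True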

Then, second, about dot products (your title suggests they are also part of your question). That is very doable. There are two options.

First: this blog post by James Hensman gives some good suggestions.
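I won't reproduce the post's exact code here, but the broadcast-and-sum idea it builds on looks roughly like this (a sketch, not necessarily the post's implementation):

    import numpy as np

    a = np.random.random((7, 2, 2))
    b = np.random.random((7, 2, 2))
    # Insert singleton axes so the elementwise product pairs a[i, j, k] with b[i, k, l],
    # then sum over the shared k axis: a matrix product for every i at once.
    c = (a[:, :, :, None] * b[:, None, :, :]).sum(axis=2)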

Second: I personally like to use np.einsum for its clarity. For instance:

    import numpy as np

    a = np.random.random((7, 2, 2))
    b = np.random.random((7, 2, 2))
    np.einsum('ijk,ikl->ijl', a, b)

This matrix-multiplies each of the 7 "matrices" in a with the corresponding matrix in b . It seems to be about 2 times slower than the array method from the blog post above, but still about 70 times faster than using a for loop, as in your example. In fact, with larger arrays (e.g. 10,000 of 5x5 matrices) the einsum method even seems to be slightly faster (I am not sure why).
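If you want to reproduce the comparison on your own machine (the exact ratios will depend on hardware and NumPy version), something along these lines works:

    import timeit
    import numpy as np

    a = np.random.random((10000, 5, 5))
    b = np.random.random((10000, 5, 5))

    def loop():
        return np.array([np.dot(x, y) for x, y in zip(a, b)])

    def eins():
        return np.einsum('ijk,ikl->ijl', a, b)

    print(np.allclose(loop(), eins()))     # same result either way
    print(timeit.timeit(loop, number=10))  # per-matrix Python loop
    print(timeit.timeit(eins, number=10))  # single einsum call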

Hope this helps.


Source: https://habr.com/ru/post/1396498/

