Effective replacement of elements in a numpy array

Assuming that we have a large matrix A and the indices of the two matrix elements (c1, r1), (c2, r2) that we want to swap:

    import numpy as np
    A = np.random.rand(1000, 1000)
    c1, r1 = 10, 10
    c2, r2 = 20, 40

The pythonic way to do this:

 A[c1, r1], A[c2, r2] = A[c2, r2], A[c1, r1] 

However, this solution can be slow if you want to make a large number of swaps.
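For illustration, a large batch of swaps done this way is a Python-level loop, which is where the cost comes from (a hypothetical sketch; the `swaps` list is an assumed input, not from the question):

```python
import numpy as np

A = np.random.rand(1000, 1000)
orig = A.copy()  # snapshot, only to check the result below

# hypothetical batch of (c1, r1, c2, r2) swap requests
swaps = [(10, 10, 20, 40), (5, 7, 3, 2), (0, 0, 999, 999)]

# each iteration pays the Python-level indexing overhead separately
for c1, r1, c2, r2 in swaps:
    A[c1, r1], A[c2, r2] = A[c2, r2], A[c1, r1]
```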

Is there a more efficient way to replace two elements in a numpy array?

Thanks in advance.

2 answers

Preliminary answer that doesn't work

You can easily vectorize the swap operation by using arrays of indices (c1, r1, c2, r2) instead of looping over lists of scalar indices.

    c1 = np.array(<all the "c1" values>, dtype=int)
    r1 = np.array(<all the "r1" values>, dtype=int)
    c2 = np.array(<all the "c2" values>, dtype=int)
    r2 = np.array(<all the "r2" values>, dtype=int)
    A[c1, r1], A[c2, r2] = A[c2, r2], A[c1, r1]

Note that this performs all swaps in one pass, which can differ from the iterative version if the order of the swaps matters. For this reason, this is an invalid answer to your question.

E.g., swapping p1 with p2 and then p2 with p3 iteratively differs from performing the swaps (p1, p2) and (p2, p3) in a single pass. In the one-pass case, both p1 and p3 end up with p2's original value, and p2 ends up with whichever of p1's and p3's original values is written to it last by the fancy-index assignments.
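A minimal sketch of the discrepancy, using a 1d array for simplicity (positions 0, 1, 2 stand in for the hypothetical points p1, p2, p3):

```python
import numpy as np

a = np.array([0.0, 1.0, 2.0])  # values at p1, p2, p3

# iterative: swap p1<->p2, then p2<->p3
it = a.copy()
it[0], it[1] = it[1], it[0]
it[1], it[2] = it[2], it[1]
# it is now [1., 2., 0.]

# vectorized one-pass version of the same two swaps
vec = a.copy()
i1 = np.array([0, 1])  # the "p1" side of each swap
i2 = np.array([1, 2])  # the "p2" side of each swap
vec[i1], vec[i2] = vec[i2], vec[i1]
# vec is now [1., 0., 1.] -- a different result:
# p1 and p3 both got p2's original value
```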

However, since speed is what you are after, a vectorized operation (in some form) should be the way to go.


Adding correctness to the above solution

So how can we do a vectorized swap and still get the desired result? We can take a hybrid approach: break the index lists into chunks (as few as possible), where each chunk touches only unique points, thereby preserving the order guarantee. Each chunk is swapped in a vectorized way, and we only iterate over the chunks.

Here is an example of how this might work:

    c1, r1 = np.array([ np.arange(10), np.arange(10) ])
    c2, r2 = np.array([ [2,4,6,8,0,1,3,5,7,9],
                        [9,1,6,8,2,2,2,2,7,0] ])
    A = np.empty((15, 15))

    def get_chunk_boundry(c1, r1, c2, r2):
        a1 = c1 + 1j * r1
        a2 = c2 + 1j * r2
        set1 = set()
        set2 = set()
        for i, (x1, x2) in enumerate(zip(a1, a2)):
            if x1 in set2 or x2 in set1:
                return i
            set1.add(x1); set2.add(x2)
        return len(c1)

    while len(c1) > 0:
        i = get_chunk_boundry(c1, r1, c2, r2)
        c1b = c1[:i]; r1b = r1[:i]; c2b = c2[:i]; r2b = r2[:i]
        print('swapping %d elements' % i)
        A[c1b, r1b], A[c2b, r2b] = A[c2b, r2b], A[c1b, r1b]
        c1 = c1[i:]; r1 = r1[i:]; c2 = c2[i:]; r2 = r2[i:]

The slicing here will be faster if you store the indices as a single 2-dim (N x 4) array.
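A sketch of that layout (the row layout, chunk boundary value, and matrix contents below are assumed for illustration): each swap is one row with columns (c1, r1, c2, r2), so a chunk is a single contiguous slice instead of four separate ones.

```python
import numpy as np

# one row per requested swap; columns are (c1, r1, c2, r2)
swaps = np.array([
    [0, 0, 2, 9],
    [1, 1, 4, 1],
    [2, 2, 6, 6],
], dtype=int)

A = np.arange(225.0).reshape(15, 15)  # A[i, j] == 15*i + j

i = 2  # chunk boundary, as the chunking function would return it
c1, r1, c2, r2 = swaps[:i].T  # one slice, then unpack the four columns
A[c1, r1], A[c2, r2] = A[c2, r2], A[c1, r1]
```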


Here's an iterative solution that I wrote for reference purposes (as one way of handling possibly repeated elements):

    def iter2(A, rc1, rc2):
        for r, c in zip(rc1.T, rc2.T):
            j, k = tuple(r), tuple(c)
            A[j], A[k] = A[k], A[j]
        return A

For example, if:

    N = 4
    Aref = np.arange(N) + np.arange(N)[:, None] * 10
    rc1 = np.array([[0,0,0], [0,3,0]])
    rc2 = np.array([[3,3,2], [3,0,2]])

then

    print(iter2(Aref.copy(), rc1, rc2))

performs the corner swaps, and then a swap of (0,0) with (2,2):

    [[22  1  2 30]
     [10 11 12 13]
     [20 21 33 23]
     [ 3 31 32  0]]
shx2's solution seems to do the same, although for my test case there appears to be a bug in the chunking function.

For shx2's (15,15) test array, my iterative solution is 4 times faster! It does more swaps, but less work per swap. For large arrays I expect chunking to be faster, but I don't know how large we would have to go. It will also depend on the repetition pattern.

A dumb vectorized swap (ignoring the repetition problem) with my arrays:

    A[tuple(rc1)], A[tuple(rc2)] = A[tuple(rc2)], A[tuple(rc1)]

On this (15,15) example it is only 20% faster than my iterative solution. Clearly we need a much larger test case to produce serious timings.
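One way to produce such timings is `timeit` over a large random batch of swaps (a sketch; the array side, swap count, and random seed are arbitrary choices, and `iter2` is repeated from above so the snippet stands alone):

```python
import timeit
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 10_000                    # matrix side, number of swaps
A = rng.random((N, N))
rc1 = rng.integers(0, N, size=(2, M))  # sources: one (row, col) per column
rc2 = rng.integers(0, N, size=(2, M))  # destinations

def iter2(A, rc1, rc2):
    # the iterative swap from above
    for r, c in zip(rc1.T, rc2.T):
        j, k = tuple(r), tuple(c)
        A[j], A[k] = A[k], A[j]
    return A

t = timeit.timeit(lambda: iter2(A.copy(), rc1, rc2), number=3)
print('iterative: %.3f s per run' % (t / 3))
```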


The iterative solution is faster and simpler when working on a 1d ravel of the array. Even with the flatten and reshape inside this function, it is the fastest version I have found. I don't get much speed improvement from Cython over this. (The caveats about array sizes still apply.)

    def adapt_1d(A, rc1, rc2, fn=None):
        # adapt a multidim case to use a 1d iterator
        rc2f = np.ravel_multi_index(rc2, A.shape)
        rc1f = np.ravel_multi_index(rc1, A.shape)
        Af = A.flatten()
        if fn is None:
            for r, c in zip(rc1f, rc2f):
                Af[r], Af[c] = Af[c], Af[r]
        else:
            fn(Af, rc1f, rc2f)
        return Af.reshape(A.shape)
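Reusing the small (4,4) test case from earlier in this answer, `adapt_1d` reproduces the same result as `iter2` (the function is repeated here so the snippet runs on its own):

```python
import numpy as np

def adapt_1d(A, rc1, rc2, fn=None):
    # flatten the 2d indices, swap on a 1d copy, reshape back
    rc2f = np.ravel_multi_index(rc2, A.shape)
    rc1f = np.ravel_multi_index(rc1, A.shape)
    Af = A.flatten()
    if fn is None:
        for r, c in zip(rc1f, rc2f):
            Af[r], Af[c] = Af[c], Af[r]
    else:
        fn(Af, rc1f, rc2f)
    return Af.reshape(A.shape)

# the (4,4) test case from above
N = 4
Aref = np.arange(N) + np.arange(N)[:, None] * 10
rc1 = np.array([[0, 0, 0], [0, 3, 0]])
rc2 = np.array([[3, 3, 2], [3, 0, 2]])

out = adapt_1d(Aref.copy(), rc1, rc2)
print(out)
# [[22  1  2 30]
#  [10 11 12 13]
#  [20 21 33 23]
#  [ 3 31 32  0]]
```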

Source: https://habr.com/ru/post/982782/
