Find pairs of rows in a numpy array that differ only in sign

I need to find the row indices of all pairs of rows in a numpy array that differ only in sign. For example, if I have the array:

    >>> A
    array([[ 0,  1,  2],
           [ 3,  4,  5],
           [ 0, -1, -2],
           [ 9,  5,  6],
           [-3, -4, -5]])

I would like the result to be [(0,2),(1,4)]

I know how to find unique rows with numpy.unique, so my first thought was to concatenate the array with its own negation, i.e. numpy.concatenate((A, -A)), and then find the non-unique rows, but I got stuck on how to extract the information I need from that. Also, the array can be quite large, so doubling it by stacking it with its negation may be a bad idea.
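As a sketch, the concatenation idea can be completed roughly like this (my own illustration, not from any answer below; the final pairing step here is naive and quadratic, which is exactly the cost to avoid on large arrays):

```python
import numpy as np

A = np.array([[0, 1, 2], [3, 4, 5], [0, -1, -2], [9, 5, 6], [-3, -4, -5]])
N = len(A)

# Stack A on top of its negation; np.unique labels equal rows identically,
# so A[i] == -A[j] exactly when rows i and N + j share a label.
B = np.concatenate((A, -A))
_, inv = np.unique(B, axis=0, return_inverse=True)
inv = inv.ravel()                       # guard against shape quirks across versions

lab_A, lab_negA = inv[:N], inv[N:]      # labels of A's rows and of -A's rows
ii, jj = np.nonzero(lab_A[:, None] == lab_negA[None, :])
pairs = [(int(i), int(j)) for i, j in zip(ii, jj) if i < j]
print(pairs)                            # [(0, 2), (1, 4)]
```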

I can get the correct answer by simply looping over the array and checking whether each row equals the negation of another row, but that takes a long time. I would like something as fast as numpy.unique.

I have already removed all duplicate rows from A, if that makes any difference.

+5
6 answers

Here's one approach based mostly on NumPy -

    def group_dup_rowids(a):
        sidx = np.lexsort(a.T)
        b = a[sidx]
        m = np.concatenate(([False], (b[1:] == b[:-1]).all(1), [False]))
        idx = np.flatnonzero(m[1:] != m[:-1])
        C = sidx.tolist()
        return [C[i:j] for i,j in zip(idx[::2], idx[1::2]+1)]

    out = group_dup_rowids(np.abs(a))

Sample run -

    In [175]: a
    Out[175]:
    array([[ 0,  1,  2],
           [ 3,  4,  5],
           [ 0, -1, -2],
           [ 9,  5,  6],
           [-3, -4, -5]])

    In [176]: group_dup_rowids(np.abs(a))
    Out[176]: [[0, 2], [1, 4]]
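To see why this works, here is a small illustration (mine, not part of the answer) of the two key steps: np.lexsort brings equal rows of np.abs(a) next to each other, and the adjacent-row comparison then marks the runs of equal rows:

```python
import numpy as np

a = np.array([[0, 1, 2], [3, 4, 5], [0, -1, -2], [9, 5, 6], [-3, -4, -5]])
aa = np.abs(a)

sidx = np.lexsort(aa.T)          # argsort rows so equal |rows| become adjacent
b = aa[sidx]
print(sidx.tolist())             # [0, 2, 1, 4, 3] -- rows 0,2 and 1,4 are now neighbours
print((b[1:] == b[:-1]).all(1))  # [ True False  True False] -- marks the equal runs
```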

Exact negation case

If you are looking only for paired matches that are exact negations of each other, we just need a small modification -

    def group_dup_rowids_negation(ar):
        a = np.abs(ar)
        sidx = np.lexsort(a.T)
        b = ar[sidx]
        m = np.concatenate(([False], (b[1:] == -b[:-1]).all(1), [False]))
        idx = np.flatnonzero(m[1:] != m[:-1])
        C = sidx.tolist()
        return [C[i:j] for i,j in zip(idx[::2], idx[1::2]+1)]

Sample run -

    In [354]: a
    Out[354]:
    array([[ 0,  1,  2],
           [ 3,  4,  5],
           [ 0, -1, -2],
           [ 9,  5,  6],
           [-3, -4, -5]])

    In [355]: group_dup_rowids_negation(a)
    Out[355]: [[0, 2], [1, 4]]

    In [356]: a[-1] = [-3,4,-5]

    In [357]: group_dup_rowids_negation(a)
    Out[357]: [[0, 2]]

Runtime test

Other working approaches -

    # @Joe Iddon soln
    def for_for_if_listcompr(a):
        return [(i, j) for i in range(len(a))
                for j in range(i+1, len(a)) if all(a[i] == -a[j])]

    # @dkato soln
    def find_pairs(A):
        res = []
        for r1 in range(len(A)):
            for r2 in range(r1+1, len(A)):
                if all(A[r1] == -A[r2]):
                    res.append((r1, r2))
        return res

Timings -

    In [492]: # Setup bigger input case
         ...: import pandas as pd
         ...: np.random.seed(0)
         ...: N = 2000 # datasize decider
         ...: a0 = np.random.randint(0,9,(N,10))
         ...: a = a0[np.random.choice(len(a0),4*N)]
         ...: a[np.random.choice(len(a),2*N, replace=0)] *= -1
         ...: a = pd.DataFrame(a).drop_duplicates().values

    In [493]: %timeit for_for_if_listcompr(a)
         ...: %timeit find_pairs(a)
    1 loop, best of 3: 6.1 s per loop
    1 loop, best of 3: 6.05 s per loop

    In [494]: %timeit group_dup_rowids_negation(a)
    100 loops, best of 3: 2.05 ms per loop

Further improvements

    def group_dup_rowids_negation_mod1(ar):
        a = np.abs(ar)
        sidx = np.lexsort(a.T)
        b = ar[sidx]
        dp = view1D(b)
        dn = view1D(-b)
        m = np.concatenate(([False], dp[1:] == dn[:-1], [False]))
        return zip(sidx[m[1:]], sidx[m[:-1]])

    def group_dup_rowids_negation_mod2(ar):
        a = np.abs(ar)
        sidx = lexsort_cols_posnum(a)
        b = ar[sidx]
        dp = view1D(b)
        dn = view1D(-b)
        m = np.concatenate(([False], dp[1:] == dn[:-1], [False]))
        return zip(sidx[m[1:]], sidx[m[:-1]])

Helper functions:

    # https://stackoverflow.com/a/44999009/ @Divakar
    def view1D(a): # a is a 2D array
        a = np.ascontiguousarray(a)
        void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
        return a.view(void_dt).ravel()

    # Used to sort rows by treating each row as an indexing tuple,
    # converting each row to a scalar and getting argsort indices
    def lexsort_cols_posnum(ar):
        shp = ar.max(0)+1
        s = np.concatenate((np.asarray(shp[1:])[::-1].cumprod()[::-1],[1]))
        return ar.dot(s).argsort()
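To see what view1D buys us: each row collapses into a single np.void scalar, so whole-row equality becomes a plain 1D elementwise comparison instead of an .all(1) over a 2D comparison. A tiny check (my own illustration, not part of the answer):

```python
import numpy as np

def view1D(a):
    # view each row's bytes as one opaque void scalar
    a = np.ascontiguousarray(a)
    void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    return a.view(void_dt).ravel()

b = np.array([[0, 1, 2], [0, 1, 2], [3, 4, 5]])
v = view1D(b)
print(v.shape)                                 # (3,) -- one scalar per row
print(bool(v[0] == v[1]), bool(v[0] == v[2]))  # True False
```

Void scalars compare by raw bytes, which is why identical rows map to equal scalars.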

Runtime test (borrowed from @Paul Panzer's benchmarking setup) -

    In [628]: N = 50000 # datasize decider
         ...: a0 = np.random.randint(0,99,(N,3))
         ...: a = a0[np.random.choice(len(a0),4*N)]
         ...: a[np.random.choice(len(a),2*N, replace=0)] *= -1
         ...: # OP says no dups
         ...: a = np.unique(a, axis=0)
         ...: np.random.shuffle(a)

    In [629]: %timeit use_unique(a) # @Paul Panzer soln
    10 loops, best of 3: 33.9 ms per loop

    In [630]: %timeit group_dup_rowids_negation(a)
    10 loops, best of 3: 54.1 ms per loop

    In [631]: %timeit group_dup_rowids_negation_mod1(a)
    10 loops, best of 3: 37.4 ms per loop

    In [632]: %timeit group_dup_rowids_negation_mod2(a)
    100 loops, best of 3: 17.3 ms per loop
+6

You can do this in a one-liner:

 [(i, j) for i in range(len(a)) for j in range(i+1, len(a)) if all(abs(a[i]) == abs(a[j]))] 

which for your a gives:

 [(0, 2), (1, 4)] 

So, we basically use nested for-loops to iterate through each pair of rows - i and j. Then we check whether every element (using all) of the first row is equal (==) to the corresponding element of the other row. However, to introduce the absolute-value aspect, we first take the abs() of each row before comparing.


Oh, and for exact negation:

 [(i, j) for i in range(len(a)) for j in range(i+1, len(a)) if all(a[i] == -a[j])] 

which gives the same result for this example, but will obviously differ for other arrays.

+2

Try:

    A = [[0,1,2],[3,4,5],[0,-1,-2],[9,5,6],[-3,-4,-5]]
    outlist = []
    c = 1
    while len(A) > 1:
        b = list(map(lambda x: -x, A[0]))
        A = A[1:]
        for i in range(len(A)):
            if A[i] == b:
                outlist.append((c-1, c+i))
        c += 1
    print(outlist)

Output:

 [(0, 2), (1, 4)] 
+1

Here is a variant of Joe Iddon's function. The main difference is the if-statement: if a pair such as [1, 2, 3] and [-1, 2, 3] should count as a match, then Joe's abs-based if-statement is the correct one; this version only matches rows that are exact negations of each other.

    def find_pairs(A):
        res = []
        for r1 in range(len(A)):
            for r2 in range(r1+1, len(A)):
                if all(A[r1] == -A[r2]):
                    res.append((r1, r2))
        return res
+1

Here is a fast np.unique-based solution. NumPy 1.13 is required for this (it introduced the axis keyword for np.unique).

    import numpy as np

    # Divakar's method for reference
    def group_dup_rowids_negation(ar):
        a = np.abs(ar)
        sidx = np.lexsort(a.T)
        b = ar[sidx]
        m = np.concatenate(([False], (b[1:] == -b[:-1]).all(1), [False]))
        idx = np.flatnonzero(m[1:] != m[:-1])
        C = sidx.tolist()
        return [C[i:j] for i,j in zip(idx[::2],idx[1::2]+1)]

    def use_unique(a):
        sign = np.sign(a)
        nz = np.flatnonzero(sign)
        firstnz = np.searchsorted(nz, np.arange(0, a.size, a.shape[1]))
        a_nrm = np.where(sign.ravel()[nz[firstnz], None]==-1, -a, a)
        uniq, idx, inv, cnt = np.unique(a_nrm, True, True, True, axis=0)
        dup = np.flatnonzero(cnt==2)
        out = np.empty((len(dup), 2), dtype=int)
        out[:, 0] = idx[dup]
        idx[inv] = np.arange(len(inv))
        out[:, 1] = idx[dup]
        return out

    N = 50000 # datasize decider
    a0 = np.random.randint(0,99,(N,3))
    a = a0[np.random.choice(len(a0),4*N)]
    a[np.random.choice(len(a),2*N, replace=0)] *= -1
    # OP says no dups
    a = np.unique(a, axis=0)
    np.random.shuffle(a)

    idxd = np.array(group_dup_rowids_negation(a))
    idxp = use_unique(a)
    assert len(idxd) == len(idxp)
    assert not np.any(np.sum(a[idxd, :], axis=1))
    assert not np.any(np.sum(a[idxp, :], axis=1))
    assert {frozenset(i) for i in idxd} == {frozenset(i) for i in idxp}

    from timeit import timeit
    gl = {'a': a}
    for fun, tag in [(group_dup_rowids_negation, 'D '), (use_unique, 'pp')]:
        gl['f'] = fun
        print(tag, timeit('f(a)', number=10, globals=gl))

Sample output:

    D  0.5263204739894718
    pp 0.3610327399801463
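The trick in use_unique is sign normalization: flip each row so that its first nonzero entry is positive, which maps a row and its exact negation onto the same normalized row; np.unique with counts then exposes the pairs. A slow but readable sketch of the same idea (my illustration, not the answer's vectorized code):

```python
import numpy as np

a = np.array([[0, 1, 2], [3, 4, 5], [0, -1, -2], [9, 5, 6], [-3, -4, -5]])

def normalize(row):
    # flip the row if its first nonzero entry is negative
    nz = row[row != 0]
    return -row if len(nz) and nz[0] < 0 else row

a_nrm = np.array([normalize(r) for r in a])   # rows 2 and 4 get flipped
_, inv, cnt = np.unique(a_nrm, axis=0, return_inverse=True, return_counts=True)

# any normalized row seen exactly twice came from a (row, -row) pair
pairs = [tuple(map(int, np.flatnonzero(inv.ravel() == k)))
         for k in np.flatnonzero(cnt == 2)]
print(pairs)   # [(0, 2), (1, 4)]
```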
0

Since you say the rows of your array A are unique, how about this?

    import itertools as it

    In [3]: idxs_comb = list(it.combinations(range(A.shape[0]), 2))

    In [4]: rows_comb = it.combinations(A, 2)

    In [5]: [idxs_comb[idx] for idx, pair in enumerate(rows_comb)
       ...:  if not np.any(pair[0] + pair[1])]
    Out[5]: [(0, 2), (1, 4)]

(Note the elementwise check `pair[0] + pair[1]`: testing `np.sum(pair) == 0` would also accept unrelated rows whose total sums happen to cancel.)
0

Source: https://habr.com/ru/post/1273651/
