How can I calculate the null space / kernel (x such that M · x = 0) of a sparse matrix in Python?

I found several examples on the Internet showing how to find the null space of a regular (dense) matrix in Python, but I could not find any examples for a sparse matrix (scipy.sparse.csr_matrix).

By null space, I mean x such that M · x = 0, where '·' is matrix multiplication. Does anyone know how to do this?

Also, in my case, I know that the null space will consist of a single vector. Can this information be used to make the method more efficient?

1 answer

This is not a complete answer, but hopefully it can serve as a starting point for one. You should be able to compute the null space using a variant of the SVD-based approach shown for dense matrices in this question:

    import numpy as np
    from scipy import sparse
    import scipy.sparse.linalg


    def rand_rank_k(n, k, **kwargs):
        "generate a random (n, n) sparse matrix of rank <= k"
        a = sparse.rand(n, k, **kwargs)
        b = sparse.rand(k, n, **kwargs)
        return a.dot(b)

    # I couldn't think of a simple way to generate a random sparse matrix with known
    # rank, so I'm currently using a dense matrix for proof of concept
    n = 100
    M = rand_rank_k(n, n - 1, density=1)

    # # this seems like it ought to work, but it doesn't:
    # u, s, vh = sparse.linalg.svds(M, k=1, which='SM')

    # this works OK, but obviously converting your matrix to dense and computing all
    # of the singular values/vectors is probably not feasible for large sparse matrices
    u, s, vh = np.linalg.svd(M.todense(), full_matrices=False)
    tol = np.finfo(M.dtype).eps * M.nnz
    null_space = vh.compress(s <= tol, axis=0).conj().T

    print(null_space.shape)
    # (100, 1)
    print(np.allclose(M.dot(null_space), 0))
    # True

If you know that the null space consists of a single vector, then in principle you only need to compute the smallest singular value/vector of M. This ought to be possible with scipy.sparse.linalg.svds:

    u, s, vh = sparse.linalg.svds(M, k=1, which='SM')
    null_space = vh.conj().ravel()

Unfortunately, scipy's svds seems to behave badly when asked for small singular values of singular or near-singular matrices, and usually either returns NaNs or raises an ArpackNoConvergence error.

I am currently not aware of an alternative truncated-SVD implementation with Python bindings that works on sparse matrices and can selectively find the smallest singular values. Maybe someone else knows of one?
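One possible workaround (my own sketch, not part of the original answer, and not tested exhaustively): the null vector of M is also the eigenvector of the symmetric matrix MᵀM belonging to its smallest eigenvalue, and scipy.sparse.linalg.eigsh supports shift-invert mode (the sigma parameter), which targets eigenvalues near a given value and tends to converge far more reliably than svds(..., which='SM'). Following the same trick as the MATLAB example below, a small nonzero sigma is used:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

# rank-deficient sparse test matrix: rank <= n - 1
n = 100
M = sparse.rand(n, n - 1, density=0.1).dot(sparse.rand(n - 1, n, density=0.1))

# M^T M is symmetric positive semi-definite; its zero eigenvalue corresponds
# to the null vector of M
A = (M.T).dot(M).tocsc()

# sigma small but nonzero: shift-invert mode factorizes (A - sigma*I) and
# finds the eigenvalue closest to sigma, i.e. the (numerically) zero one
vals, vecs = eigsh(A, k=1, sigma=1e-9, which='LM')
x = vecs[:, 0]

print(np.linalg.norm(M.dot(x)))  # should be close to zero
```

Whether this converges depends on the conditioning of MᵀM (squaring the matrix also squares its condition number), so treat it as a heuristic rather than a robust solution.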

Edit

As a side note, the second approach works quite well using MATLAB's or Octave's svds:

    >> M = rand(100, 99) * rand(99, 100);
    % svds converges much more reliably if you set sigma to something small but nonzero
    >> [U, S, V] = svds(M, 1, 1E-9);
    >> max(abs(M * V))
    ans = 1.5293e-10

Source: https://habr.com/ru/post/1234767/

