scipy.sparse.linalg.spsolve is clear enough, but for speed it seems you should either
pip install scikit-umfpack
or, more involved:
- build UMFPACK and AMD from SuiteSparse
- then rebuild scipy from source, with
      [umfpack]
      umfpack_libs = ...
  in site.cfg.
Otherwise, scipy.sparse.linalg falls back to the slower SuperLU by default.
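For a quick check of which factorization you are actually getting, here is a minimal sketch, assuming scikit-umfpack (or an UMFPACK-enabled scipy build) is installed; if it is not, forcing UMFPACK makes spsolve raise a RuntimeError:

    # Minimal sketch: steer spsolve between UMFPACK and SuperLU.
    # Assumes scikit-umfpack is installed; otherwise useUmfpack=True fails at solve time.
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve, use_solver

    n = 1000
    A = (sparse.random(n, n, density=0.01) + sparse.eye(n)).tocsc()
    b = np.ones(n)

    use_solver(useUmfpack=True)     # route spsolve through UMFPACK
    x_umf = spsolve(A, b)

    use_solver(useUmfpack=False)    # back to the default SuperLU path
    x_slu = spsolve(A, b)

    print(np.allclose(A @ x_umf, b), np.allclose(A @ x_slu, b))

The per-call keyword spsolve(A, b, use_umfpack=False) does the same thing for a single solve.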
Is there a scikit.sparse path?
Better by what criteria? If C / C++ is enough for you, use SuiteSparse directly. Which tool is right depends on what is convenient for you, and on your users: one, two, many. Perhaps better visualization would move your project along faster than a faster spsolve.
Some pretty obvious pros and cons of scipy.sparse:
+ Python for quick development: data entry, matrices, visualization
+ several packages are built on scipy.sparse; ask in your application area which ones are used there
- rough edges (matrix semantics are a pain), with afaik no wiki for collecting hints and code fragments
- layers upon layers, scipy.sparse -> SuiteSparse -> ... BLAS ..., which make keeping versions in sync and debugging hard.
Fwiw, rough run times vary a lot on my iMac. All with default arguments, without umfpack.
(This is NOT a REAL test, but satisficing is often enough.)
    X = sparse.rand( m, n, dens, format="csr" )
    A = 1e-6 * sparse.eye(m) + X * X.T
    linalg solvers( A, b )

    (5000, 5000)  density 0.42 %
        -- 51 msec  spsolve
            5 msec  bicg
            3 msec  bicgstab
            2 msec  cg
            4 msec  cgs
            3 msec  gmres
            4 msec  lgmres
            1 msec  minres
            6 msec  qmr
            5 msec  lsmr

    (5000, 5000)  density 0.84 %
       -- 428 msec  spsolve
           12 msec  bicg
            7 msec  bicgstab
            5 msec  cg
           10 msec  cgs
            6 msec  gmres
            8 msec  lgmres
            2 msec  minres
           13 msec  qmr
           12 msec  lsmr

    (5000, 5000)  density 1.3 %
      -- 1462 msec  spsolve
           16 msec  bicg
            9 msec  bicgstab
            7 msec  cg
           11 msec  cgs
            7 msec  gmres
           10 msec  lgmres
            1 msec  minres
           18 msec  qmr
           14 msec  lsmr
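For reference, a hypothetical re-creation of the kind of timing loop behind the table above; the solver list and defaults are assumptions, and convergence of the iterative solvers is not checked (again, not a real benchmark):

    # Hypothetical sketch of the timing loop; the test matrix A = 1e-6*I + X*X.T
    # follows the snippet above, everything else (seed, b, solver set) is assumed.
    import time
    import numpy as np
    from scipy import sparse
    from scipy.sparse import linalg as sla

    def time_solvers(m=5000, dens=0.0042, seed=0):
        X = sparse.random(m, m, density=dens, format="csr", random_state=seed)
        A = (1e-6 * sparse.eye(m) + X @ X.T).tocsr()   # symmetric positive definite
        b = np.ones(m)
        solvers = [sla.spsolve, sla.bicg, sla.bicgstab, sla.cg, sla.cgs,
                   sla.gmres, sla.lgmres, sla.minres, sla.qmr, sla.lsmr]
        print(A.shape, "density %.2f %%" % (100.0 * A.nnz / (m * m)))
        for solve in solvers:
            t0 = time.time()
            solve(A, b)                 # default tolerances; result not checked
            print("%6.0f msec  %s" % (1000 * (time.time() - t0), solve.__name__))

    time_solvers()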