scipy.sparse has several formats, but only a couple of them support an efficient set of numerical operations, and unfortunately those are the ones that are hardest to extend to more dimensions.
dok uses a tuple of indices as the dictionary key, so it would be easy to generalize from 2d to 3d or more. coo has row , col and data attributes; clearly a third index attribute (depth?) could be added without much trouble. lil would need lists nested inside lists, which could get messy.
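To show how little the dok idea depends on dimensionality, here is a minimal, hypothetical sketch of a 3d dok-style container; the Dok3D name and its methods are my own invention, not anything in scipy:

    # Hypothetical dok-style 3D container: a plain dict keyed by (i, j, k) tuples.
    class Dok3D:
        def __init__(self, shape):
            self.shape = shape
            self.store = {}              # {(i, j, k): value}

        def __setitem__(self, key, value):
            i, j, k = key                # only explicit 3-tuples handled here
            self.store[(i, j, k)] = value

        def __getitem__(self, key):
            return self.store.get(key, 0.0)   # missing entries are implicit zeros

    A = Dok3D((4, 5, 6))
    A[1, 2, 3] = 7.5
    print(A[1, 2, 3], A[0, 0, 0])        # 7.5 0.0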
But csr and csc store the array in data , indices and indptr arrays. This format was worked out many years ago by mathematicians working on linear algebra problems, along with efficient mathematical operations (especially matrix multiplication). (The relevant paper is cited in the source code.)
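For reference, here is what those three arrays look like on a small example; this only inspects the existing 2d format, nothing 3d:

    import numpy as np
    from scipy import sparse

    M = sparse.csr_matrix(np.array([[1, 0, 2],
                                    [0, 0, 3],
                                    [4, 5, 0]]))
    print(M.data)     # stored values:                      [1 2 3 4 5]
    print(M.indices)  # column index of each stored value:  [0 2 2 0 1]
    print(M.indptr)   # row boundaries into data/indices:   [0 2 3 5]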
So representing a three-dimensional sparse array is not the problem; implementing efficient operations on it is what might require fundamental mathematical research.
Do you really need the 3D layout for your calculations? Could you, for example, collapse two of the dimensions into one, at least temporarily?
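A rough sketch of that idea, with an arbitrary toy array: reshape the (n, m, k) data to (n*m, k), do the work in an ordinary 2d sparse matrix, and reshape back at the end.

    import numpy as np
    from scipy import sparse

    dense3d = np.zeros((4, 5, 6))
    dense3d[1, 2, 3] = 7.0
    dense3d[3, 0, 5] = 2.0

    flat2d = sparse.csr_matrix(dense3d.reshape(4 * 5, 6))   # (20, 6) sparse
    # ... efficient 2d sparse work here ...
    back3d = flat2d.toarray().reshape(4, 5, 6)
    print(np.array_equal(back3d, dense3d))                  # True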
Elementwise operations (*, +, -) work just as well on flattened data as on 2d or 3d data. np.tensordot handles nD multiplication by reshaping its inputs into 2D arrays and applying np.dot . Even when np.einsum is used on 3D arrays, the sum of products is usually taken over just one pair of dimensions (for example, "ijk,jl->ikl" ).
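A quick dense illustration of that point; the shapes are arbitrary, and the "by hand" version just spells out the reshape-and-dot trick that tensordot performs internally:

    import numpy as np

    A = np.random.rand(2, 3, 4)
    B = np.random.rand(3, 5)

    via_tensordot = np.tensordot(A, B, axes=(1, 0))           # sum over j; shape (2, 4, 5)
    via_einsum = np.einsum('ijk,jl->ikl', A, B)               # same contraction

    # move the summed axis last, flatten to 2D, use plain np.dot, reshape back
    flat = np.dot(A.transpose(0, 2, 1).reshape(2 * 4, 3), B)  # shape (8, 5)
    by_hand = flat.reshape(2, 4, 5)

    print(np.allclose(via_tensordot, via_einsum))             # True
    print(np.allclose(via_einsum, by_hand))                   # True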
A 3D representation can be conceptually convenient, but I can't think of a mathematical operation that actually requires it (as opposed to 2d or 1d).
Overall, I think you will get more speed from reshaping your arrays than from finding or implementing genuinely three-dimensional sparse operations.