I think the problem is that you're running up against the limits of floating-point precision. A good rule of thumb for linear algebra results is that they are good to about one part in 10^8 for float32 and about one part in 10^16 for float64. It appears that the ratio of your smallest to largest eigenvalue is less than 10^-16; because of this, the returned value cannot really be trusted, and will depend on the implementation details of the eigenvalue solver you use.
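To see where those limits come from, here is a minimal sketch (the diagonal matrix is a hypothetical example, not your data) comparing machine epsilon to an eigenvalue ratio:

```python
import numpy as np

# Machine epsilon sets the relative precision of each dtype:
print(np.finfo(np.float32).eps)  # ~1.19e-07, i.e. about one part in 10^8
print(np.finfo(np.float64).eps)  # ~2.22e-16, i.e. about one part in 10^16

# If the smallest/largest eigenvalue ratio falls below machine epsilon,
# the smallest eigenvalue is lost in the rounding error of the largest.
w = np.linalg.eigvalsh(np.diag([1.0, 1e18]))  # hypothetical example matrix
print(w.min() / w.max() < np.finfo(np.float64).eps)  # True: not trustworthy
```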
For example, here are four different solvers you should have available; take a look at their results:
```python
import numpy as np
import scipy.linalg as LA

# using the 64-bit (float64) version of the matrix H from the question
for impl in [np.linalg.eig, np.linalg.eigh, LA.eig, LA.eigh]:
    w, v = impl(H)
    print(np.sort(w))
    # reconstruct H from its eigendecomposition: V @ diag(w) @ V*
    reconstructed = np.dot(v * w, v.conj().T)
    print("Allclose:", np.allclose(reconstructed, H), '\n')
```
Output:
```
[ -3.01441754e+02   4.62855397e+16   5.15260753e+18]
Allclose: True 

[  3.66099625e+02   4.62855397e+16   5.15260753e+18]
Allclose: True 

[ -3.01441754e+02+0.j   4.62855397e+16+0.j   5.15260753e+18+0.j]
Allclose: True 

[  3.83999999e+02   4.62855397e+16   5.15260753e+18]
Allclose: True 
```
Notice that they all agree on the two larger eigenvalues, but the value of the smallest eigenvalue varies from implementation to implementation. Nevertheless, in all four cases the input matrix can be reconstructed to 64-bit precision: this means the algorithms are working as expected, up to the precision available to them.
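To make that spread concrete (a rough back-of-the-envelope estimate, using the largest eigenvalue from the output above): the absolute uncertainty in each eigenvalue is roughly machine epsilon times the largest eigenvalue, which here is on the same order as the smallest eigenvalue itself:

```python
import numpy as np

# Expected absolute error scale: eps * largest eigenvalue.
eps = np.finfo(np.float64).eps   # ~2.22e-16
largest = 5.15260753e+18         # from the output above
print(eps * largest)             # ~1.1e+03

# The four implementations report smallest eigenvalues between roughly
# -3e2 and +4e2, a spread well within this ~1e3 error scale, so the
# disagreement is exactly what floating-point precision predicts.
```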