SciPy eigh gives negative eigenvalues for a positive semidefinite matrix

I am having problems with the SciPy eigh function returning negative eigenvalues for positive semidefinite matrices. Below is a MWE.

The hess_R function returns a positive semidefinite matrix: it is the sum of a rank-one matrix and a diagonal matrix with non-negative elements (a rank-one matrix of the form c * ones(d, d) with c > 0 is PSD, and adding a non-negative diagonal preserves positive semidefiniteness).

    import numpy as np
    from scipy import linalg as LA

    def hess_R(x):
        d = len(x)
        H = np.ones(d * d).reshape(d, d) / (1 - np.sum(x))**2
        H = H + np.diag(1 / (x**2))
        return H.astype(np.float64)

    x = np.array([9.98510710e-02, 9.00148922e-01, 4.41547488e-10])
    H = hess_R(x)
    w, v = LA.eigh(H)
    print(w)

The printed eigenvalues:

 [ -6.74055241e-271 4.62855397e+016 5.15260753e+018] 

If I replace np.float64 with np.float32 in the return statement of hess_R, I get

 [ -5.42905303e+10 4.62854925e+16 5.15260506e+18] 

so I assume this is some kind of precision problem.

Is there any way to fix this? Technically I don't need to use eigh, but I think this is the underlying cause of my other errors (taking square roots of these matrices, getting NaNs, etc.).

1 answer

I think the problem is that you are hitting the limits of floating-point precision. A good rule of thumb for linear algebra results is that they are accurate to about one part in 10^8 for float32 and about one part in 10^16 for float64. It appears that the ratio of your smallest to largest eigenvalue is less than 10^-16. Because of this, the smallest returned value cannot really be trusted and will depend on the implementation details of the eigenvalue solver you use.
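As a quick sanity check of that rule of thumb (using the smallest-eigenvalue magnitude of roughly 4e2 that appears in the solver outputs below):

    import numpy as np

    # Relative precision floor of each dtype
    print(np.finfo(np.float32).eps)   # ~1.19e-07, i.e. one part in ~10^7
    print(np.finfo(np.float64).eps)   # ~2.22e-16, i.e. one part in ~10^16

    # Ratio of the smallest to the largest eigenvalue of H
    # (magnitudes taken from the solver outputs below)
    ratio = 3.84e+02 / 5.15260753e+18
    print(ratio)                             # ~7.5e-17
    print(ratio < np.finfo(np.float64).eps)  # True: below the float64 floor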

For example, here are four different solvers you have available; take a look at their results:

    # using the 64-bit version
    for impl in [np.linalg.eig, np.linalg.eigh, LA.eig, LA.eigh]:
        w, v = impl(H)
        print(np.sort(w))
        reconstructed = np.dot(v * w, v.conj().T)
        print("Allclose:", np.allclose(reconstructed, H), '\n')

Output:

    [ -3.01441754e+02   4.62855397e+16   5.15260753e+18]
    Allclose: True

    [  3.66099625e+02   4.62855397e+16   5.15260753e+18]
    Allclose: True

    [ -3.01441754e+02+0.j   4.62855397e+16+0.j   5.15260753e+18+0.j]
    Allclose: True

    [  3.83999999e+02   4.62855397e+16   5.15260753e+18]
    Allclose: True

Note that they all agree on the two larger eigenvalues, but the value of the smallest eigenvalue varies from implementation to implementation. Still, in all four cases the input matrix can be reconstructed to 64-bit precision: this means the algorithms are working as expected, up to the accuracy available to them.
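If you need matrix square roots of such a matrix (the NaNs mentioned in the question), one workaround is to clamp eigenvalues below the precision floor to zero before using them. A minimal sketch along those lines, assuming the matrix is known to be positive semidefinite (psd_sqrtm is just an illustrative name, not a library function):

    import numpy as np
    from scipy import linalg as LA

    def psd_sqrtm(A):
        # Matrix square root of a symmetric PSD matrix: clamp tiny
        # negative eigenvalues (numerical noise) to zero before sqrt.
        w, v = LA.eigh(A)
        w = np.clip(w, 0, None)
        return np.dot(v * np.sqrt(w), v.conj().T)

    S = psd_sqrtm(H)
    print(np.allclose(np.dot(S, S), H))  # True: reconstructs H to float64 precision

Since the clamped eigenvalue is below the precision floor relative to the largest one anyway, this discards nothing that float64 could represent.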
