I saw this question, and it relates to my attempt to compute the dominant eigenvector in Python with NumPy.
I am trying to compute the dominant eigenvector of an n×n matrix without getting too deep into heavy linear algebra. I have done a superficial study of determinants, eigenvalues, eigenvectors, and characteristic polynomials, but I would rather rely on the NumPy implementation to find the eigenvalues, since I believe it is more efficient than anything I would write myself.
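(For context, the hand-rolled approach I was hoping to avoid is power iteration. Here is a minimal sketch of what I mean; the function name and iteration count are just my own choices:)

```python
import numpy as np

def power_iteration(A, iters=100):
    """Repeatedly apply A and renormalize; this converges to the
    eigenvector of the eigenvalue with the largest magnitude."""
    v = np.ones(A.shape[0]) / A.shape[0]  # arbitrary starting vector
    for _ in range(iters):
        v = A.dot(v)
        v = v / np.linalg.norm(v)  # renormalize to unit length
    return v

markov = np.array([[0.8, 0.2], [0.1, 0.9]])
print(power_iteration(markov))
```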
The problem I ran into was that I used this code:
from numpy import array
from numpy.linalg import eig

markov = array([[0.8, 0.2], [0.1, 0.9]])
print(eig(markov))
... as a test, and got this result:
(array([ 0.7,  1. ]), array([[-0.89442719, -0.70710678],
       [ 0.4472136 , -0.70710678]]))
The problem, for me, is that according to the Perron-Frobenius theorem, all components of the second eigenvector should be positive (since, according to Wikipedia, "a real square matrix with positive entries has a unique largest real eigenvalue and the corresponding eigenvector has strictly positive components").
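To double-check that I wasn't misreading the output, I verified that the returned column really does satisfy the eigenvector equation A·v = λ·v for the largest eigenvalue (I'm assuming here that the columns of the second returned array are paired with the eigenvalues in order, which is what the NumPy docs describe):

```python
import numpy as np

markov = np.array([[0.8, 0.2], [0.1, 0.9]])
vals, vecs = np.linalg.eig(markov)
i = np.argmax(vals)      # index of the largest eigenvalue (1.0)
v = vecs[:, i]           # the column paired with it
print(np.allclose(markov.dot(v), vals[i] * v))  # prints True
```

So the all-negative column is a genuine eigenvector for eigenvalue 1; it just isn't the strictly positive one the theorem talks about.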
Does anyone know what is going on here? Is NumPy broken? Did I find an inconsistency in ZFC? Or is it just me being a noob at linear algebra, Python, NumPy, or some combination of the three?
Thanks for any help you can provide. Also, this is my first SO question (though I was active on cstheory.se), so any advice on improving the clarity of my question would also be appreciated.