Finding the stationary distribution of a Markov chain from its transition probability matrix

On Qaru, there are two threads related to this problem.

The approach suggested there (repeatedly taking matrix powers) is simple but very expensive: for a transition matrix of order n, every iteration performs a matrix-matrix multiplication at a cost of O(n^3).
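For reference, a minimal sketch of that brute-force matrix-power approach (my reconstruction from the description; it uses the matrix P defined further below):

 ## brute force (sketch): square the matrix until its rows stop changing;
 ## each pass is a matrix-matrix product costing O(n^3)
 statP <- P
 repeat {
   newP <- statP %*% statP
   if (max(abs(newP - statP)) < 1e-10) break
   statP <- newP
 }
 statP[1, ]  ## every row approximates the stationary distribution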

Is there a more efficient way to do this? One idea that comes to mind is to use eigendecomposition. A Markov matrix is known to:

  • be diagonalizable over the complex numbers: A = E * D * E^{-1} ;
  • have a real eigenvalue 1, with all other (complex) eigenvalues of modulus less than one.

The stationary distribution is the eigenvector associated with the eigenvalue 1, i.e. the first eigenvector.
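Both properties are easy to check numerically (a quick sketch; it uses the matrix P defined just below):

 ## moduli of the eigenvalues (sketch): the largest is 1; for an
 ## irreducible, aperiodic chain all the others are strictly smaller
 round(Mod(eigen(P)$values), 6)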

OK, the theory sounds good, but I can't get it to work. Take the matrix P from the first related question:

 P <- structure(c(0, 0.1, 0, 0, 0, 0, 0, 0.1, 0.2, 0, 0, 0, 0, 0,
                  0.2, 0.3, 0, 0, 0.5, 0.4, 0.3, 0.5, 0.4, 0, 0, 0, 0, 0,
                  0.6, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0.6), .Dim = c(6L, 6L))

If I do this:

 Re(eigen(P)$vectors[, 1])
 # [1] 0.4082483 0.4082483 0.4082483 0.4082483 0.4082483 0.4082483

What is happening? According to the previous questions, the stationary distribution is:

 # [1] 0.002590673 0.025906737 0.116580322 0.310880848 0.272020713 0.272020708 
2 answers

Your vector y = Re(eigen(P)$vectors[, 1]) is not a distribution (it does not sum to one), and it solves P y = y, not x' P = x'. The solution from your related Q&A approximately solves the latter:

 x <- c(0.00259067357512953, 0.0259067357512953, 0.116580310880829,
        0.310880829015544, 0.272020725388601, 0.272020725388601)
 all(abs(x %*% P - x) < 1e-10)
 # TRUE
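Both claims about y can be checked directly (a quick sketch):

 ## y is a right eigenvector of P: it satisfies P y = y, but y' P != y'
 y <- Re(eigen(P)$vectors[, 1])
 all(abs(P %*% y - y) < 1e-10)
 # TRUE
 all(abs(y %*% P - y) < 1e-10)
 # FALSE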

By transposing P, you can reuse your eigenvalue-based approach:

 x2 <- Re(eigen(t(P))$vectors[, 1])
 x2 <- x2 / sum(x2)
 all(abs(x2 %*% P - x2) < 1e-10)
 # TRUE

In this case, this is simply another way of finding the stationary vector.


Well, to use eigendecomposition, we need to work with t(P) .

The definition of the transition probability matrix differs between probability/statistics and linear algebra. In statistics, each row of P sums to 1, while in linear algebra each column of P sums to 1. Therefore, instead of eigen(P) we need eigen(t(P)) :

 e <- Re(eigen(t(P))$vectors[, 1])
 e / sum(e)
 # [1] 0.002590673 0.025906737 0.116580322 0.310880848 0.272020713 0.272020708
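The difference in conventions can be confirmed directly on P (a quick check):

 rowSums(P)  ## all equal to 1: the statistics convention
 colSums(P)  ## not all equal to 1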

As we can see, we only used the first eigenvector, i.e. the eigenvector associated with the largest eigenvalue. So there is no need to compute all eigenvalues/eigenvectors with eigen ; the power method can be used to find just the eigenvector of the largest eigenvalue. Let's implement this in R:

 stydis1 <- function (A) {
   n <- dim(A)[1L]
   ## check that `A` is a Markov (row-stochastic) matrix
   if (any(.rowSums(A, n, n) != 1))
     stop(" 'A' is not a Markov matrix")
   ## power method: iterate e <- t(A) %*% e, normalizing at each step
   e <- runif(n)
   oldnorm <- sqrt(c(crossprod(e)))
   repeat {
     e <- crossprod(A, e)
     newnorm <- sqrt(c(crossprod(e)))
     if (abs(newnorm / oldnorm - 1) < 1e-8) break
     e <- e / newnorm
     oldnorm <- newnorm
   }
   ## rescale `e` so that it sums to 1
   c(e / sum(e))
 }

 stydis1(P)
 # [1] 0.002590673 0.025906737 0.116580322 0.310880848 0.272020713 0.272020708

And the result is correct.


In fact, we do not need eigendecomposition at all. We can adapt the method used in your second related question: there, we took matrix powers, which, as you commented, is expensive. But why not reduce it to matrix-vector multiplication?

 stydis2 <- function (A) {
   n <- dim(A)[1L]
   ## check that `A` is a Markov (row-stochastic) matrix
   if (any(.rowSums(A, n, n) != 1))
     stop(" 'A' is not a Markov matrix")
   ## direct computation: iterate b <- t(A) %*% b; no renormalization is
   ## needed, since b remains a probability distribution at every step
   b <- A[1, ]
   oldnorm <- sqrt(c(crossprod(b)))
   repeat {
     b <- crossprod(A, b)
     newnorm <- sqrt(c(crossprod(b)))
     if (abs(newnorm / oldnorm - 1) < 1e-8) break
     oldnorm <- newnorm
   }
   ## return the stationary distribution
   c(b)
 }

 stydis2(P)
 # [1] 0.002590673 0.025906737 0.116580322 0.310880848 0.272020713 0.272020708

We start from an arbitrary initial distribution, say A[1, ] , and iteratively apply the transition matrix until the distribution converges. Again, the result is correct.
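A side note on speed (standard power-method theory, not part of the original answer): the iteration error shrinks geometrically at a rate given by the second-largest eigenvalue modulus of P, so a quick look at that value tells you how fast stydis2 will converge:

 ## rate of geometric convergence (sketch): the closer the second-largest
 ## eigenvalue modulus is to 1, the slower the iteration converges
 sort(Mod(eigen(P)$values), decreasing = TRUE)[2]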

