Solution of a non-square linear system with R

How do you solve a non-square linear system AX = B in R?

(in the case when the system has no solution or infinitely many solutions)

Example:

A = matrix(c(0,1,-2,3,5,-3,1,-2,5,-2,-1,1), 3, 4, T)
B = matrix(c(-17,28,11), 3, 1, T)
A
     [,1] [,2] [,3] [,4]
[1,]    0    1   -2    3
[2,]    5   -3    1   -2
[3,]    5   -2   -1    1
B
     [,1]
[1,]  -17
[2,]   28
[3,]   11
2 answers

If matrix A has more rows than columns, then you should use least squares.
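
For that overdetermined case, here is a minimal sketch in R; the matrix A2 and vector b2 below are made-up illustrations, not taken from the question:

# Hypothetical overdetermined system: 4 equations, 2 unknowns (more rows than columns).
A2 <- matrix(c(1, 1,
               1, 2,
               1, 3,
               1, 4), 4, 2, byrow = TRUE)
b2 <- c(6, 5, 7, 10)

# Least-squares solution via the QR decomposition: qr.solve minimizes ||A2 %*% x - b2||.
x_ls <- qr.solve(A2, b2)
x_ls

# lsfit() from base R gives the same coefficients.
lsfit(A2, b2, intercept = FALSE)$coefficients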

If matrix A has fewer rows than columns, then you should use singular value decomposition (SVD). Each algorithm does its best to give you a solution under its own assumptions.

Here's a link that shows how to use SVD as a solver:

http://www.ecse.rpi.edu/~qji/CV/svd_review.pdf

Let me apply it to your problem and see if it works:

Your input matrix A and your right-hand-side vector B:

> A = matrix(c(0,1,-2,3,5,-3,1,-2,5,-2,-1,1), 3, 4, T)
> B = matrix(c(-17,28,11), 3, 1, T)
> A
     [,1] [,2] [,3] [,4]
[1,]    0    1   -2    3
[2,]    5   -3    1   -2
[3,]    5   -2   -1    1
> B
     [,1]
[1,]  -17
[2,]   28
[3,]   11

Take the SVD of matrix A:

> asvd = svd(A)
> asvd
$d
[1] 8.007081e+00 4.459446e+00 4.022656e-16

$u
           [,1]       [,2]       [,3]
[1,] -0.1295469 -0.8061540  0.5773503
[2,]  0.7629233  0.2908861  0.5773503
[3,]  0.6333764 -0.5152679 -0.5773503

$v
            [,1]       [,2]       [,3]
[1,]  0.87191556 -0.2515803 -0.1764323
[2,] -0.46022634 -0.1453716 -0.4694190
[3,]  0.04853711  0.5423235  0.6394484
[4,] -0.15999723 -0.7883272  0.5827720

> adiag = diag(1/asvd$d)
> adiag
          [,1]      [,2]        [,3]
[1,] 0.1248895 0.0000000 0.00000e+00
[2,] 0.0000000 0.2242431 0.00000e+00
[3,] 0.0000000 0.0000000 2.48592e+15

Here's the key: the third singular value in d is very small; correspondingly, the third diagonal element of adiag is very large. Before solving, set it to zero:

> adiag[3,3] = 0
> adiag
          [,1]      [,2] [,3]
[1,] 0.1248895 0.0000000    0
[2,] 0.0000000 0.2242431    0
[3,] 0.0000000 0.0000000    0
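
Rather than zeroing that entry by hand, you can zero the reciprocal of every singular value that falls below a tolerance. The sketch below uses a relative tolerance similar in spirit to what pseudo-inverse routines typically use; the exact cutoff is an assumption, not something prescribed by the answer above:

# Zero the reciprocals of all numerically-zero singular values.
# The tolerance (relative to the largest singular value) is a common heuristic.
tol   <- max(dim(A)) * .Machine$double.eps * max(asvd$d)
dinv  <- ifelse(asvd$d > tol, 1/asvd$d, 0)
adiag <- diag(dinv)
adiag   # same matrix as the manually zeroed adiag above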

Now let's calculate the solution (see slide 16 in the link I gave you above):

> solution = asvd$v %*% adiag %*% t(asvd$u) %*% B
> solution
          [,1]
[1,]  2.411765
[2,] -2.282353
[3,]  2.152941
[4,] -3.470588

Now that we have a solution, substitute it back to see if it gives us the same B:

> check = A %*% solution
> check
     [,1]
[1,]  -17
[2,]   28
[3,]   11

That is the B you started with, so I think we are good.

Here's another nice discussion of SVD from the AMS:

http://www.ams.org/samplings/feature-column/fcarc-svd


The goal is to solve Ax = b

where A is p by q, x is q by 1, and b is p by 1, and we want to find x given A and b.

Approach 1: Generalized inverse (Moore-Penrose), https://en.wikipedia.org/wiki/Generalized_inverse

Multiplying both sides of the equation by A', the transpose of A, we obtain

A'Ax = A'b

Note that A'A is a q by q matrix. One way to proceed is to multiply both sides by the inverse of A'A, which gives

x = (A'A)^{-1} A'b

This is the generalized-inverse approach: G = (A'A)^{-1} A' is a pseudoinverse of A.
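
As a sketch, this formula can be coded directly whenever A'A is invertible, i.e. when A has full column rank. For the 3 by 4 matrix A in the question A'A is singular, so the example below uses a hypothetical full-column-rank matrix A3 purely for illustration.

# Hypothetical tall, full-column-rank example (not the A from the question,
# whose A'A is singular because that A only has rank 2).
A3 <- matrix(c(1, 0,
               1, 1,
               1, 2), 3, 2, byrow = TRUE)
b3 <- c(1, 2, 2)

# x = (A'A)^{-1} A'b, coded directly from the normal equations.
solve(t(A3) %*% A3) %*% t(A3) %*% b3

For the A and B from the question, MASS::ginv computes the Moore-Penrose pseudoinverse (via SVD) and handles the rank deficiency: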

library(MASS)
ginv(A) %*% B
#           [,1]
#[1,]  2.411765
#[2,] -2.282353
#[3,]  2.152941
#[4,] -3.470588
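
As in the SVD answer above, you can substitute the result back as a quick sanity check; the residual should be essentially zero here because, as that answer already showed, B lies in the column space of A:

# Sanity check: the pseudoinverse solution reproduces B (up to rounding).
x_mp <- ginv(A) %*% B
A %*% x_mp                 # should print -17, 28, 11 again
max(abs(A %*% x_mp - B))   # residual, essentially zero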

Approach 2: Generalized inverse using SVD.

@duffymo used SVD to get the pseudo-inverse of A.

However, the last elements of svd(A)$d may not always be as small as in this example, so perhaps you cannot use this method as is. Here is an example where none of the singular values of A are close to zero.

A <- as.matrix(iris[11:13, -5])
A
#    Sepal.Length Sepal.Width Petal.Length Petal.Width
# 11          5.4         3.7          1.5         0.2
# 12          4.8         3.4          1.6         0.2
# 13          4.8         3.0          1.4         0.1

svd(A)$d
# [1] 10.7820526  0.2630862  0.1677126

One option is to look at the singular values of cor(A):

svd(cor(A))$d
# [1] 2.904194e+00 1.095806e+00 1.876146e-16 1.155796e-17

Now it is clear that there are only two large singular values, so you can apply svd to A and keep only those two to get a pseudo-inverse, as shown below.

svda <- svd(A)
G <- svda$v[, 1:2] %*% diag(1/svda$d[1:2]) %*% t(svda$u[, 1:2])

# to get x
G %*% B
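
A quick usage check for this sketch: substituting x back gives only an approximation of B here, because the rank-2 truncation drops this A's smallest singular value, which is not numerically zero.

# With a rank-2 truncated pseudoinverse of a rank-3 matrix,
# A %*% x approximates B rather than reproducing it exactly.
x2 <- G %*% B
A %*% x2
max(abs(A %*% x2 - B))   # approximation error of the rank-2 solution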
