Numerically stable inverse of a 2x2 matrix

In the numerical solver I'm working on in C, I need to invert a 2x2 matrix; the inverse is then multiplied onto another matrix from the right:

C = B . inv(A) 

I used the following definition of an inverted 2x2 matrix:

 a = A[0][0]; b = A[0][1]; c = A[1][0]; d = A[1][1];
 invA[0][0] =  d / (a*d - b*c);
 invA[0][1] = -b / (a*d - b*c);
 invA[1][0] = -c / (a*d - b*c);
 invA[1][1] =  a / (a*d - b*c);

In the first few iterations of my solver, this seems to give the correct answers, however, after a few steps things start to grow and eventually explode.

Comparing against an implementation using SciPy, I have found that the same math does not explode there. The only difference I can find is that the SciPy code uses scipy.linalg.inv() , which internally uses LAPACK to perform the inversion.

When I replace the inv() call with the calculations above, the Python version explodes too, so I'm sure this is the problem. Small differences in the calculations creep in, which makes me think that this is a numerical problem; not entirely unexpected for an inversion operation.

I use double precision (64-bit) floats, hoping that numerical problems would not be an issue, but apparently that is not the case.

But: I would like to solve this in my C code without having to pull in a library such as LAPACK, because the whole reason for porting to pure C is to make it work on the target system. Moreover, I would like to understand the problem rather than just call a black box. In the end, I would like it to work with the same precision, if possible.

So my question is: for such a small matrix, is there a numerically more stable way to calculate the inverse of A?

Thanks.

Edit: I'm currently trying to figure out whether I can avoid the inverse altogether by solving for C directly.

+4
5 answers

Do not invert the matrix. Almost always, whatever you are using the inverse for can be done faster and more accurately without inverting the matrix. Matrix inversion is inherently unstable, and mixing that with floating point numbers is asking for trouble.

Saying C = B . inv(A) is the same as saying you want to solve C . A = B for C, or equivalently, taking transposes, A^T C^T = B^T. You can accomplish this by splitting each of B^T and C^T into its two columns: solving A^T C1 = B1 and A^T C2 = B2 produces C.
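
A minimal C sketch of this approach (the helper names solve2x2 and right_divide are hypothetical, and the double[2][2] layout is assumed from the question). Each row of C comes from one stable solve against the transpose of A; the solver uses partial pivoting, as in the pseudocode of the answer further down:

 #include <math.h>

 /* Solve the 2x2 system M * x = r using elimination with partial pivoting. */
 static int solve2x2(const double M[2][2], const double r[2], double x[2])
 {
     double a = M[0][0], b = M[0][1], c = M[1][0], d = M[1][1];
     double r0 = r[0], r1 = r[1], t;
     if (fabs(c) > fabs(a)) {        /* pivot: put the larger entry on top */
         t = a; a = c; c = t;
         t = b; b = d; d = t;
         t = r0; r0 = r1; r1 = t;
     }
     if (a == 0.0) return -1;        /* singular */
     double m = c / a;               /* eliminate the lower-left entry */
     double beta = d - m * b;
     if (beta == 0.0) return -1;     /* singular */
     x[1] = (r1 - m * r0) / beta;
     x[0] = (r0 - b * x[1]) / a;     /* back-substitution */
     return 0;
 }

 /* C = B * inv(A), computed as C * A = B: each row c_i of C solves A^T c_i^T = b_i^T. */
 static int right_divide(const double B[2][2], const double A[2][2], double C[2][2])
 {
     const double At[2][2] = { { A[0][0], A[1][0] }, { A[0][1], A[1][1] } };
     for (int i = 0; i < 2; ++i) {
         const double rhs[2] = { B[i][0], B[i][1] };
         double x[2];
         if (solve2x2(At, rhs, x) != 0) return -1;   /* A is singular */
         C[i][0] = x[0];
         C[i][1] = x[1];
     }
     return 0;
 }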

+5

Your code is OK; however, it risks losing accuracy in any of the four subtractions.

Consider using more advanced methods, such as matfunc.py . That code performs the inversion using a QR decomposition implemented with Householder reflections, and the result is then improved by iterative refinement.
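
For the 2x2 case the Householder-QR solve can even be written out by hand. Below is a minimal C sketch (the function name is hypothetical, and this is not matfunc.py itself, which handles general n x n matrices): one Householder reflection H = I - 2*v*v^T/(v^T*v) zeroes the subdiagonal entry, then R*x = H*rhs is solved by back-substitution:

 #include <math.h>

 /* Solve A * x = rhs via QR: one Householder reflection, then back-substitution. */
 static int qr_solve2x2(const double A[2][2], const double rhs[2], double x[2])
 {
     double a = A[0][0], c = A[1][0];
     double norm = hypot(a, c);              /* ||(a, c)|| without overflow */
     if (norm == 0.0) return -1;             /* singular first column */

     /* v = (a + sign(a)*norm, c); the sign choice avoids cancellation. */
     double v0 = a + copysign(norm, a), v1 = c;
     double vtv = v0 * v0 + v1 * v1;

     /* Apply H to the second column of A and to rhs: y -> y - 2*(v.y / v.v)*v */
     double s = 2.0 * (v0 * A[0][1] + v1 * A[1][1]) / vtv;
     double r01 = A[0][1] - s * v0, r11 = A[1][1] - s * v1;
     double t = 2.0 * (v0 * rhs[0] + v1 * rhs[1]) / vtv;
     double q0 = rhs[0] - t * v0, q1 = rhs[1] - t * v1;

     double r00 = -copysign(norm, a);        /* H maps (a, c) to (r00, 0) */
     if (r11 == 0.0) return -1;              /* singular */
     x[1] = q1 / r11;
     x[0] = (q0 - r01 * x[1]) / r00;
     return 0;
 }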

+5

Calculating via the determinant is unstable. It is better to use Gauss-Jordan elimination with partial pivoting, which you can easily work out explicitly here.

Solving the 2x2 system

Solve the system (use c, f = 1, 0 and then c, f = 0, 1 to get the two columns of the inverse):

 a * x + b * y = c
 d * x + e * y = f

In pseudocode, this reads:

 if a == 0 and d == 0 then "singular"
 if abs(a) >= abs(d):
     alpha <- d / a
     beta  <- e - b * alpha
     if beta == 0 then "singular"
     gamma <- f - c * alpha
     y <- gamma / beta
     x <- (c - b * y) / a
 else:
     swap((a, b, c), (d, e, f))
     restart
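
A direct C translation of that pseudocode might look like this (the "restart" branch becomes a swap up front; solve_pivoted is a hypothetical name):

 #include <math.h>

 /* Solves a*x + b*y = c, d*x + e*y = f with partial pivoting. */
 static int solve_pivoted(double a, double b, double c,
                          double d, double e, double f,
                          double *x, double *y)
 {
     double t;
     if (fabs(d) > fabs(a)) {       /* pivot on the larger of |a|, |d| */
         t = a; a = d; d = t;
         t = b; b = e; e = t;
         t = c; c = f; f = t;
     }
     if (a == 0.0) return -1;       /* a == 0 and d == 0: singular */
     double alpha = d / a;
     double beta  = e - b * alpha;
     if (beta == 0.0) return -1;    /* singular */
     double gamma = f - c * alpha;
     *y = gamma / beta;
     *x = (c - b * *y) / a;
     return 0;
 }

Calling it once with (c, f) = (1, 0) and once with (c, f) = (0, 1) yields the two columns of inv(A), as described above.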

This is more stable than the determinant + comatrix approach ( beta is the determinant times a constant, computed in a stable way). You can work out the full-pivoting equivalent (i.e., potentially swapping x and y, so that the first division by a is such that a is the largest entry among a, b, d, e), which may be more stable in some circumstances, but the above method works well for me.

This is equivalent to performing an LU decomposition (store gamma, beta, a, b, c if you want to keep this LU decomposition).

Computing a QR decomposition explicitly is also possible (and also very stable if you do it right), but it is slower (and involves taking square roots). The choice is yours.

Improved accuracy

If you need higher accuracy (the above method is stable, but there is some rounding error, proportional to the ratio of the eigenvalues), you can "solve for the correction".

Indeed, suppose you have solved A * x = b for x using the above method. If you now compute A * x , you will find that it is not exactly equal to b ; there is a small error:

 A * x - b = db 

Now, if you solve for dx in A * dx = db , you have

 A * (x - dx) = b + db - db - ddb = b - ddb 

where ddb is the error introduced by numerically solving A * dx = db , and it is usually much smaller than db (since db is much smaller than b ).

You can iterate this procedure, but usually a single step is enough to restore full machine accuracy.
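
One refinement step in C might look like the sketch below, reusing any stable 2x2 solver (here the hypothetical solve_pivoted() from the sketch above):

 /* One step of iterative refinement for A * x = b. */
 static void refine_once(const double A[2][2], const double b[2], double x[2])
 {
     /* Residual db = A*x - b (ideally accumulated in higher precision). */
     double db0 = A[0][0] * x[0] + A[0][1] * x[1] - b[0];
     double db1 = A[1][0] * x[0] + A[1][1] * x[1] - b[1];

     double dx0, dx1;                /* solve A * dx = db ... */
     if (solve_pivoted(A[0][0], A[0][1], db0,
                       A[1][0], A[1][1], db1, &dx0, &dx1) == 0) {
         x[0] -= dx0;                /* ... and apply the correction x - dx */
         x[1] -= dx1;
     }
 }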

+3

Use the Jacobi method, an iterative method that involves "inverting" only the main diagonal of A, which is very simple to do and less prone to numerical instability than inverting the whole matrix.
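
For a 2x2 system A * x = b, a minimal C sketch of the Jacobi iteration is below (note that it only converges when A is diagonally dominant; to invert A, run it with b = (1, 0) and b = (0, 1) to get the two columns of inv(A)):

 /* A minimal Jacobi iteration for A * x = b (2x2 case).
    Converges only if A is diagonally dominant. */
 static void jacobi2x2(const double A[2][2], const double b[2],
                       double x[2], int iterations)
 {
     for (int k = 0; k < iterations; ++k) {
         /* Both components are updated from the previous iterate. */
         double x0 = (b[0] - A[0][1] * x[1]) / A[0][0];
         double x1 = (b[1] - A[1][0] * x[0]) / A[1][1];
         x[0] = x0;
         x[1] = x1;
     }
 }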

+1

I agree with Jean-Victor that you should probably use the Jacobi method. Here is my example:

 import numpy as np

 # Helper functions:
 def check_zeros(A, I, row, col=0):
     """ Returns, recursively, the next matrix row A[i] with a non-zero pivot """
     if A[row, col] != 0:
         return row
     else:
         if row + 1 == len(A):
             return "The Determinant is Zero"
         return check_zeros(A, I, row + 1, col)

 def swap_rows(M, I, row, index):
     """ Swaps two rows in a matrix """
     swap = M[row].copy()
     M[row], M[index] = M[index], swap
     swap = I[row].copy()
     I[row], I[index] = I[index], swap

 # Your Matrix M
 M = np.array([[0,1,5,2],[0,4,9,23],[5,4,3,5],[2,3,1,5]], dtype=float)
 I = np.identity(len(M))
 M_copy = M.copy()
 rows = len(M)

 for i in range(rows):
     index = check_zeros(M, I, i, i)
     while index > i:
         swap_rows(M, I, i, index)
         print("swapped")
         index = check_zeros(M, I, i, i)
     I[i] = I[i] / M[i, i]
     M[i] = M[i] / M[i, i]
     for j in range(rows):
         if j != i:
             I[j] = I[j] - I[i] * M[j, i]
             M[j] = M[j] - M[i] * M[j, i]

 print(M)
 print(I)  # The inverse matrix
+1
