Computing the determinant directly is numerically unstable. It is better to use Gauss-Jordan elimination with partial pivoting, spelled out explicitly below so you can see exactly what is going on.
Solving the 2x2 system
Solve the system (use c, f = 1, 0 and then c, f = 0, 1 if you want the inverse of the matrix)
    a * x + b * y = c
    d * x + e * y = f
In pseudocode, this reads:
    if a == 0 and d == 0 then "singular"
    if abs(a) >= abs(d):
        alpha <- d / a
        beta  <- e - b * alpha
        if beta == 0 then "singular"
        gamma <- f - c * alpha
        y <- gamma / beta
        x <- (c - b * y) / a
    else:
        swap((a, b, c), (d, e, f))
        restart
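
For concreteness, here is a minimal C sketch of the pseudocode above (the function name solve2x2 and the error-return convention are my own choices):

    #include <math.h>

    /* Solve a*x + b*y = c, d*x + e*y = f with partial pivoting.
     * Returns 0 on success, -1 if the system is singular. */
    int solve2x2(double a, double b, double c,
                 double d, double e, double f,
                 double *x, double *y)
    {
        if (a == 0.0 && d == 0.0)
            return -1;                  /* singular: first column is zero */

        if (fabs(a) < fabs(d)) {        /* partial pivoting: divide by the */
            double t;                   /* larger of |a| and |d|           */
            t = a; a = d; d = t;
            t = b; b = e; e = t;
            t = c; c = f; f = t;
        }

        double alpha = d / a;
        double beta  = e - b * alpha;   /* = det / a */
        if (beta == 0.0)
            return -1;                  /* singular */

        double gamma = f - c * alpha;
        *y = gamma / beta;
        *x = (c - b * (*y)) / a;
        return 0;
    }

Calling it twice, with (c, f) = (1, 0) and then (0, 1), gives the two columns of the inverse, as mentioned above.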
This is more stable than determinant + cofactor matrix (in fact, beta is just the determinant divided by a, computed in a stable way). You can work out the full-pivoting equivalent (i.e., possibly also swapping x and y, so that the first division is by whichever of a, b, d, e has the largest magnitude), which may be more stable in some circumstances, but the method above works well for me.
This is equivalent to doing an LU decomposition (store alpha, beta, a, and b if you want to keep the LU factors).
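
A minimal sketch of what keeping that factorization might look like, assuming you want to re-solve for several right-hand sides (the lu2x2 struct and the function names are hypothetical, not from the text above):

    #include <math.h>

    /* Keep a, b, alpha, beta and whether the rows were swapped,
     * then re-solve cheaply for each new right-hand side (c, f). */
    typedef struct { double a, b, alpha, beta; int swapped; } lu2x2;

    int lu2x2_factor(double a, double b, double d, double e, lu2x2 *lu)
    {
        lu->swapped = (fabs(a) < fabs(d));
        if (lu->swapped) { double t; t = a; a = d; d = t; t = b; b = e; e = t; }
        if (a == 0.0) return -1;        /* both pivot candidates were zero */
        lu->a = a;
        lu->b = b;
        lu->alpha = d / a;              /* multiplier stored in L */
        lu->beta  = e - b * lu->alpha;  /* pivot stored in U (= det / a) */
        return (lu->beta == 0.0) ? -1 : 0;
    }

    void lu2x2_solve(const lu2x2 *lu, double c, double f, double *x, double *y)
    {
        if (lu->swapped) { double t = c; c = f; f = t; }
        double gamma = f - c * lu->alpha;   /* forward substitution */
        *y = gamma / lu->beta;              /* back substitution */
        *x = (c - lu->b * (*y)) / lu->a;
    }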
Computing a QR decomposition explicitly is also possible (and also very stable if you do it right), but it is slower (and involves taking square roots). The choice is yours.
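
For comparison, a sketch of the explicit QR route via a single Givens rotation (again, the names are mine; note the square root hidden inside hypot):

    #include <math.h>

    /* Solve the same 2x2 system via an explicit QR factorization. */
    int solve2x2_qr(double a, double b, double c,
                    double d, double e, double f,
                    double *x, double *y)
    {
        double r = hypot(a, d);         /* length of the first column */
        if (r == 0.0)
            return -1;                  /* first column is zero: singular */

        double cs = a / r, sn = d / r;  /* Givens rotation, Q^T applied below */
        double r12 = cs * b + sn * e;   /* R = Q^T * A is upper triangular */
        double r22 = -sn * b + cs * e;
        if (r22 == 0.0)
            return -1;                  /* singular */

        double q1 = cs * c + sn * f;    /* Q^T * rhs */
        double q2 = -sn * c + cs * f;

        *y = q2 / r22;                  /* back substitution on R */
        *x = (q1 - r12 * (*y)) / r;
        return 0;
    }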
Improved accuracy
If you need higher accuracy (the method above is stable, but there is some rounding error, roughly proportional to the ratio of the eigenvalues), you can "solve for the correction".
Suppose you have solved A * x = b for x using the method above. If you now compute A * x, you will find that it is not exactly equal to b; there is a small error:
A * x - b = db
Now, if you solve A * dx = db for dx, you have
A * (x - dx) = b + db - db - ddb = b - ddb
where ddb is the error made when numerically solving A * dx = db, which is usually much smaller than db (since db is much smaller than b).
You can iterate this procedure, but one step is usually enough to recover full machine accuracy.
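
A sketch of one refinement step, reusing the solve2x2 sketch from above (computing the residual in long double is my own choice, to keep db itself accurate):

    /* One refinement step: db = A*x - b, solve A*dx = db, return x - dx. */
    void refine2x2(double a, double b, double c,
                   double d, double e, double f,
                   double *x, double *y)
    {
        long double dbx = (long double)a * *x + (long double)b * *y - c;
        long double dby = (long double)d * *x + (long double)e * *y - f;

        double dx, dy;
        if (solve2x2(a, b, (double)dbx, d, e, (double)dby, &dx, &dy) == 0) {
            *x -= dx;
            *y -= dy;
        }
    }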