I have a matrix A and a right-hand side vector y, expressed in terms of fractions.Fraction objects:
import fractions
import numpy as np

A = np.zeros((3, 3), dtype=fractions.Fraction)
y = np.zeros((3, 1), dtype=fractions.Fraction)
for i in range(3):
    for j in range(3):
        A[i, j] = fractions.Fraction(np.random.randint(0, 4), np.random.randint(1, 6))
    y[i] = fractions.Fraction(np.random.randint(0, 4), np.random.randint(1, 6))
I would like to solve the system A*x = y using numpy's built-in functions and get the result expressed as Fraction objects, but unfortunately the basic x = np.linalg.solve(A, y) returns the result as standard floating-point values:
>>> np.linalg.solve(A, y)
array([[-1.5245283 ],
       [ 2.36603774],
       [ 0.56352201]])
Is there a way to get an accurate result with fraction objects?
EDIT
What I would like to do is simply not feasible with numpy's built-in functions (as of version 1.10 - see Mad Physicist's answer). What can be done is to implement your own linear solver based on Gaussian elimination, which relies only on addition, subtraction, multiplication and division, all of which are defined and performed exactly on Fraction objects (their numerators and denominators are Python ints, so they can grow arbitrarily large).
If you are really interested in this, implementing such a solver yourself is easy and quick (follow one of the many guides on the Internet). It does not interest me enough, so I will stick with the floating-point result.
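For reference, here is a minimal sketch of what such an exact solver could look like. It works on plain Python lists of Fraction rather than numpy arrays, and solve_exact is a name I made up for illustration, not a library function:

```python
from fractions import Fraction

def solve_exact(A, y):
    """Solve A*x = y exactly using Gaussian elimination with partial pivoting.

    A is an n x n list of lists of Fraction, y a list of n Fractions.
    Returns the solution as a list of Fractions.
    """
    n = len(A)
    # Build an augmented matrix [A | y] so row operations update y alongside A.
    M = [[Fraction(v) for v in row] + [Fraction(y[i])] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: choose the row with the largest pivot in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot][col] == 0:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back-substitution on the upper-triangular system.
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x
```

Since every intermediate value stays a Fraction, the result is exact; the cost is that numerators and denominators can grow large during elimination.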