np.polynomial.polynomial.polyval is a great (and convenient) way to evaluate polynomial fits efficiently.
However, if the fastest option is what you are looking for, building the matrix of polynomial inputs yourself and using raw matrix multiplication gives noticeably faster evaluation (about 4x in the timings below).
Setup
Using the same setup as above, we fit 100 quadratic polynomials, one per column of y.
>>> import numpy as np
>>> num_samples = 100000
>>> num_lines = 100
>>> x = np.random.randint(0, 100, num_samples)
>>> y = np.random.randint(0, 100, (num_samples, num_lines))
>>> fit = np.polyfit(x, y, deg=2)
>>> xx = np.random.randint(0, 100, num_samples * 10)
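As a quick check on the shapes, fit stacks one column of quadratic coefficients per fitted line (this is what np.polyfit returns when y is 2-D):

>>> fit.shape  # (deg + 1) coefficient rows, one column per fitted line
(3, 100)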
NumPy polyval function
res1 = np.polynomial.polynomial.polyval(xx, fit)
Raw matrix multiplication
inputs = np.array([np.power(xx, d) for d in range(len(fit))])
res2 = fit.T.dot(inputs)
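As a sanity check (a minimal sketch, nothing is timed here), the two approaches agree with each other because both treat row d of fit as the coefficient of x**d. Note that np.polyfit returns coefficients highest degree first, so to evaluate the polynomials that were actually fitted you would reverse the coefficient rows in both cases:

# Both approaches interpret row d of `fit` as the coefficient of x**d,
# so they produce the same (100, 1000000) array.
assert np.allclose(res1, res2)

# np.polyfit orders coefficients highest degree first; reverse the rows
# to evaluate the fitted polynomials themselves.
res1_fit = np.polynomial.polynomial.polyval(xx, fit[::-1])
res2_fit = fit[::-1].T.dot(inputs)
assert np.allclose(res1_fit, res2_fit)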
In terms of performance
Using the same setup as above ...
%timeit _ = np.polynomial.polynomial.polyval(xx, fit)
1 loop, best of 3: 247 ms per loop

%timeit inputs = np.array([np.power(xx, d) for d in range(len(fit))]); _ = fit.T.dot(inputs)
10 loops, best of 3: 72.8 ms per loop
To beat a dead horse ...

Average efficiency: ~3.61x faster. Speed fluctuations are probably due to random processes running in the background.
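For a rough reproduction outside IPython, here is a minimal self-contained sketch using the standard timeit module (exact numbers will of course differ by machine and NumPy version):

import timeit
import numpy as np

num_samples, num_lines = 100000, 100
x = np.random.randint(0, 100, num_samples)
y = np.random.randint(0, 100, (num_samples, num_lines))
fit = np.polyfit(x, y, deg=2)
xx = np.random.randint(0, 100, num_samples * 10)

def use_polyval():
    # evaluate all fitted polynomials via np.polynomial.polynomial.polyval
    return np.polynomial.polynomial.polyval(xx, fit)

def use_matmul():
    # build the powers-of-xx matrix and multiply by the coefficient matrix
    inputs = np.array([np.power(xx, d) for d in range(len(fit))])
    return fit.T.dot(inputs)

t1 = min(timeit.repeat(use_polyval, number=1, repeat=3))
t2 = min(timeit.repeat(use_matmul, number=1, repeat=3))
print("polyval: %.3f s, matmul: %.3f s, speedup: %.2fx" % (t1, t2, t1 / t2))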