Agreeing with Chris Muller, I would also use scipy, but scipy.optimize.curve_fit. The code looks like this (with placeholder data standing in for your baskets and scaling_factor arrays):
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.optimize import curve_fit

    # placeholder data -- replace with your own baskets / scaling_factor arrays
    baskets = np.array([475, 108, 2, 38, 320])
    scaling_factor = np.array([95.5, 57.7, 1.4, 21.9, 88.8])

    # defining your fit function
    def func(x, a, b, c):
        return a - b * np.exp(c * x)

    # let us guess some start values
    initialGuess = [100, 100, -.01]
    guessedFactors = [func(x, *initialGuess) for x in baskets]

    # fitting the data
    popt, pcov = curve_fit(func, baskets, scaling_factor, p0=initialGuess)

    # preparing data for showing the fit
    basketCont = np.linspace(min(baskets), max(baskets), 50)
    fittedData = [func(x, *popt) for x in basketCont]

    # the three sets of data to plot
    fig, ax = plt.subplots()
    ax.plot(baskets, scaling_factor, linestyle='', marker='o',
            color='r', label="data")
    ax.plot(baskets, guessedFactors, linestyle='', marker='^',
            color='b', label="initial guess")
    ax.plot(basketCont, fittedData, linestyle='-', color='#900000',
            label="fit with ({0:0.2g},{1:0.2g},{2:0.2g})".format(*popt))

    # putting the covariance matrix nicely into a table
    tab = [['{:.2g}'.format(j) for j in i] for i in pcov]
    the_table = plt.table(cellText=tab, colWidths=[0.2] * 3,
                          loc='upper right', bbox=[0.483, 0.35, 0.5, 0.25])
    plt.text(250, 65, 'covariance:', size=12)

    # done
    ax.legend(loc=0)
    plt.show()
In the end, this gives you a plot of the data, the initial guess, and the fitted curve, with the covariance matrix displayed as a table inside the axes.
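As a possible follow-up (not part of the original answer): since curve_fit also returns the covariance matrix pcov, rough one-sigma uncertainties for the fitted parameters can be read off its diagonal, assuming the fit is well-behaved:

    # approximate one-sigma uncertainties of a, b, c from the diagonal of pcov
    perr = np.sqrt(np.diag(pcov))
    for name, value, err in zip("abc", popt, perr):
        print("{} = {:.3g} +/- {:.2g}".format(name, value, err))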