I am trying to fit a sum of Gaussians using scikit-learn, because its GaussianMixture seems much more reliable than curve_fit.
Problem: it does not seem able to handle a truncated distribution, even for a single Gaussian peak:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from sklearn import mixture

clf = mixture.GaussianMixture(n_components=1, covariance_type='full')
data = np.random.randn(10000)
clf.fit(data.reshape(-1, 1))  # sklearn expects a 2-D array of samples
rangeMin = int(np.floor(data.min()))
rangeMax = int(np.ceil(data.max()))
plt.hist(data, range=(rangeMin, rangeMax), density=True)  # `normed` is deprecated, use `density`
x = np.linspace(rangeMin, rangeMax)
# mlab.normpdf was removed from matplotlib; scipy.stats.norm.pdf is equivalent
plt.plot(x, norm.pdf(x, clf.means_[0, 0], np.sqrt(clf.covariances_[0, 0, 0])))
This fits the full distribution fine. But if I change
data = np.random.randn(10000)
to
data = np.random.randn(10000); data = data[data < 0]
to truncate the distribution, the fitted Gaussian no longer matches the histogram. Any ideas how to handle the truncation properly?
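In case it helps frame the question: a minimal sketch of one possible approach for the single-peak case is direct maximum-likelihood fitting of a truncated normal with scipy, where the truncation point (here called `cut`, a hypothetical name) is assumed known:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
full = rng.normal(loc=0.0, scale=1.0, size=10000)
cut = 0.0                 # assumed known truncation point
data = full[full < cut]   # only the left part of the peak survives

def nll(params):
    """Negative log-likelihood of a normal truncated above at `cut`."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    z = (data - mu) / sigma
    log_pdf = stats.norm.logpdf(z) - np.log(sigma)        # log N(mu, sigma) at each point
    log_mass = stats.norm.logcdf((cut - mu) / sigma)      # probability mass below `cut`
    return -(log_pdf - log_mass).sum()                    # renormalize by the kept mass

res = optimize.minimize(nll, x0=[data.mean(), data.std()], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```

This recovers mu close to 0 and sigma close to 1 from only the truncated half, whereas a plain Gaussian fit to the same data would not. It does not extend to a sum of truncated Gaussians by itself, which is the part I am asking about.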
Note: the distribution is not necessarily cut at its center; anywhere between 50% and 100% of the full distribution may remain.
I would also be happy if someone could point me to alternative packages. I have only tried curve_fit, but could not get it to do anything useful once more than two peaks were involved.