Fourier transform / iterative deconvolution fitting in Python using numpy / scipy

I would like to fit fluorescence lifetime curves. These curves are given by the convolution of the instrument response function (IRF, assumed to be Gaussian) with a (multi-)exponential decay:

I(t) = (G * F)(t) = integral from 0 to t of G(t - t') F(t') dt'

where G is the Gaussian and F the exponential decay. I tried implementing this in Python using lmfit.minimize, with the following functions:

```python
import math
import scipy.integrate

def gaus(t, gamp, gwidth, gtoff):
    return gamp * math.exp(-(t - gtoff)**2 / gwidth)

def exp1(t, expamp, exptime, exptoff):
    # decay needs the minus sign in the exponent
    return expamp * math.exp(-(t - exptoff) / exptime)

def func1(t, gamp, gwidth, gtoff, expamp1, exptime1, exptoff1):
    # convolution (G*F)(ti) = integral over Tpr of G(ti - Tpr) * F(Tpr)
    return [scipy.integrate.quad(
                lambda Tpr: gaus(ti - Tpr, gamp, gwidth, gtoff)
                            * exp1(Tpr, expamp1, exptime1, exptoff1),
                0, ti)[0]
            for ti in t]
```

Here gaus describes the Gaussian instrument response function and exp1 is a single-exponential decay. func1 computes the convolution of the two using scipy.integrate.quad; its return values are used to compute the difference between fit and data for a given set of parameters:

```python
def fitfunc(params, x, data):
    # ..getting parameters part here..
    values = func1(x, gamp, gwidth, toff, expamp1, exptime1, toff)
    return np.asarray(values) - data

result = lmfit.minimize(fitfunc, fit_params, args=(t, test))
```

Although this works, the fitting process is very slow and has not yet given me a reasonable fit.

There seem to be several ways to avoid the explicit convolution, using Fourier transforms or iterative deconvolution. Origin appears to be able to do this, but I find the procedures hard to follow.

As far as I can tell, iterative deconvolution works by deconvolving the signal with a guessed Gaussian, fitting the result, and then adjusting the Gaussian.

The Fourier-transform approach would rely on the fact that convolution in real space corresponds to multiplication in Fourier space, which should reduce the computation time. I assume this means Fourier-transforming the signal and the model, multiplying, and then transforming back.
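To make sure I understand the principle, here is a quick numerical check of the convolution theorem (arbitrary toy parameters, zero-padding to avoid wrap-around):

```python
import numpy as np

t = np.linspace(0, 10, 256)
dt = t[1] - t[0]
sigma, tau = 0.2, 2.0

irf = np.exp(-(t - 1.0)**2 / (2 * sigma**2))    # Gaussian IRF
decay = np.exp(-t / tau)                        # single-exponential decay

n = 2 * len(t)                                  # zero-pad: circular conv == linear conv
direct = np.convolve(irf, decay)[:len(t)] * dt  # direct convolution
via_fft = np.real(np.fft.ifft(np.fft.fft(irf, n)
                              * np.fft.fft(decay, n)))[:len(t)] * dt

assert np.allclose(direct, via_fft)             # same result, FFT route is O(n log n)
```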

I would appreciate pointers on how to implement these methods in Python with numpy / scipy. Iterative deconvolution is perhaps the easier one, so maybe I should start with that. On the other hand, from what I have read, the Fourier method should be faster and more reliable. However, I do not understand how the fitting would be performed on the Fourier-transformed result.

1 answer

Some comments and thoughts:

1) In the above implementation, gtoff and exptoff are redundant, and so are gamp and expamp. Set, for example, exptoff = 0 and expamp = 1.

2) If all you need is the convolution of multi-exponential decays with a Gaussian kernel, use the analytical result, which is easily and efficiently evaluated using scipy.special.erf: the convolution of exp(-t/tau) (for t >= 0) with the normalized Gaussian 1/(sqrt(2*pi)*sigma) * exp(-t^2/(2*sigma^2)) is

1/2 * exp((s2 - 2*t*tau)/(2*tau^2)) * (1 + erf((t*tau - s2)/(sqrt(2)*tau*sigma)))

where s2 = sigma^2.

This directly replaces your definitions of exp1 and gaus.
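A sketch of this closed form with scipy.special.erf (function name and parameter order are mine; sigma is the Gaussian width, tau the decay time):

```python
import numpy as np
from scipy.special import erf

def exp_conv_gauss(t, tau, sigma):
    """Analytic convolution of exp(-t/tau)*heaviside(t) with a
    unit-area Gaussian of width sigma centred at 0."""
    s2 = sigma**2
    return 0.5 * np.exp((s2 - 2.0 * t * tau) / (2.0 * tau**2)) \
               * (1.0 + erf((t * tau - s2) / (np.sqrt(2.0) * tau * sigma)))
```

For t well past the IRF this reduces to exp(sigma^2/(2*tau^2)) * exp(-t/tau), i.e. the pure decay with a constant prefactor, as expected.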

3) You can efficiently convolve vectors with np.convolve (or scipy.signal.fftconvolve, which uses the FFT method mentioned in your post). For your specific application, the Gaussian must be supplied in wrap-around order; see Numerical Recipes, chapter 13.1, for the details (otherwise your result will be shifted in time).
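A minimal sketch of what "wrap-around order" means here (all numbers are arbitrary demonstration values): the kernel's peak sits at index 0 and its negative-time half at the end of the array, so circular convolution via the FFT does not delay the signal.

```python
import numpy as np

n, dt = 256, 0.05
sigma, tau = 0.2, 2.0
t = np.arange(n) * dt

k = np.arange(n)
t_wrap = np.where(k < n // 2, k, k - n) * dt   # [0, dt, ..., -2*dt, -dt]
kernel = np.exp(-t_wrap**2 / (2 * sigma**2))
kernel /= kernel.sum()                         # discrete normalization

decay = np.exp(-t / tau)
conv = np.real(np.fft.ifft(np.fft.fft(decay) * np.fft.fft(kernel)))
# conv peaks at the same index as decay, instead of being delayed
# by half the kernel length as with a kernel centred mid-array
```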

4) The usual approach is iterative re-convolution with a Gaussian kernel that is typically held fixed. Deconvolution, although possible, is ill-posed.
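A minimal sketch of such a re-convolution fit on synthetic data (made-up parameters, scipy.optimize.least_squares in place of lmfit): on every iteration the trial decay is re-convolved with the fixed IRF before being compared with the data.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n, dt = 512, 0.02
t = np.arange(n) * dt

irf = np.exp(-(t - 1.0)**2 / (2 * 0.05**2))     # fixed Gaussian IRF
irf /= irf.sum()

def model(amp, tau):
    decay = amp * np.exp(-t / tau)
    # zero-padded linear convolution, truncated back to the data length
    return np.convolve(irf, decay)[:n]

data = model(1.0, 0.7) + rng.normal(0, 0.005, n)  # synthetic "measurement"

def residuals(p):
    return model(p[0], p[1]) - data

fit = least_squares(residuals, x0=[0.5, 1.5],
                    bounds=([0.0, 0.01], [np.inf, np.inf]))
amp_fit, tau_fit = fit.x                          # should recover ~1.0 and ~0.7
```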

5) There is a comprehensive package for fitting time-resolved fluorescence data that implements everything you need: trfit


Source: https://habr.com/ru/post/1480126/

