Computing filter(b, a, x, zi) using FFT

I would like to try to compute y = filter(b, a, x, zi) and dy[i]/dx[j] using FFTs rather than in the time domain, for a possible speedup in a GPU implementation.

I am not sure whether this is possible, especially when zi is nonzero. I looked at how scipy.signal.lfilter in scipy and filter in Octave are implemented. Both work directly in the time domain: scipy uses direct form II and Octave direct form I (from looking at the code in DLD-FUNCTIONS/filter.cc). I have not seen an FFT-based implementation analogous to fftfilt for FIR filters in MATLAB (i.e. a = [1.]).

I tried doing y = ifft(fft(b) / fft(a) * fft(x)), but it seems conceptually wrong. Also, I am not sure how to handle the initial transient given by zi. Any references to an existing implementation would be appreciated.

Code example

import numpy as np
import scipy.signal as sg
import matplotlib.pyplot as plt

# create an IIR lowpass filter
N = 5
b, a = sg.butter(N, .4)
MN = max(len(a), len(b))

# create a random signal to be filtered
T = 100
P = T + MN - 1
x = np.random.randn(T)
zi = np.zeros(MN-1)

# time domain filter
ylf, zo = sg.lfilter(b, a, x, zi=zi)

# frequency domain filter
af = np.fft.fft(a, P)
bf = np.fft.fft(b, P)
xf = np.fft.fft(x, P)
yfft = np.real(np.fft.ifft(bf / af * xf))[:T]

# error
print(np.linalg.norm(yfft - ylf))

# plot, note error is larger at beginning and with larger N
plt.figure(1)
plt.clf()
plt.plot(ylf)
plt.plot(yfft)
plt.show()
+3
3 answers

You can reduce the error in your existing implementation by replacing P = T + MN - 1 with P = T + 2*MN - 1. This is purely intuitive, but it seems to me that the division of bf by af requires 2*MN terms because of wraparound.
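Applying that suggestion to the question's snippet gives a self-contained comparison of the two padding lengths (variable names follow the question; this only measures the error for both choices of P, it is not an analytical justification):

import numpy as np
import scipy.signal as sg

# same setup as in the question
N = 5
b, a = sg.butter(N, .4)
MN = max(len(a), len(b))
T = 100
x = np.random.randn(T)

# time-domain reference
ylf, _ = sg.lfilter(b, a, x, zi=np.zeros(MN - 1))

# compare the error for the original padding and the suggested longer one
for P in (T + MN - 1, T + 2*MN - 1):
    yfft = np.real(np.fft.ifft(np.fft.fft(b, P) / np.fft.fft(a, P) * np.fft.fft(x, P)))[:T]
    print(P, np.linalg.norm(yfft - ylf))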

C.S. Burrus has a fairly terse writeup on block implementations of digital filters, covering both the FIR and the IIR case. I have not worked through it in detail, but it may show how the transient (the filter state) of an IIR filter can be handled when the filter is applied block by block.
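This is not the frequency-domain block method itself, but as a point of reference, here is a minimal time-domain sketch of the state hand-off that any block formulation of an IIR filter has to reproduce; it relies only on lfilter's zi/zf mechanism:

import numpy as np
import scipy.signal as sg

b, a = sg.butter(5, .4)
x = np.random.randn(1000)
nstate = max(len(a), len(b)) - 1

# reference: filter the whole signal in one call
y_ref, _ = sg.lfilter(b, a, x, zi=np.zeros(nstate))

# block processing: the final state of one block seeds the next block
zi = np.zeros(nstate)
pieces = []
for start in range(0, len(x), 128):
    yb, zi = sg.lfilter(b, a, x[start:start + 128], zi=zi)
    pieces.append(yb)
y_blocks = np.concatenate(pieces)

print(np.allclose(y_ref, y_blocks))  # True: carrying the state makes the blocks seamless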

+2

In case it is useful, sedit.py and frequency.py at http://jc.unternet.net/src/ show an example of this kind of frequency-domain processing.

+1

Have a look at scipy.signal.lfiltic(b, a, y, x=None), which builds the initial conditions zi from past output and input samples (a small usage sketch follows the docstring below).

From the lfiltic docstring:

Given a linear filter (b,a) and initial conditions on the output y
and the input x, return the initial conditions on the state vector zi
which is used by lfilter to generate the output given the input.

If M=len(b)-1 and N=len(a)-1.  Then, the initial conditions are given
in the vectors x and y as

x = {x[-1],x[-2],...,x[-M]}
y = {y[-1],y[-2],...,y[-N]}

If x is not given, its initial conditions are assumed zero.
If either vector is too short, then zeros are added
  to achieve the proper length.

The output vector zi contains

zi = {z_0[-1], z_1[-1], ..., z_K-1[-1]}  where K=max(M,N).
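For illustration, a minimal sketch of rebuilding zi from past samples so that lfilter continues a previously started sequence (only the documented lfiltic/lfilter pairing is used here):

import numpy as np
import scipy.signal as sg

b, a = sg.butter(5, .4)
x = np.random.randn(200)

# reference: filter everything in one call
y_ref = sg.lfilter(b, a, x)

# pretend the first 100 samples were already processed; rebuild the state
# from the most recent past samples (newest first, as the docstring requires)
split = 100
zi = sg.lfiltic(b, a, y_ref[split-1::-1], x[split-1::-1])

# continue filtering the remainder with the reconstructed state
y_rest, _ = sg.lfilter(b, a, x[split:], zi=zi)
print(np.allclose(y_ref[split:], y_rest))  # True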
+1

Source: https://habr.com/ru/post/1779801/

