Indeed, refining the value of pi does not solve your problem, even though it improves the accuracy of the 20th derivative. The problem is that small errors accumulate as the derivative filter is applied repeatedly. To limit this problem, a low-pass filter can be introduced, along with the use of quadruple precision. Introducing the concept of condition number helps to understand how errors get magnified and, accordingly, how to design the filter. Nevertheless, computing the 20th derivative remains a nightmare, because the problem is ill-conditioned: it is simply out of reach, even for the cleanest experimental inputs...
1. Small errors always exist.
A derivative filter, also called a ramp filter, amplifies high frequencies more than low ones. The slightest error at high frequencies is greatly inflated as the ramp filter is applied repeatedly.
Look for the small initial errors by printing the frequencies:
printf("---- out ---- data '(%d) %d = %20g %20g \n", t,i, creal(out[i]), cimag(out[i]) );
With pi=3.14159265359, you get:
---- out ---- data '(0) 0 = -2.06712e-13 0
---- out ---- data '(0) 1 = 6.20699e-13 -4
---- out ---- data '(0) 2 = -2.06823e-13 2.92322e-13
---- out ---- data '(0) 3 = -2.07053e-13 1.03695e-13
---- out ---- data '(0) 4 = -2.06934e-13 0
---- out ---- data '(0) 5 = -2.07053e-13 -1.03695e-13
---- out ---- data '(0) 6 = -2.06823e-13 -2.92322e-13
---- out ---- data '(0) 7 = 6.20699e-13 4
Due to the missing digits of pi, small nonzero values appear at all frequencies, and these values build up as the derivative is taken.
With pi=M_PI these initial errors are smaller, but still nonzero:
---- out ---- data '(0) 0 = 1.14424e-17 0
---- out ---- data '(0) 1 = -4.36483e-16 -4
---- out ---- data '(0) 2 = 1.22465e-16 -1.11022e-16
---- out ---- data '(0) 3 = 1.91554e-16 -4.44089e-16
---- out ---- data '(0) 4 = 2.33487e-16 0
---- out ---- data '(0) 5 = 1.91554e-16 4.44089e-16
---- out ---- data '(0) 6 = 1.22465e-16 1.11022e-16
---- out ---- data '(0) 7 = -4.36483e-16 4
These small errors accumulate just like the previous ones, so the problem is not completely resolved. Let's try to zero these frequencies during the first cycle:
if (t == 0){
    for (k = 0; k < nx; k++){
        if (k == 1 || nx - k == 1){
            out[k] = I*4.0;
        } else {
            out[k] = 0.0;
        }
    }
}
This time, the only nonzero frequencies in the first t=0 cycle are the correct ones. Let's look at the second cycle:
---- out ---- data '(1) 0 = 0 0
---- out ---- data '(1) 1 = -4 0
---- out ---- data '(1) 2 = 0 0
---- out ---- data '(1) 3 = -4.44089e-16 0
---- out ---- data '(1) 4 = 0 0
---- out ---- data '(1) 5 = 4.44089e-16 0
---- out ---- data '(1) 6 = 0 0
---- out ---- data '(1) 7 = 4 0
Due to finite-precision arithmetic in the backward/forward DFTs and in the rescaling, small errors appear and accumulate. Again.
2. Filtering can be introduced to limit the growth of errors.
Most experimental inputs are corrupted by significant high-frequency noise, which can be reduced by applying a low-pass filter such as a Butterworth filter. See https://www.hindawi.com/journals/ijbi/2011/693795/ for more details. This filter is characterized by a cutoff frequency kc and an exponent n, and the frequency response of the ramp filter is modified as follows:
// parameters of the Butterworth filter:
double kc = 3;
double n = 16;
// only needed once, hence outside the loop
for (k = 0; k < nx; k++){
    if (k < nx/2){
        // add low-pass filter:
        kx[k] = I*2*pi*k/xf;
        kx[k] *= 1./(1.+pow(k/kc,n));
    } else if (k > nx/2){
        kx[k] = I*2*pi*(k-nx)/xf;
        kx[k] *= 1./(1.+pow((nx-k)/kc,n));
    } else if (k == nx/2){
        kx[k] = 0.0;
    }
}
Using these parameters, the error on the 20th derivative decreases from 5.27e-7 to 1.22e-12.
Another improvement is possible if you do not go back to real space between derivatives, thus avoiding some floating-point rounding errors. In this particular case, zeroing the spurious input frequencies ensures that the error remains zero, but it is a bit artificial... From a practical point of view, if the input signal is provided in real space, using a filter to compute the derivatives is almost mandatory.
3. The error grows due to the condition number of the derivative filter
The derivative is a linear operator and is therefore characterized by a condition number. Say the input signal suffers from an error eps at all frequencies. If the first frequency is amplified by a factor alpha, then frequency k is amplified by a factor k*alpha. Hence, each time the derivative is applied, the signal-to-noise ratio is divided by kc (the highest frequency), which is the condition number. If the filter is applied 20 times, the signal-to-noise ratio is divided by kc^20.
Double-precision numbers are about eps=10e-14 accurate: this is the best signal-to-noise ratio you can get! Most experimental inputs are much worse. For instance, a grayscale image is often sampled using 16 bits = 65536 gray levels. As a result, such an image is at best eps=1/65536 accurate. Similarly, a typical sound bit depth is 24, which corresponds to eps=1/2^24 = 6e-8. Quadruple precision, featuring eps=1e-34, can be considered for nearly analytical inputs... Let's find the highest frequency kc such that kc^20 * eps < 1:
eps=10e-14  : kc=5
eps=1/65536 : kc=1
eps=1/2^24  : kc=2
eps=1e-34   : kc=44
Therefore, if the input signal is in double precision, only the first 4 frequencies of the 20th derivative are significant in the best case... All frequencies above the 4th should be damped by a strong low-pass filter. Alternatively, quadruple precision is worth considering: see the fftw documentation for compiling fftw with gcc's quad-precision type __float128, linked against quadmath. If the input is an image, computing the 20th derivative is simply out of reach: none of the frequencies would ever be significant!