I am working on a tool that compares two wave files for similarity. For example, I have a wave file one minute long, and I create a second file from it in which every alternate 5-second interval is zeroed out. My software should then report a signal difference in the intervals 5-10 s, 15-20 s, 25-30 s, and so on.
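To make the approach concrete, here is a minimal sketch of the kind of block-wise FFT comparison I mean, assuming mono 16-bit PCM wave files; the function names, threshold, and reporting format are only illustrative, not my actual code.

```python
import wave
import numpy as np

def read_mono_16bit(path):
    """Read a mono 16-bit PCM wave file into a float array plus its sample rate."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        data = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    return data.astype(np.float64), rate

def compare_wavs(path_a, path_b, fft_len=128, threshold=1e-3):
    """Split both files into fft_len-sample blocks, FFT each block,
    and return the time ranges (in seconds) where the spectra differ."""
    a, rate = read_mono_16bit(path_a)
    b, _ = read_mono_16bit(path_b)
    n_blocks = min(len(a), len(b)) // fft_len
    diffs = []
    for i in range(n_blocks):
        seg_a = a[i * fft_len:(i + 1) * fft_len]
        seg_b = b[i * fft_len:(i + 1) * fft_len]
        spec_a = np.abs(np.fft.rfft(seg_a))
        spec_b = np.abs(np.fft.rfft(seg_b))
        # Flag the block if the normalised spectral difference exceeds the
        # threshold (illustrative metric only).
        if np.linalg.norm(spec_a - spec_b) > threshold * (np.linalg.norm(spec_a) + 1e-12):
            diffs.append((i * fft_len / rate, (i + 1) * fft_len / rate))
    return diffs

# e.g. compare_wavs("original.wav", "modified.wav", fft_len=128)
```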
At the moment, with the initial implementation, this works fine. Below are three test cases:
I have two wave files sampled at 960 Hz, mono, with 138551 data samples each (around 1 min 12 sec). I use a 128-point FFT (the file is split into 128-sample blocks) and the results are good.
When I run the same algorithm on wave files sampled at 48 kHz, 2-channel, with 6927361 data samples per channel (around 2 min 24 sec), the process becomes too slow. With a 4096-point FFT it performs better.
But a 4096-point FFT on 22050 Hz files, 2-channel, with 55776 data samples per channel (around 0.6 sec), gives very poor results. In that case a 128-point FFT gives good results.
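To put these cases on a common footing, here is the time span covered by one FFT block in each setup; this is just arithmetic on the numbers above (block duration = FFT length / sample rate), not part of my tool.

```python
# Block duration in seconds = FFT length / sample rate
# (numbers taken from the three test cases above).
cases = [
    ("960 Hz, 128-point FFT",    128 / 960),     # ~0.133 s per block
    ("48 kHz, 4096-point FFT",   4096 / 48000),  # ~0.085 s per block
    ("22050 Hz, 4096-point FFT", 4096 / 22050),  # ~0.186 s per block
    ("22050 Hz, 128-point FFT",  128 / 22050),   # ~0.006 s per block
]
for name, seconds in cases:
    print(f"{name}: {seconds:.3f} s per block")
```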
So I am confused about how to determine the FFT length so that the results are good in every case. I assume the length should depend on the number of samples and the sampling rate. Please share your thoughts on this.
thanks