I am trying to implement a simple gap removal algorithm using MTAudioProcessingTap.
In my process() callback I call MTAudioProcessingTapGetSourceAudio() to pull the source audio. However, once I remove a gap I need more audio to fill the output buffer, and calling GetSourceAudio() a second time just gives me exactly the same input samples again.
If I return fewer than numberFrames frames from process(), iOS pads the remainder with silence, which doesn't work for my application: I want the audio to skip ahead in time, not go silent.
If I request fewer than numberFrames frames from GetSourceAudio(), it only ever returns the first n frames of the current block, never the later ones.
So:
- Is there a way to get MTAudioProcessingTap to skip ahead in time, or is it really strictly one frame in, one frame out?
- If you cannot request extra data from GetSourceAudio(), and not consuming all of the audio leads to gaps in the output, what is the point of being able to specify a frame count at all? And why do we even need to call GetSourceAudio() ourselves if its arguments must always match exactly what process() was given?
September 30th Update: I have since switched to TheAmazingAudioEngine, which happily provides as much audio as I ask for. I'm still puzzled by the design of MTAudioProcessingTap, though.