How to use an iOS AudioUnit render callback correctly

I am writing an iOS application that will play sound instructions as one of its functions.

Each time the application needs to play a sound, it reads a non-standard file format and puts the resulting PCM data for that audio into a buffer in memory.

Even though I already have this buffer of PCM data, I am having trouble getting the application to actually play it. After searching the iOS documentation, I started implementing an AudioUnit. The problem with this AudioUnit is its use of a render callback (as far as I know, the only way to output sound with it). From the Apple Developer Documentation:

... render callbacks have a strict performance requirement that you must adhere to. A render callback lives on a real-time priority thread on which subsequent render calls arrive asynchronously. The work you do in the body of a render callback takes place in this time-constrained environment. If your callback is still producing sample frames in response to the previous render call when the next render call arrives, you get a gap in the sound. For this reason you must not take locks, allocate memory, access the file system or a network connection, or otherwise perform time-consuming tasks in the body of a render callback function.

If I cannot take locks inside the render callback, then I cannot safely read from the buffer while another thread is writing to it. That seems to leave no way to read the file and fill the buffer, because the render callback will be accessing it constantly.

The only example I found actually generated the PCM data inside the render callback itself, which I cannot do.

Is this (an asynchronous render callback) the only way to use AudioUnits?

Is there an alternative to playing PCM data from memory?

1 answer

Using the RemoteIO Audio Unit may require a separate data queue (a FIFO or circular buffer) outside the audio unit callback, which pre-buffers enough audio data from the file reads, ahead of the audio unit render callback, to cover worst-case latencies. Then the render callback only needs to do a quick copy of the audio data, and afterwards update a write-only flag indicating that the audio data has been consumed.
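A minimal sketch in C of that idea, assuming mono 16-bit linear PCM and a single-producer/single-consumer ring buffer with atomic indices (RingBuffer, gRing and RING_CAPACITY are illustrative names, not part of any Apple API):

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdatomic.h>
#include <stdint.h>

// Hypothetical single-producer/single-consumer ring buffer of 16-bit samples.
// The file-reading thread writes (produces), the render callback reads (consumes).
#define RING_CAPACITY (64 * 1024)   // power of two, so index wraparound stays consistent

typedef struct {
    int16_t          samples[RING_CAPACITY];
    _Atomic uint32_t head;   // next index the render callback will read
    _Atomic uint32_t tail;   // next index the file reader will write
} RingBuffer;

static RingBuffer gRing;     // pre-filled from the file before playback starts

// Render callback: no locks, no allocation, just a quick copy out of the ring buffer.
static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    int16_t *out = (int16_t *)ioData->mBuffers[0].mData;
    uint32_t head = atomic_load_explicit(&gRing.head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&gRing.tail, memory_order_acquire);
    uint32_t available = tail - head;          // samples ready to be played

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        if (i < available) {
            out[i] = gRing.samples[(head + i) % RING_CAPACITY];
        } else {
            out[i] = 0;                        // underrun: output silence
        }
    }

    uint32_t consumed = (available < inNumberFrames) ? available : inNumberFrames;
    // Publishing the new head is the "data consumed" signal for the producer.
    atomic_store_explicit(&gRing.head, head + consumed, memory_order_release);
    return noErr;
}
```

The callback would be attached to the RemoteIO unit with an AURenderCallbackStruct via AudioUnitSetProperty (kAudioUnitProperty_SetRenderCallback), and the file-reading code on a normal thread keeps advancing gRing.tail so the callback rarely runs dry.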

An alternative built into iOS is to use the Audio Queue API instead, which does the pre-buffering for you. It allows your app to fill a number of larger audio buffers ahead of time in the main run loop. You still have to pre-buffer enough data to cover the maximum of file, network, lock, or other latencies.
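A rough sketch of that approach, again assuming 16-bit mono 44.1 kHz linear PCM; CopyNextPCMChunk is a hypothetical stand-in for whatever copies data out of the in-memory PCM buffer, and error handling is omitted:

```c
#include <AudioToolbox/AudioToolbox.h>

// Hypothetical source of already-decoded PCM data (not an Apple API).
extern size_t CopyNextPCMChunk(void *dst, size_t maxBytes);

#define NUM_BUFFERS  3
#define BUFFER_BYTES (32 * 1024)

// Audio Queue output callback: refill the buffer that just finished playing and
// hand it back to the queue. This runs on a normal (non real-time) thread, so
// slower work is far less dangerous here than in a render callback.
static void QueueCallback(void *userData, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    size_t bytes = CopyNextPCMChunk(buffer->mAudioData, BUFFER_BYTES);
    buffer->mAudioDataByteSize = (UInt32)bytes;
    if (bytes > 0) {
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }
}

static void StartPlayback(void)
{
    // Assuming 16-bit signed, mono, 44.1 kHz linear PCM to match the in-memory data.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    fmt.mBitsPerChannel   = 16;
    fmt.mChannelsPerFrame = 1;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    AudioQueueRef queue;
    AudioQueueNewOutput(&fmt, QueueCallback, NULL, NULL, NULL, 0, &queue);

    // Pre-fill several buffers before starting so playback has headroom.
    for (int i = 0; i < NUM_BUFFERS; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, BUFFER_BYTES, &buffer);
        QueueCallback(NULL, queue, buffer);
    }
    AudioQueueStart(queue, NULL);
}
```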

One other strategy is to have alternative audio data to feed the real-time render callback if the file or network read has not kept up, such as quickly creating an audio buffer that tapers to silence (and then un-tapers when real data starts arriving again).
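For illustration, a sketch of such a taper that a render callback could apply after it runs out of real samples (TaperToSilence is an illustrative helper, not an Apple API):

```c
#include <stdint.h>

// If the file/network reader has fallen behind, fade the remainder of the output
// buffer from the last real sample down to silence, so the underrun sounds like a
// brief dip instead of a click.
static void TaperToSilence(int16_t *out, uint32_t framesFilled, uint32_t framesTotal)
{
    if (framesFilled == 0 || framesFilled >= framesTotal) return;

    int16_t  last    = out[framesFilled - 1];       // last real sample value
    uint32_t fadeLen = framesTotal - framesFilled;  // frames to fade over

    for (uint32_t i = 0; i < fadeLen; i++) {
        // Linearly scale the last sample toward zero across the remaining frames.
        out[framesFilled + i] =
            (int16_t)((int32_t)last * (int32_t)(fadeLen - 1 - i) / (int32_t)fadeLen);
    }
}
```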


Source: https://habr.com/ru/post/1382925/

