I am writing an iOS application that plays audio instructions as one of its features.
Every time the application wants to play a sound, it reads a non-standard file format and puts the resulting PCM data for that audio into a buffer in memory.
Even though I have this buffer with the PCM data, I am having trouble getting the application to actually play the sound. After searching the iOS documentation, I started implementing an AudioUnit. The problem with this AudioUnit is the use of the render callback (as far as I know, the only way to output sound). From the Apple Developer Documentation:
... render callbacks have a strict performance requirement that you must adhere to. A render callback lives on a real-time priority thread on which subsequent render calls arrive asynchronously. The work you do in the body of a render callback takes place in this time-constrained environment. If your callback is still producing sample frames in response to the previous render call when the next render call arrives, you get a gap in the sound. For this reason you must not take locks, allocate memory, access the file system or a network connection, or otherwise perform time-consuming tasks in the body of a render callback function.
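For reference, the callback that quote is describing is a plain C function with the AURenderCallback signature, attached to the output unit. A minimal sketch (outputUnit is assumed to be an already-configured RemoteIO unit; nothing here is from the question itself):

```c
#include <AudioUnit/AudioUnit.h>

// Signature of an AudioUnit render callback (AURenderCallback). The system
// calls this on its real-time audio thread; ioData must be filled with
// inNumberFrames frames of PCM before returning.
static OSStatus RenderCallback(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    // All the restrictions quoted above apply inside this body:
    // no locks, no allocation, no file or network I/O.
    return noErr;
}

// Registering it on the input scope of the output unit:
// AURenderCallbackStruct cb = { RenderCallback, NULL /* refCon */ };
// AudioUnitSetProperty(outputUnit, kAudioUnitProperty_SetRenderCallback,
//                      kAudioUnitScope_Input, 0, &cb, sizeof(cb));
```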
If I cannot use locks inside the render callback, then I cannot safely read from the buffer while another thread is writing to it. That seems to rule out reading the file and filling the buffer at all, because the render callback will be accessing it continuously.
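For what it's worth, the usual way around this (my assumption, not something from the question) is a single-producer/single-consumer ring buffer with atomic indices: the file-reading thread only advances the write index, the render callback only advances the read index, and neither side ever takes a lock. A minimal C11 sketch (PCMRing and both functions are hypothetical names):

```c
#include <stdatomic.h>
#include <stdint.h>

#define RING_CAPACITY (1u << 16)  // power of two, so masking replaces modulo

typedef struct {
    int16_t          samples[RING_CAPACITY];
    _Atomic uint32_t writeIndex;  // advanced only by the file-reading thread
    _Atomic uint32_t readIndex;   // advanced only by the render callback
} PCMRing;

// Producer side (file-reading thread): copy as many samples as fit.
static uint32_t ring_write(PCMRing *r, const int16_t *src, uint32_t count) {
    uint32_t w  = atomic_load_explicit(&r->writeIndex, memory_order_relaxed);
    uint32_t rd = atomic_load_explicit(&r->readIndex, memory_order_acquire);
    uint32_t space = RING_CAPACITY - (w - rd);  // indices wrap; difference is safe
    if (count > space) count = space;
    for (uint32_t i = 0; i < count; i++)
        r->samples[(w + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store_explicit(&r->writeIndex, w + count, memory_order_release);
    return count;
}

// Consumer side (render callback): wait-free, no locks, no allocation.
static uint32_t ring_read(PCMRing *r, int16_t *dst, uint32_t count) {
    uint32_t rd = atomic_load_explicit(&r->readIndex, memory_order_relaxed);
    uint32_t w  = atomic_load_explicit(&r->writeIndex, memory_order_acquire);
    uint32_t avail = w - rd;
    if (count > avail) count = avail;
    for (uint32_t i = 0; i < count; i++)
        dst[i] = r->samples[(rd + i) & (RING_CAPACITY - 1)];
    atomic_store_explicit(&r->readIndex, rd + count, memory_order_release);
    return count;
}
```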
The only example I found actually generated the PCM data inside the render callback itself, which I cannot do.
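That example's pattern can be inverted, though: instead of synthesizing samples, the callback copies whatever is already in memory and pads with silence on underrun rather than blocking. A sketch assuming 16-bit mono PCM and the hypothetical PCMRing above:

```c
#include <string.h>

static OSStatus RenderFromMemory(void                       *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp       *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList            *ioData)
{
    PCMRing *ring = (PCMRing *)inRefCon;          // passed via inputProcRefCon
    int16_t *out  = (int16_t *)ioData->mBuffers[0].mData;

    // Pull whatever PCM is available; never waits on the producer.
    uint32_t got = ring_read(ring, out, inNumberFrames);

    // Underrun: fill the remainder with silence instead of blocking.
    if (got < inNumberFrames)
        memset(out + got, 0, (inNumberFrames - got) * sizeof(int16_t));
    return noErr;
}
```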
Is an asynchronous render callback the only way to use AudioUnits?
Is there an alternative for playing PCM data from memory?