If the task is to capture audio from an AUGraph, the critical part of the code comes down, more or less, to this minimal single-channel demo:
    #include <AudioToolbox/AudioToolbox.h>
    #include <string.h>

    OSStatus MyRenderProc(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
    {
        Float32 buf[inNumberFrames];
        // A render notification fires both before and after the unit
        // renders; the samples in ioData are only valid post-render.
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            memcpy(buf, ioData->mBuffers[0].mData,
                   inNumberFrames * sizeof(Float32));
            // ... hand buf off for saving (see below) ...
        }
        return noErr;
    }
The callback can then be installed in setupAUGraph():
    void setupAUGraph(MyMIDIPlayer *player)
    {
        // ... the usual AUGraph setup: create the graph, add the nodes,
        // open it, fetch the units, connect the nodes, initialize ...

        // Attach the render notification to the upstream unit;
        // player->instrumentUnit is assumed to be that AudioUnit.
        OSStatus err = AudioUnitAddRenderNotify(player->instrumentUnit,
                                                MyRenderProc,
                                                player); // passed as inRefCon
        if (err != noErr) {
            // handle the error
        }

        // ... start the graph ...
    }
Note that the render callback "taps" the connection between the output of the upstream (instrument) node and the input of the output node, capturing whatever comes through from upstream. The callback simply copies ioData into another buffer, which can then be saved. As far as I know, this is the easiest way to get at ioData, and it works without breaking the API.
Also, when testing whether this works for a specific implementation, be careful not to call Objective-C methods inside the callback: no messing around with NSArrays, adding objects, and so on. Inside a real-time callback, anything beyond plain C introduces the risk of priority-inversion problems that can become very hard to debug later. The Core Audio API is written in plain C for a reason: much of what Obj-C does under the hood (locks, memory management, etc.) cannot happen on the real-time thread without risking glitches. So it is safer to keep Obj-C out of the real-time thread; one way to do the handoff in plain C is sketched below.
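As a concrete illustration (my sketch, not from the original answer), here is a minimal single-producer/single-consumer ring buffer in plain C: the render proc pushes samples with RingWrite, and an ordinary thread drains them with RingRead, where Obj-C is safe again. Overflow handling is omitted for brevity.

    #include <stdatomic.h>
    #include <stdint.h>

    #define RING_CAPACITY 16384   /* frames; power of two for cheap wrapping */

    typedef struct {
        float            data[RING_CAPACITY]; /* Float32 is a typedef for float */
        _Atomic uint32_t head;                /* advanced by the render thread */
        _Atomic uint32_t tail;                /* advanced by the consumer thread */
    } SampleRing;

    /* Called from the render callback: no locks, no allocation, no Obj-C. */
    static void RingWrite(SampleRing *ring, const float *src, uint32_t n) {
        uint32_t head = atomic_load_explicit(&ring->head, memory_order_relaxed);
        for (uint32_t i = 0; i < n; i++)
            ring->data[(head + i) & (RING_CAPACITY - 1)] = src[i];
        atomic_store_explicit(&ring->head, head + n, memory_order_release);
    }

    /* Called from an ordinary thread; returns how many frames were read. */
    static uint32_t RingRead(SampleRing *ring, float *dst, uint32_t max) {
        uint32_t tail  = atomic_load_explicit(&ring->tail, memory_order_relaxed);
        uint32_t head  = atomic_load_explicit(&ring->head, memory_order_acquire);
        uint32_t avail = head - tail;         /* wrap-safe unsigned difference */
        uint32_t n     = avail < max ? avail : max;
        for (uint32_t i = 0; i < n; i++)
            dst[i] = ring->data[(tail + i) & (RING_CAPACITY - 1)];
        atomic_store_explicit(&ring->tail, tail + n, memory_order_release);
        return n;
    }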
Hope this helps.