AudioTimeStamp format + MusicDeviceMIDIEvent

Can I get a little help with this?

In my test project I have AUSampler -> MixerUnit -> ioUnit, with a render callback configured. Everything works. I use the MusicDeviceMIDIEvent method, as defined in MusicDevice.h, to play MIDI noteOn and noteOff. So in the hack code below, a noteOn fires for 0.5 seconds, every 2 seconds.

MusicDeviceMIDIEvent (below) takes a parameter, inOffsetSampleFrame, for scheduling an event at a future time. What I would like to do is fire the noteOn and schedule the noteOff at the same time (without the elapsed-time check I hack in below). I just don't understand what the value of inOffsetSampleFrame should be (e.g., to play a .5 second or .2 second note); in other words, I don't get the fundamentals of audio timing.
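
For what it's worth, the duration arithmetic itself seems straightforward (a sketch assuming my graph's 44100.0 Hz sample rate): a duration in seconds times the sample rate gives a count in frames.

 // seconds -> sample frames at a 44100.0 Hz sample rate (my graph's rate)
 Float64 sampleRate = 44100.0;
 UInt32 halfSecondFrames  = (UInt32)(0.5 * sampleRate); // 22050 frames
 UInt32 fifthSecondFrames = (UInt32)(0.2 * sampleRate); //  8820 frames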

So, if someone could walk through the arithmetic for deriving correct values from the incoming AudioTimeStamp, that would be great! Also, please correct/clarify any of these:

  • AudioTimeStamp->mSampleTime - sampleTime = the time of the current sample "slice"? Is this in milliseconds? (See the conversion sketch after this list.)

  • AudioTimeStamp->mHostTime - ? Host = the machine the app is running on, and this is time (in milliseconds?) since the machine booted? This seems HUGE. Doesn't it roll over and cause problems?

  • inNumberFrames - this seems to be 512 on iOS 5 (set via kAudioUnitProperty_MaximumFramesPerSlice). So the slice is made up of 512 frames?

  • I've seen lots of warnings not to overload the render callback thread - in particular, to avoid Objective-C calls - I understand the reasoning, but how then do you message the UI or do other processing? (See the polling sketch after my code below.)
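
On the first two bullets: from what I've read, mSampleTime counts frames (not milliseconds) at the stream's sample rate, and mHostTime counts host-clock ticks since boot, convertible via mach_timebase_info. A minimal sketch of both conversions (please correct me if this is wrong):

 #include <mach/mach_time.h>

 // mHostTime is in host-clock "ticks" since boot, not milliseconds;
 // convert ticks -> nanoseconds -> seconds with mach_timebase_info
 Float64 HostTicksToSeconds(UInt64 hostTicks)
 {
     static mach_timebase_info_data_t timebase;
     if (timebase.denom == 0) {
         (void)mach_timebase_info(&timebase);
     }
     return (Float64)hostTicks * timebase.numer / timebase.denom / 1e9;
 }

 // mSampleTime is a running frame count, so seconds = frames / sampleRate
 Float64 SampleTimeToSeconds(Float64 sampleTime, Float64 sampleRate)
 {
     return sampleTime / sampleRate;
 }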

I guess that's it. Thanks for bearing with me!

inOffsetSampleFrame: If you are scheduling the MIDI event from the audio unit's render thread, then you can supply a sample offset that the audio unit may apply when applying that event in its next audio unit render. This allows you to schedule to the sample the time at which the MIDI command is applied, and is particularly important when starting new notes. If you are not scheduling in the audio unit's render thread, then you should set this value to 0.

// MusicDeviceMIDIEvent definition (from MusicDevice.h):

 extern OSStatus MusicDeviceMIDIEvent(MusicDeviceComponent inUnit,
                                      UInt32               inStatus,
                                      UInt32               inData1,
                                      UInt32               inData2,
                                      UInt32               inOffsetSampleFrame);
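
My reading of the docs above is that, when called from the render thread, the last argument offsets the event within the slice about to be rendered, e.g. (illustrative values):

 // from inside the render callback: start note 60 exactly 100 frames
 // into the upcoming slice (values are illustrative)
 UInt32 noteOnCommand = kMIDIMessage_NoteOn << 4 | 0; // channel 0
 MusicDeviceMIDIEvent(mySynthUnit, noteOnCommand, 60, 120, 100);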

// my callback

 OSStatus MyCallback(void                       *inRefCon,
                     AudioUnitRenderActionFlags *ioActionFlags,
                     const AudioTimeStamp       *inTimeStamp,
                     UInt32                      inBusNumber,
                     UInt32                      inNumberFrames,
                     AudioBufferList            *ioData)
 {
     Float64 sampleTime = inTimeStamp->mSampleTime;
     UInt64  hostTime   = inTimeStamp->mHostTime;

     // hack: bounce out to Objective-C - exactly what the render-thread
     // warnings say to avoid
     [(__bridge Audio *)inRefCon audioEvent:sampleTime andHostTime:hostTime];

     return noErr; // was `return 1`; a render callback should return noErr
 }

// Obj-C method

 - (void)audioEvent:(Float64)sampleTime andHostTime:(UInt64)hostTime
 {
     OSStatus result  = noErr;
     Float64  nowTime = sampleTime / self.graphSampleRate; // sample rate: 44100.0

     // every 2 seconds: noteOn (channel 0, note 60, velocity 120)
     if (nowTime - lastTime > 2) {
         UInt32 noteCommand = kMIDIMessage_NoteOn << 4 | 0;
         result = MusicDeviceMIDIEvent(mySynthUnit, noteCommand, 60, 120, 0);
         lastTime = sampleTime / self.graphSampleRate;
     }

     // 0.5 seconds after the noteOn: noteOff (re-sent on every callback
     // until the next noteOn - part of the hack)
     if (nowTime - lastTime > .5) {
         UInt32 noteCommand = kMIDIMessage_NoteOff << 4 | 0;
         result = MusicDeviceMIDIEvent(mySynthUnit, noteCommand, 60, 0, 0);
     }
 }
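
Regarding the last bullet above: the pattern I've seen suggested, instead of the Obj-C call in my callback, is to set a lock-free flag from the render thread and poll it from the main thread. A sketch (OSAtomic-based; pollAudioEvents: and noteDidFire are hypothetical names):

 #include <libkern/OSAtomic.h>

 static volatile int32_t gNoteFired = 0;

 // render thread (real-time safe) - instead of the Obj-C call above:
 //     OSAtomicCompareAndSwap32Barrier(0, 1, &gNoteFired);

 // main thread: poll from an NSTimer and touch the UI only here
 - (void)pollAudioEvents:(NSTimer *)timer
 {
     if (OSAtomicCompareAndSwap32Barrier(1, 0, &gNoteFired)) {
         [self noteDidFire]; // hypothetical UI-update method
     }
 }
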
1 answer

The answer here is that I misunderstood the purpose of inOffsetSampleFrame, despite it being aptly named. I thought I could use it to schedule a noteOff event at some arbitrary time in the future, so I wouldn't have to manage noteOffs myself, but its scope is just the current sample frame (render slice). Oh well.
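
So the workaround, as I understand it, is to track the noteOff myself across callbacks and only use inOffsetSampleFrame for the final, in-slice portion. A sketch (not from the answer itself; mySynthUnit and the frame counter are assumed to be accessible to the callback):

 // count down the note's remaining frames on each render callback
 static SInt64 framesUntilNoteOff = -1; // -1 = no noteOff pending

 // inside the render callback:
 if (framesUntilNoteOff >= 0) {
     if (framesUntilNoteOff < inNumberFrames) {
         // the noteOff lands inside this slice, so the offset is now valid
         UInt32 noteOffCommand = kMIDIMessage_NoteOff << 4 | 0;
         MusicDeviceMIDIEvent(mySynthUnit, noteOffCommand, 60, 0,
                              (UInt32)framesUntilNoteOff);
         framesUntilNoteOff = -1;
     } else {
         framesUntilNoteOff -= inNumberFrames;
     }
 }
 // to schedule a 0.5 s note at 44100.0 Hz:
 //     framesUntilNoteOff = (SInt64)(0.5 * 44100.0); // 22050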


Source: https://habr.com/ru/post/1399617/

