iOS - play streaming (MP3) audio with effects

I am new to iOS audio technology.

I am developing an application that plays streaming audio (MP3), and I plan to add some effects, such as the iPod Equalizer and pan control.

What is the best way to achieve this?

I tried Matt Gallagher's AudioStreamer ( http://cocoawithlove.com/2008/09/streaming-and-playing-live-mp3-stream.html ) and was able to play streaming audio, but I was not sure how to add effects using Audio Queue.

From the Apple documentation, I understand that Audio Units can be used to add effects, but the stream format has to be Linear PCM.
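
As far as I can tell from the docs, the canonical Linear PCM format the units expect would look roughly like this (8.24 fixed-point, non-interleaved; this is just my reading, so correct me if I'm wrong):

    // canonical Linear PCM stream format for iOS Audio Units, as I understand it
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 44100.0;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
    asbd.mChannelsPerFrame = 2;
    asbd.mFramesPerPacket  = 1;
    asbd.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
    asbd.mBytesPerFrame    = sizeof(AudioUnitSampleType);
    asbd.mBytesPerPacket   = sizeof(AudioUnitSampleType);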

Basically I want to add effects and play streaming audio.

Now I'm confused.

Can someone point me in the right direction? Any help is much appreciated.

Thanks,

Sasikumar

2 answers

I think you should definitely use Audio Units.

See how simple it is:

1) Create the audio unit descriptions

    // OUTPUT unit
    AudioComponentDescription iOUnitDescription;
    iOUnitDescription.componentType = kAudioUnitType_Output;
    iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
    iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    iOUnitDescription.componentFlags = 0;
    iOUnitDescription.componentFlagsMask = 0;

    // MIXER unit
    AudioComponentDescription MixerUnitDescription;
    MixerUnitDescription.componentType = kAudioUnitType_Mixer;
    MixerUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
    MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    MixerUnitDescription.componentFlags = 0;
    MixerUnitDescription.componentFlagsMask = 0;

    // PLAYER unit
    AudioComponentDescription playerUnitDescription;
    playerUnitDescription.componentType = kAudioUnitType_Generator;
    playerUnitDescription.componentSubType = kAudioUnitSubType_AudioFilePlayer;
    playerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    playerUnitDescription.componentFlags = 0;
    playerUnitDescription.componentFlagsMask = 0;

    // EQ unit
    AudioComponentDescription EQUnitDescription;
    EQUnitDescription.componentType = kAudioUnitType_Effect;
    EQUnitDescription.componentSubType = kAudioUnitSubType_AUiPodEQ;
    EQUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    EQUnitDescription.componentFlags = 0;
    EQUnitDescription.componentFlagsMask = 0;

etc.
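
The FXUnitDescription and VFXUnitDescription used in step 2 would be built the same way; the subtype below is just an assumption (any kAudioUnitType_Effect subtype would do):

    // hypothetical FX unit description - any effect subtype works, e.g. reverb
    AudioComponentDescription FXUnitDescription;
    FXUnitDescription.componentType = kAudioUnitType_Effect;
    FXUnitDescription.componentSubType = kAudioUnitSubType_Reverb2;
    FXUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    FXUnitDescription.componentFlags = 0;
    FXUnitDescription.componentFlagsMask = 0;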

2) Create the nodes

    // EQ NODE
    err = AUGraphAddNode(processingGraph, &EQUnitDescription, &eqNode);
    if (err) { NSLog(@"eqNode err = %ld", (long)err); }

    // FX NODE
    err = AUGraphAddNode(processingGraph, &FXUnitDescription, &fxNode);
    if (err) { NSLog(@"fxNode err = %ld", (long)err); }

    // VFX NODE
    err = AUGraphAddNode(processingGraph, &VFXUnitDescription, &vfxNode);
    if (err) { NSLog(@"vfxNode err = %ld", (long)err); }

    // MIXER NODE
    err = AUGraphAddNode(processingGraph, &MixerUnitDescription, &mixerNode);
    if (err) { NSLog(@"mixerNode err = %ld", (long)err); }

    // OUTPUT NODE
    err = AUGraphAddNode(processingGraph, &iOUnitDescription, &ioNode);
    if (err) { NSLog(@"outputNode err = %ld", (long)err); }

    // PLAYER NODE
    err = AUGraphAddNode(processingGraph, &playerUnitDescription, &audioPlayerNode);
    if (err) { NSLog(@"audioPlayerNode err = %ld", (long)err); }
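
Note that these calls assume the graph and node variables already exist; a minimal sketch of the missing declarations:

    // create the AUGraph that the AUGraphAddNode calls above operate on
    AUGraph processingGraph;
    AUNode eqNode, fxNode, vfxNode, mixerNode, ioNode, audioPlayerNode;
    OSStatus err = NewAUGraph(&processingGraph);
    if (err) { NSLog(@"NewAUGraph err = %ld", (long)err); }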

3) Connect them

    // mic/lineIn ----> vfx bus 0
    err = AUGraphConnectNodeInput(processingGraph, ioNode, 1, vfxNode, 0);
    if (err) { NSLog(@"io->vfx err = %ld", (long)err); }

    // vfx ----> mixer
    err = AUGraphConnectNodeInput(processingGraph, vfxNode, 0, mixerNode, micBus);
    if (err) { NSLog(@"vfx->mixer err = %ld", (long)err); }

    // player ----> fx
    err = AUGraphConnectNodeInput(processingGraph, audioPlayerNode, 0, fxNode, 0);
    if (err) { NSLog(@"player->fx err = %ld", (long)err); }

    // fx ----> mixer
    err = AUGraphConnectNodeInput(processingGraph, fxNode, 0, mixerNode, filePlayerBus);
    if (err) { NSLog(@"fx->mixer err = %ld", (long)err); }

    // mixer ----> eq
    err = AUGraphConnectNodeInput(processingGraph, mixerNode, 0, eqNode, 0);
    if (err) { NSLog(@"mixer->eq err = %ld", (long)err); }

    // eq ----> output
    err = AUGraphConnectNodeInput(processingGraph, eqNode, 0, ioNode, 0);
    if (err) { NSLog(@"eq->output err = %ld", (long)err); }
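
Once the nodes are connected, you open the graph (which instantiates the units) and pull out the AudioUnit references you need for AudioUnitSetProperty calls like the one in step 4. Roughly:

    // open the graph and grab the unit references from the nodes
    err = AUGraphOpen(processingGraph);
    if (err) { NSLog(@"AUGraphOpen err = %ld", (long)err); }

    AudioUnit eqUnit, vfxUnit, ioUnit;
    err = AUGraphNodeInfo(processingGraph, eqNode, NULL, &eqUnit);
    err = AUGraphNodeInfo(processingGraph, vfxNode, NULL, &vfxUnit);
    err = AUGraphNodeInfo(processingGraph, ioNode, NULL, &ioUnit);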

4) Set up render callbacks

    // let's say, a mic input callback
    AURenderCallbackStruct lineInrCallbackStruct = {0};
    lineInrCallbackStruct.inputProc = &micLineInCallback;
    lineInrCallbackStruct.inputProcRefCon = (void *)self;
    err = AudioUnitSetProperty(vfxUnit,
                               kAudioUnitProperty_SetRenderCallback,
                               kAudioUnitScope_Global,
                               0,
                               &lineInrCallbackStruct,
                               sizeof(lineInrCallbackStruct));
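
After the callbacks are wired up, the graph still has to be initialized and started; a minimal sketch:

    // initialize and start the graph; CAShow prints the graph layout for debugging
    err = AUGraphInitialize(processingGraph);
    if (err) { NSLog(@"AUGraphInitialize err = %ld", (long)err); }
    CAShow(processingGraph);
    err = AUGraphStart(processingGraph);
    if (err) { NSLog(@"AUGraphStart err = %ld", (long)err); }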

5) Handle the sound buffers in the callback

    static OSStatus micLineInCallback(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData)
    {
        MixerHostAudio *THIS = (MixerHostAudio *)inRefCon;
        AudioUnit rioUnit = THIS.ioUnit;    // io unit which has the input data from mic/lineIn
        OSStatus renderErr;
        UInt32 bus1 = 1;                    // input bus

        AudioUnitSampleType *inSamplesLeft;     // convenience pointers to sample data
        AudioUnitSampleType *inSamplesRight;

        int isStereo;           // C boolean - for deciding how many channels to process
        int numberOfChannels;   // 1 = mono, 2 = stereo

        // SInt16 buffers to hold sample data after conversion
        SInt16 *sampleBufferLeft = THIS.conversionBufferLeft;
        SInt16 *sampleBufferRight = THIS.conversionBufferRight;

        // start the actual processing
        numberOfChannels = THIS.displayNumberOfInputChannels;
        isStereo = numberOfChannels > 1 ? 1 : 0;    // decide stereo or mono

        // copy all the input samples to the callback buffer - after this point
        // we could bail and have a pass-through
        renderErr = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp, bus1,
                                    inNumberFrames, ioData);
        if (renderErr < 0) {
            return renderErr;
        }

        // do something with ioData, like getting the left and right channels
        inSamplesLeft = (AudioUnitSampleType *)ioData->mBuffers[0].mData;   // left channel
        fixedPointToSInt16(inSamplesLeft, sampleBufferLeft, inNumberFrames);

        if (isStereo) {
            inSamplesRight = (AudioUnitSampleType *)ioData->mBuffers[1].mData;  // right channel
            fixedPointToSInt16(inSamplesRight, sampleBufferRight, inNumberFrames);
        }

        return noErr;
    }
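
fixedPointToSInt16 is a helper from the MixerHost/AudioGraph family of sample code, not a system call; assuming the canonical 8.24 fixed-point format, it can be as simple as:

    // convert 8.24 fixed-point samples (AudioUnitSampleType) to SInt16
    // by dropping the 9 lowest fractional bits
    static void fixedPointToSInt16(AudioUnitSampleType *source,
                                   SInt16 *target,
                                   UInt32 length)
    {
        for (UInt32 i = 0; i < length; i++) {
            target[i] = (SInt16)(source[i] >> 9);
        }
    }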

I found out about this while researching great Apple docs like

Apple Audio MixerHost App

Apple Audio Programming Guide

AudioGraph is the most comprehensive code example and piece of "unofficial" documentation you can find in the real world for AudioUnit programming.

Hope this helps, good luck!


Check out Pure Data for audio processing; libpd is the iOS version of it.
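
A minimal sketch of setting up libpd's Objective-C layer (the patch name patch.pd here is hypothetical; your Pd patch would do the effects processing):

    // set up libpd audio - PdAudioController wraps the Core Audio session
    PdAudioController *audioController = [[PdAudioController alloc] init];
    [audioController configurePlaybackWithSampleRate:44100
                                      numberChannels:2
                                        inputEnabled:NO
                                       mixingEnabled:YES];

    // open a Pd patch bundled with the app (patch.pd is an assumed name)
    [PdBase openFile:@"patch.pd"
                path:[[NSBundle mainBundle] resourcePath]];

    audioController.active = YES;   // start audio processing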

