Play audio on an iPhone at a given frequency and decibel level

I have been exploring how to play an audio signal on an iPhone at a frequency and decibel level that I specify.

Links I talked about:

http://developer.apple.com/library/ios/#samplecode/MusicCube/Introduction/Intro.html#//apple_ref/doc/uid/DTS40008978

http://www.politepix.com/2010/06/18/decibel-metering-from-an-iphone-audio-unit/

http://atastypixel.com/blog/using-remoteio-audio-unit/

I could not find how to reproduce a sound at a particular frequency and level using an AudioUnit.

I also used Flite for text-to-speech in my application.

Can anyone tell me whether it is possible to play an audio signal on an iPhone at a given frequency and decibel level using Flite?

I know that Flite creates an audio file from the input (taking into account only the pitch, variance, speed, and the given string) and plays it back with an AudioPlayer after creation.

But it does not provide methods for setting the frequency and decibel level directly!

So can anyone suggest a good way to do this on the iPhone?

Any help on this is appreciated.

thanks

1 answer

This class lets you play an audio signal at a given frequency and with a given amplitude. It uses AudioQueues from AudioToolbox.framework. It is just a sketch and much could be refined, but the signal-generation mechanism works.

Usage is quite simple once you look at the @interface .

#import <AudioToolbox/AudioToolbox.h>

#define TONE_SAMPLERATE 44100.

@interface Tone : NSObject {
    AudioQueueRef queue;
    AudioQueueBufferRef buffer;
    BOOL rebuildBuffer;
}

@property (nonatomic, assign) NSUInteger frequency;
@property (nonatomic, assign) CGFloat dB;

- (void)play;
- (void)pause;

@end

@implementation Tone

@synthesize dB=_dB, frequency=_frequency;

void handleBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer);

#pragma mark - Initialization and deallocation -

- (id)init
{
    if ((self=[super init])) {
        _dB=0.;
        _frequency=440;
        rebuildBuffer=YES;

        // TO DO: handle AudioQueueXYZ failures!!

        // create a descriptor containing a LPCM, mono, float format
        AudioStreamBasicDescription desc;
        desc.mSampleRate=TONE_SAMPLERATE;
        desc.mFormatID=kAudioFormatLinearPCM;
        desc.mFormatFlags=kLinearPCMFormatFlagIsFloat;
        desc.mBytesPerPacket=sizeof(float);
        desc.mFramesPerPacket=1;
        desc.mBytesPerFrame=sizeof(float);
        desc.mChannelsPerFrame=1;
        desc.mBitsPerChannel=8*sizeof(float);

        // create a new queue
        AudioQueueNewOutput(&desc,
                            &handleBuffer,
                            self,
                            CFRunLoopGetCurrent(),
                            kCFRunLoopCommonModes,
                            0,
                            &queue);

        // and its buffer, ready to hold 1" of data
        AudioQueueAllocateBuffer(queue,
                                 sizeof(float)*TONE_SAMPLERATE,
                                 &buffer);

        // fill the buffer and enqueue it
        handleBuffer(self, queue, buffer);
    }
    return self;
}

- (void)dealloc
{
    AudioQueueStop(queue, YES);
    AudioQueueFreeBuffer(queue, buffer);
    AudioQueueDispose(queue, YES);
    [super dealloc];
}

#pragma mark - Main function -

void handleBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    // this function takes care of building the buffer and enqueuing it.

    // cast inUserData to Tone
    Tone *tone=(Tone *)inUserData;

    // check if the buffer must be rebuilt
    if (tone->rebuildBuffer) {
        // precompute some useful qtys
        float *data=inBuffer->mAudioData;
        NSUInteger max=inBuffer->mAudioDataBytesCapacity/sizeof(float);

        // multiplying the argument by 2pi changes the period of the cosine
        // function to 1s (instead of 2pi). then we must divide by the sample
        // rate to get TONE_SAMPLERATE samples in one period.
        CGFloat unit=2.*M_PI/TONE_SAMPLERATE;
        // this is the amplitude converted from dB to a linear scale
        CGFloat amplitude=pow(10., tone.dB*.05);

        // loop and simply set data[i] to the value of cos(...)
        for (NSUInteger i=0; i<max; ++i)
            data[i]=(float)(amplitude*cos(unit*(CGFloat)(tone.frequency*i)));

        // inform the queue that we have filled the buffer
        inBuffer->mAudioDataByteSize=sizeof(float)*max;

        // and clear the flag
        tone->rebuildBuffer=NO;
    }

    // reenqueue the buffer
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);

    /* TO DO: the transition between two adjacent buffers (the same one
       actually) generates a "tick", even if the adjacent buffers represent
       a continuous signal. maybe using two buffers instead of one would
       fix it. */
}

#pragma mark - Properties and methods -

- (void)play
{
    // generate an AudioTimeStamp with "0" simply!
    // (copied from FillOutAudioTimeStampWithSampleTime)
    AudioTimeStamp time;
    time.mSampleTime=0.;
    time.mRateScalar=0.;
    time.mWordClockTime=0.;
    memset(&time.mSMPTETime, 0, sizeof(SMPTETime));
    time.mFlags = kAudioTimeStampSampleTimeValid;

    // TO DO: maybe it could be useful to check AudioQueueStart return value
    AudioQueueStart(queue, &time);
}

- (void)pause
{
    // TO DO: maybe it could be useful to check AudioQueuePause return value
    AudioQueuePause(queue);
}

- (void)setFrequency:(NSUInteger)frequency
{
    if (_frequency!=frequency) {
        _frequency=frequency;
        // we need to update the buffer (as soon as it stops playing)
        rebuildBuffer=YES;
    }
}

- (void)setDB:(CGFloat)dB
{
    if (dB!=_dB) {
        _dB=dB;
        // we need to update the buffer (as soon as it stops playing)
        rebuildBuffer=YES;
    }
}

@end
  • The class generates a cosine waveform oscillating at the given integer frequency ( amplitude * cos(2*pi*frequency*t) ); all the work is done in void handleBuffer(...) , using an AudioQueue with a linear PCM, mono, float format at 44.1 kHz. To change the waveform you can simply change that line. For example, the following code produces a square wave:

     float x = fmodf(unit*(CGFloat)(tone.frequency*i), 2*M_PI);
     data[i] = amplitude * (x > M_PI ? -1.0 : 1.0);
  • For floating-point frequencies, keep in mind that one second of audio data does not necessarily contain a whole number of oscillations, so the signal is discontinuous at the junction between two buffers and produces a strange "tick". For example, you could use fewer samples so that the junction falls at the end of a signal period.

  • As Paul P noted, you should first calibrate against the hardware to get a reliable conversion between the value you set in your implementation and the sound produced by the device. Actually, the floating-point samples generated by this code range from -1 to 1, so I simply converted the amplitude value to dB ( 20 * log_10(amplitude) ).
  • See the comments for a few other details of the implementation and its "known limitations" (all the "TO DO"s). The functions used are well documented by Apple in their reference documentation.

Source: https://habr.com/ru/post/917583/

