I am writing an iPhone application that lets the user create a sound filter and test it on recorded audio. This is what I am trying to do:
- I create two audio files, "recordedAudio.aiff" and "filteredAudio.aiff"
- I record sound with the microphone and save it to "recordedAudio.aiff"
- I read the audio data from "recordedAudio.aiff" into a buffer
- Later I will apply real filtering to the data in that buffer; for now, as a test, I just halve the value of every sample (which should simply halve the volume), to make sure I can manipulate individual samples
- I write the result into a second buffer
- I write the contents of that buffer to the second file, "filteredAudio.aiff"
- I play the second file
The problem is this: as long as I simply copy the data from one buffer to the other and then write it to the second audio file, everything works fine. But as soon as I perform any operation on the samples (for example, dividing them by 2), the result is nothing but random noise. This makes me suspect that I am not interpreting the audio data correctly, but I have been trying for five days now and I just don't get it. If you have any idea how to access and manipulate individual audio samples, please help me out, I would really appreciate it! Thanks!
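To make the symptom concrete, here is a minimal sketch of the two cases (it uses the same inputBuffer / outputBuffer / numberOfPackets / numberOfBytesRead names as the full code further down):

// Case 1: plain copy -- the output file plays back exactly like the recording
memcpy(outputBuffer, inputBuffer, numberOfBytesRead);

// Case 2: touch the samples in any way -- the output file is just noise
for (int i = 0; i < numberOfPackets; i++) {
    outputBuffer[i] = inputBuffer[i] / 2;
}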
This is the code that will later do the filtering (for now it should just divide every sample by 2):
OSStatus status = noErr;
UInt32 propertySizeDataPacketCount;
UInt32 writabilityDataPacketCount;
UInt32 numberOfPackets;
UInt32 propertySizeMaxPacketSize;
UInt32 writabilityMaxPacketSize;
UInt32 maxPacketSize;
UInt32 numberOfBytesRead;
UInt32 numberOfBytesToWrite;
UInt32 propertySizeDataByteCount;
SInt64 currentPacket;
double x0;
double x1;

// Open the recorded file for reading and the filtered file for writing
status = AudioFileOpenURL(audioFiles->recordedFile, kAudioFileReadPermission, kAudioFileAIFFType, &audioFiles->inputFile);
status = AudioFileOpenURL(audioFiles->filteredFile, kAudioFileReadWritePermission, kAudioFileAIFFType, &audioFiles->outputFile);

// How many packets does the input file contain?
status = AudioFileGetPropertyInfo(audioFiles->inputFile, kAudioFilePropertyAudioDataPacketCount, &propertySizeDataPacketCount, &writabilityDataPacketCount);
status = AudioFileGetProperty(audioFiles->inputFile, kAudioFilePropertyAudioDataPacketCount, &propertySizeDataPacketCount, &numberOfPackets);

// What is the maximum packet size?
status = AudioFileGetPropertyInfo(audioFiles->inputFile, kAudioFilePropertyMaximumPacketSize, &propertySizeMaxPacketSize, &writabilityMaxPacketSize);
status = AudioFileGetProperty(audioFiles->inputFile, kAudioFilePropertyMaximumPacketSize, &propertySizeMaxPacketSize, &maxPacketSize);

SInt16 *inputBuffer  = (SInt16 *)malloc(numberOfPackets * maxPacketSize);
SInt16 *outputBuffer = (SInt16 *)malloc(numberOfPackets * maxPacketSize);

// Read all packets from the recorded file into the input buffer
currentPacket = 0;
status = AudioFileReadPackets(audioFiles->inputFile, false, &numberOfBytesRead, NULL, currentPacket, &numberOfPackets, inputBuffer);

// The "filter": halve every sample
for (int i = 0; i < numberOfPackets; i++) {
    x0 = (double)inputBuffer[i];
    x1 = 0.5 * x0;
    outputBuffer[i] = (SInt16)x1;
}

// Write the processed samples to the output file, then clean up
numberOfBytesToWrite = numberOfBytesRead;
currentPacket = 0;
status = AudioFileWritePackets(audioFiles->outputFile, false, numberOfBytesToWrite, NULL, currentPacket, &numberOfPackets, outputBuffer);

AudioFileClose(audioFiles->inputFile);
AudioFileClose(audioFiles->outputFile);
free(inputBuffer);
free(outputBuffer);
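One thing I have started to wonder about, although I am not sure it is the real problem: the file format I create (see below) uses kLinearPCMFormatFlagIsBigEndian, since AIFF stores its samples big-endian, while the iPhone's ARM CPU is little-endian. If that is what is biting me, each sample would have to be byte-swapped before the arithmetic and swapped back afterwards, roughly like this (a sketch using the OSSwap macros from <libkern/OSByteOrder.h>):

#include <libkern/OSByteOrder.h>

for (int i = 0; i < numberOfPackets; i++) {
    // reinterpret the big-endian AIFF sample in the CPU's native byte order
    SInt16 sample = (SInt16)OSSwapBigToHostInt16((UInt16)inputBuffer[i]);
    sample = sample / 2;                               // the actual "filter"
    // convert back to big-endian before it goes into the AIFF output file
    outputBuffer[i] = (SInt16)OSSwapHostToBigInt16((UInt16)sample);
}

Is that the right way to think about it, or is the problem somewhere else entirely?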
To create audio files, I use the following code:
#import "AudioFiles.h" #define SAMPLE_RATE 44100 #define FRAMES_PER_PACKET 1 #define CHANNELS_PER_FRAME 1 #define BYTES_PER_FRAME 2 #define BYTES_PER_PACKET 2 #define BITS_PER_CHANNEL 16 @implementation AudioFiles -(void)setupAudioFormat:(AudioStreamBasicDescription *)format { format->mSampleRate = SAMPLE_RATE; format->mFormatID = kAudioFormatLinearPCM; format->mFramesPerPacket = FRAMES_PER_PACKET; format->mChannelsPerFrame = CHANNELS_PER_FRAME; format->mBytesPerFrame = BYTES_PER_FRAME; format->mBytesPerPacket = BYTES_PER_PACKET; format->mBitsPerChannel = BITS_PER_CHANNEL; format->mReserved = 0; format->mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; } - (id)init { self = [super init]; if (self) { char path[256]; NSArray *dirPaths; NSString *docsDir; dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); docsDir = [dirPaths objectAtIndex:0]; NSString *recordedFilePath = [docsDir stringByAppendingPathComponent:@"/recordedAudio.aiff"]; [recordedFilePath getCString:path maxLength:sizeof(path) encoding:NSUTF8StringEncoding]; recordedFile = CFURLCreateFromFileSystemRepresentation(NULL, (UInt8 *)path, strlen(path), false); recordedFileURL = [NSURL fileURLWithPath:recordedFilePath]; dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); docsDir = [dirPaths objectAtIndex:0]; NSString *filteredFilePath = [docsDir stringByAppendingPathComponent:@"/filteredAudio.aiff"]; [filteredFilePath getCString:path maxLength:sizeof(path) encoding:NSUTF8StringEncoding]; filteredFile = CFURLCreateFromFileSystemRepresentation(NULL, (UInt8 *)path, strlen(path), false); filteredFileURL = [NSURL fileURLWithPath:filteredFilePath]; AudioStreamBasicDescription audioFileFormat; [self setupAudioFormat:&audioFileFormat]; OSStatus status = noErr; status = AudioFileCreateWithURL(recordedFile, kAudioFileAIFFType, &audioFileFormat, kAudioFileFlags_EraseFile, &inputFile); status = AudioFileCreateWithURL(filteredFile, kAudioFileAIFFType, &audioFileFormat, kAudioFileFlags_EraseFile, &outputFile); } return self; } @end
For recording, I use AVAudioRecorder with the following settings:
NSDictionary *recordSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
    [NSNumber numberWithFloat:8000.0],              AVSampleRateKey,
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithInt:1],                     AVNumberOfChannelsKey,
    [NSNumber numberWithInt:AVAudioQualityMax],     AVEncoderAudioQualityKey,
    [NSNumber numberWithInt:16],                    AVEncoderBitRateKey,
    [NSNumber numberWithBool:YES],                  AVLinearPCMIsBigEndianKey,
    [NSNumber numberWithBool:NO],                   AVLinearPCMIsFloatKey,
    [NSNumber numberWithInt:16],                    AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:YES],                  AVLinearPCMIsNonInterleaved,
    nil];

NSError *error = nil;
audioRecorder = [[AVAudioRecorder alloc] initWithURL:audioFiles->recordedFileURL
                                            settings:recordSettings
                                               error:&error];
if (error) {
    NSLog(@"error: %@", [error localizedDescription]);
} else {
    [audioRecorder prepareToRecord];
}
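The recording itself is then just the standard start/stop calls; as far as I can tell nothing unusual happens there:

[audioRecorder record];   // start recording into recordedAudio.aiff
// ... user speaks into the microphone ...
[audioRecorder stop];     // stop and finish writing the file

After -stop returns, I run the filtering code shown above and then play filteredAudio.aiff.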