The iOS 5.0 documentation states that the canonical audio sample type is a 16-bit signed integer (link):
The canonical audio sample type for input and output.
typedef SInt16 AudioSampleType;
Discussion
The canonical audio sample type for input and output in iPhone OS is linear PCM with 16-bit integer samples.
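For context, here is a minimal sketch of how this canonical type typically feeds into an AudioStreamBasicDescription; the sample rate and channel count below are arbitrary example values, not anything the documentation prescribes:

#include <CoreAudio/CoreAudioTypes.h>

// Sketch: describe the canonical linear-PCM stream format.
// 44100.0 Hz and mono are placeholder values.
static AudioStreamBasicDescription MakeCanonicalASBD(void) {
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 44100.0;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagsCanonical; // tracks CA_PREFER_FIXED_POINT
    asbd.mChannelsPerFrame = 1;
    asbd.mBitsPerChannel   = 8 * sizeof(AudioSampleType); // 16 when AudioSampleType is SInt16
    asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * sizeof(AudioSampleType);
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame * asbd.mFramesPerPacket;
    return asbd;
}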
However, if I right-click "go to definition" on AudioSampleType, I see the following definition in CoreAudioTypes.h:
#if !CA_PREFER_FIXED_POINT
    typedef Float32 AudioSampleType;
    typedef Float32 AudioUnitSampleType;
#else
    typedef SInt16 AudioSampleType;
    typedef SInt32 AudioUnitSampleType;
    #define kAudioUnitSampleFractionBits 24
#endif
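As an aside, kAudioUnitSampleFractionBits = 24 implies that in the fixed-point case AudioUnitSampleType is an 8.24 fixed-point value. A minimal sketch of converting to and from Float32 (the helper names are my own for illustration, not part of Core Audio):

#include <CoreAudio/CoreAudioTypes.h>

#if CA_PREFER_FIXED_POINT
// Hypothetical helpers illustrating the 8.24 fixed-point layout
// implied by kAudioUnitSampleFractionBits; not Core Audio API.
static inline SInt32 FloatToAudioUnitSample(Float32 f) {
    return (SInt32)(f * (Float32)(1 << kAudioUnitSampleFractionBits));
}

static inline Float32 AudioUnitSampleToFloat(SInt32 s) {
    return (Float32)s / (Float32)(1 << kAudioUnitSampleFractionBits);
}
#endif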
And again, when I jump to the definition of CA_PREFER_FIXED_POINT, I see:
#if !defined(CA_PREFER_FIXED_POINT)
    #if TARGET_OS_IPHONE
        #if (TARGET_CPU_X86 || TARGET_CPU_X86_64 || TARGET_CPU_PPC || TARGET_CPU_PPC64) && !TARGET_IPHONE_SIMULATOR
            #define CA_PREFER_FIXED_POINT 0
        #else
            #define CA_PREFER_FIXED_POINT 1
        #endif
    #else
        #define CA_PREFER_FIXED_POINT 0
    #endif
#endif
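One way to confirm which branch a given build takes, without running anything, is a build-time diagnostic. A sketch (#warning is a Clang/GCC extension, which Xcode's compilers support):

#include <CoreAudio/CoreAudioTypes.h>

// Emits a note into the build log showing which branch was taken.
#if CA_PREFER_FIXED_POINT
#warning CA_PREFER_FIXED_POINT is 1: AudioSampleType is SInt16
#else
#warning CA_PREFER_FIXED_POINT is 0: AudioSampleType is Float32
#endif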
Checking my code at runtime, I see that CA_PREFER_FIXED_POINT is defined as 1, both on the simulator and on my iPod.
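A check along these lines shows it (a sketch; any logging call would do):

#include <stdio.h>
#include <CoreAudio/CoreAudioTypes.h>

// Prints the macro's value and the resulting sample size.
// Per the observation above, both the simulator and the iPod
// print 1, and sizeof(AudioSampleType) is accordingly 2.
static void LogCanonicalType(void) {
    printf("CA_PREFER_FIXED_POINT = %d\n", CA_PREFER_FIXED_POINT);
    printf("sizeof(AudioSampleType) = %zu\n", sizeof(AudioSampleType));
}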
So my questions are:
- What is the canonical type? Is it always SInt16 on the device?
- In which case does the third line above (the TARGET_CPU test) evaluate to true? That is, which device runs iPhone OS and uses one of the listed processors?
- Is there a precedent for overriding CA_PREFER_FIXED_POINT to 0 (when programming for the iPhone)?