I am modifying Apple's SquareCam face-detection sample app so that, instead of drawing a red square around a detected face, it crops the face out of the image before saving it to the camera roll. I use the same CGRect for the crop that the sample uses to draw the red square, yet the behavior differs. In portrait mode, if the face is in the horizontal center of the screen, the crop captures the face as expected (the same place the red square would be). If the face is off to the left or right, the crop is always taken from the middle of the screen rather than from where the red square would be drawn.
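Concretely, where the sample draws its square image into each faceRect, I create a sub-image with the same rect instead. A minimal sketch of the swap (not my full method):

    // sketch: same faceRect the red square uses, but cropped out instead of drawn over
    CGRect faceRect = [ff bounds];   // ff is the CIFaceFeature
    CGImageRef faceImage = CGImageCreateWithImageInRect(sourceImage, faceRect);
    // ... write faceImage to the camera roll, then CGImageRelease(faceImage) ...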
Here is Apple's original code:
    - (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                           inCGImage:(CGImageRef)backgroundImage
                                     withOrientation:(UIDeviceOrientation)orientation
                                         frontFacing:(BOOL)isFrontFacing
    {
        CGImageRef returnImage = NULL;
        CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage));
        CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
        CGContextClearRect(bitmapContext, backgroundImageRect);
        CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage);
        CGFloat rotationDegrees = 0.;

        switch (orientation) {
            case UIDeviceOrientationPortrait:
                rotationDegrees = -90.;
                break;
            case UIDeviceOrientationPortraitUpsideDown:
                rotationDegrees = 90.;
                break;
            case UIDeviceOrientationLandscapeLeft:
                if (isFrontFacing) rotationDegrees = 180.;
                else rotationDegrees = 0.;
                break;
            case UIDeviceOrientationLandscapeRight:
                if (isFrontFacing) rotationDegrees = 0.;
                else rotationDegrees = 180.;
                break;
            case UIDeviceOrientationFaceUp:
            case UIDeviceOrientationFaceDown:
            default:
                break; // leave the square in its last known orientation
        }
        UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees];

        // note: the orientation switch above only rotates the overlay image;
        // the face rects themselves are used as-is
        for (CIFaceFeature *ff in features) {
            CGRect faceRect = [ff bounds];
            CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
        }
        returnImage = CGBitmapContextCreateImage(bitmapContext);
        CGContextRelease(bitmapContext);

        return returnImage;
    }
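For completeness, CreateCGBitmapContextForSize is a small helper defined elsewhere in the same sample; as far as I recall it just creates a plain premultiplied-RGBA bitmap context (reproduced from memory, so treat the details as approximate):

    static CGContextRef CreateCGBitmapContextForSize(CGSize size)
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // 4 bytes per pixel (RGBA), 8 bits per component
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     size.width,
                                                     size.height,
                                                     8,
                                                     size.width * 4,
                                                     colorSpace,
                                                     kCGImageAlphaPremultipliedLast);
        CGContextSetAllowsAntialiasing(context, NO);
        CGColorSpaceRelease(colorSpace);
        return context;
    }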
And here is my replacement:
    - (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                           inCGImage:(CGImageRef)backgroundImage
                                     withOrientation:(UIDeviceOrientation)orientation
                                         frontFacing:(BOOL)isFrontFacing
    {
        CGImageRef returnImage = NULL;
Update:
Based on Wain's input, I tried to make my code more like the original, but the result was the same:
    - (NSArray *)extractFaceImages:(NSArray *)features
                       fromCGImage:(CGImageRef)sourceImage
                   withOrientation:(UIDeviceOrientation)orientation
                       frontFacing:(BOOL)isFrontFacing
    {
        NSMutableArray *faceImages = [[[NSMutableArray alloc] initWithCapacity:1] autorelease];
        CGImageRef returnImage = NULL;
        CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage));
        CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
        CGContextClearRect(bitmapContext, backgroundImageRect);
        CGContextDrawImage(bitmapContext, backgroundImageRect, sourceImage);
        CGFloat rotationDegrees = 0.;

        switch (orientation) {
            case UIDeviceOrientationPortrait:
                rotationDegrees = -90.;
                break;
            case UIDeviceOrientationPortraitUpsideDown:
                rotationDegrees = 90.;
                break;
            case UIDeviceOrientationLandscapeLeft:
                if (isFrontFacing) rotationDegrees = 180.;
                else rotationDegrees = 0.;
                break;
            case UIDeviceOrientationLandscapeRight:
                if (isFrontFacing) rotationDegrees = 0.;
                else rotationDegrees = 180.;
                break;
            case UIDeviceOrientationFaceUp:
            case UIDeviceOrientationFaceDown:
            default:
                break;
        }
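The rest of the method (trimmed above) loops over the features and crops each face with the unmodified faceRect, roughly like this (paraphrased sketch; the save-to-camera-roll step is omitted):

        // (paraphrased continuation) crop each face using the same rect
        // the red square would have been drawn with
        for (CIFaceFeature *ff in features) {
            CGRect faceRect = [ff bounds];
            CGImageRef faceImage = CGImageCreateWithImageInRect(sourceImage, faceRect);
            [faceImages addObject:[UIImage imageWithCGImage:faceImage]];
            CGImageRelease(faceImage);
        }
        CGContextRelease(bitmapContext);
        return faceImages;
    }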
I took three shots and logged faceRect; these are the results:
Shot 1, face near the left edge of the device: the captured image misses the face entirely, landing off to its right. faceRect = {{972, 43.0312}, {673.312, 673.312}}
Shot 2, face in the middle of the device: the captured image is correct. faceRect = {{1060.59, 536.625}, {668.25, 668.25}}
Shot 3, face near the right edge of the device: the captured image misses the face entirely, landing off to its left. faceRect = {{982.125, 999.844}, {804.938, 804.938}}
So it appears that x and y are swapped: I am holding the device in portrait, yet faceRect looks like a landscape coordinate. However, I cannot figure out where Apple's original code accounts for this. The orientation handling in this method apparently affects only the red square overlay image, not the face coordinates.
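One suspicion I have, which I have not verified: the detector reports bounds in a bottom-left-origin coordinate space (which CGContextDrawImage also uses, so the red square lines up), while CGImageCreateWithImageInRect measures its rect from the image's top-left corner. Since the buffer stays landscape, its vertical axis is the portrait screen's horizontal axis, so that one flipped axis would show up as exactly the left/right mirroring described above. If that is right, something like this untested sketch would be needed before cropping:

    // untested sketch: flip the vertical origin of the rect, since
    // CGImageCreateWithImageInRect measures from the image's top-left corner
    // while the detector's bounds have a bottom-left origin
    CGRect cropRect = faceRect;
    cropRect.origin.y = CGImageGetHeight(sourceImage) - faceRect.origin.y - faceRect.size.height;
    CGImageRef faceImage = CGImageCreateWithImageInRect(sourceImage, cropRect);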