Cropping a detected face instead of drawing a red square (SquareCam)

I am modifying Apple's SquareCam face-detection sample app so that it crops the face out of the image before saving to the camera roll, instead of drawing a red square around the face. I use the same CGRect for the crop that was used to draw the red square, yet the behavior is different. In portrait mode, if the face is in the horizontal center of the screen, the face is cropped as expected (the same place the red square would have been). If the face is off to the left or right, the crop is always taken from the middle of the screen rather than from where the red square would be.

Here is Apple's original code:

    - (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                           inCGImage:(CGImageRef)backgroundImage
                                     withOrientation:(UIDeviceOrientation)orientation
                                         frontFacing:(BOOL)isFrontFacing
    {
        CGImageRef returnImage = NULL;
        CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage));
        CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
        CGContextClearRect(bitmapContext, backgroundImageRect);
        CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage);
        CGFloat rotationDegrees = 0.;

        switch (orientation) {
            case UIDeviceOrientationPortrait:
                rotationDegrees = -90.;
                break;
            case UIDeviceOrientationPortraitUpsideDown:
                rotationDegrees = 90.;
                break;
            case UIDeviceOrientationLandscapeLeft:
                if (isFrontFacing) rotationDegrees = 180.;
                else rotationDegrees = 0.;
                break;
            case UIDeviceOrientationLandscapeRight:
                if (isFrontFacing) rotationDegrees = 0.;
                else rotationDegrees = 180.;
                break;
            case UIDeviceOrientationFaceUp:
            case UIDeviceOrientationFaceDown:
            default:
                break; // leave the layer in its last known orientation
        }
        UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees];

        // features found by the face detector
        for ( CIFaceFeature *ff in features ) {
            CGRect faceRect = [ff bounds];
            NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));
            CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
        }
        returnImage = CGBitmapContextCreateImage(bitmapContext);
        CGContextRelease (bitmapContext);

        return returnImage;
    }

and my replacement:

    - (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                           inCGImage:(CGImageRef)backgroundImage
                                     withOrientation:(UIDeviceOrientation)orientation
                                         frontFacing:(BOOL)isFrontFacing
    {
        CGImageRef returnImage = NULL;

        // I'm only taking pics with one face. This is just for testing.
        for ( CIFaceFeature *ff in features ) {
            CGRect faceRect = [ff bounds];
            returnImage = CGImageCreateWithImageInRect(backgroundImage, faceRect);
        }

        return returnImage;
    }

Update:

Based on Wain's answer, I tried to make my code more like the original, but the result was the same:

    - (NSArray *)extractFaceImages:(NSArray *)features
                       fromCGImage:(CGImageRef)sourceImage
                   withOrientation:(UIDeviceOrientation)orientation
                       frontFacing:(BOOL)isFrontFacing
    {
        NSMutableArray *faceImages = [[[NSMutableArray alloc] initWithCapacity:1] autorelease];
        CGImageRef returnImage = NULL;
        CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage));
        CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
        CGContextClearRect(bitmapContext, backgroundImageRect);
        CGContextDrawImage(bitmapContext, backgroundImageRect, sourceImage);
        CGFloat rotationDegrees = 0.;

        switch (orientation) {
            case UIDeviceOrientationPortrait:
                rotationDegrees = -90.;
                break;
            case UIDeviceOrientationPortraitUpsideDown:
                rotationDegrees = 90.;
                break;
            case UIDeviceOrientationLandscapeLeft:
                if (isFrontFacing) rotationDegrees = 180.;
                else rotationDegrees = 0.;
                break;
            case UIDeviceOrientationLandscapeRight:
                if (isFrontFacing) rotationDegrees = 0.;
                else rotationDegrees = 180.;
                break;
            case UIDeviceOrientationFaceUp:
            case UIDeviceOrientationFaceDown:
            default:
                break; // leave the layer in its last known orientation
        }

        // features found by the face detector
        for ( CIFaceFeature *ff in features ) {
            CGRect faceRect = [ff bounds];
            NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));
            returnImage = CGBitmapContextCreateImage(bitmapContext);
            returnImage = CGImageCreateWithImageInRect(returnImage, faceRect);
            UIImage *clippedFace = [UIImage imageWithCGImage:returnImage];
            [faceImages addObject:clippedFace];
        }
        CGContextRelease (bitmapContext);

        return faceImages;
    }

I took three shots and logged faceRect, with these results:

The picture was taken with the face near the left edge of the device. The captured image completely misses the face, off to the right: faceRect = {{972, 43.0312}, {673.312, 673.312}}

The picture was taken with the face in the middle of the device. The captured image is good: faceRect = {{1060.59, 536.625}, {668.25, 668.25}}

The picture was taken with the face near the right edge of the device. The captured image completely misses the face, off to the left: faceRect = {{982.125, 999.844}, {804.938, 804.938}}

So it appears that x and y are swapped. I hold the device in portrait, but faceRect comes back in what looks like landscape coordinates. However, I cannot figure out where Apple's code accounts for this: the orientation handling in this method apparently affects only the red-square overlay image, not the coordinates.
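To illustrate the swap I suspect, this is the kind of coordinate rotation I experimented with. The RotateRect90CW helper below is mine, not part of the SquareCam sample, and it assumes top-left-origin coordinates and a landscape buffer rotated 90 degrees clockwise into portrait:

    // Hypothetical helper (not from the SquareCam sample): maps a rect in a
    // landscape buffer of size landscapeSize into the coordinate space of the
    // same buffer rotated 90 degrees clockwise, assuming a top-left origin.
    static CGRect RotateRect90CW(CGRect r, CGSize landscapeSize)
    {
        return CGRectMake(landscapeSize.height - CGRectGetMaxY(r), // new x
                          CGRectGetMinX(r),                        // new y
                          r.size.height,                           // width/height swap
                          r.size.width);
    }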

2 answers

You should keep all of the original code and just add one line before returning (optionally generating the image inside the loop, since you are only cropping one face):

 returnImage = CGImageCreateWithImageInRect(returnImage, faceRect); 

This lets the image be rendered in the correct orientation, which means the face rect will be in the right place.
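For concreteness, here is a sketch of how the end of the original method might look with that line applied, assuming a single face as in your tests; croppedImage is just a local name introduced here:

    // End of -newSquareOverlayedImageForFeatures:... with the crop added.
    CGRect faceRect = CGRectZero;
    for ( CIFaceFeature *ff in features ) {
        faceRect = [ff bounds];
        // The red square is still drawn here, so it will appear in the
        // crop unless this draw call is removed.
        CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
    }
    returnImage = CGBitmapContextCreateImage(bitmapContext);
    // The added line: crop from the rendered, correctly oriented image.
    CGImageRef croppedImage = CGImageCreateWithImageInRect(returnImage, faceRect);
    CGImageRelease(returnImage);
    returnImage = croppedImage;
    CGContextRelease(bitmapContext);
    return returnImage;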


You are running into this because when the image is saved, it is saved vertically flipped, so the position of faceRect no longer coincides with the face. You can solve this by flipping faceRect vertically within returnImage.

    // returnImage is assumed to hold the full-size rendered image here.
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        CGRect modifiedRect = CGRectFlipVertical(faceRect,
                                                 CGRectMake(0, 0, CGImageGetWidth(returnImage), CGImageGetHeight(returnImage)));
        returnImage = CGImageCreateWithImageInRect(returnImage, modifiedRect);
        UIImage *clippedFace = [UIImage imageWithCGImage:returnImage];
        [faceImages addObject:clippedFace];
    }

CGRectFlipVertical(CGRect innerRect, CGRect outerRect) can be defined as follows:

    CGRect CGRectFlipVertical(CGRect innerRect, CGRect outerRect)
    {
        CGRect rect = innerRect;
        rect.origin.y = outerRect.origin.y + outerRect.size.height - (rect.origin.y + rect.size.height);
        return rect;
    }
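For example (with illustrative values), a rect near the top of the outer rect flips to near the bottom:

    // Inside a 100-point-tall container, a 10-point-tall rect at y = 0
    // flips to y = 0 + 100 - (0 + 10) = 90.
    CGRect container = CGRectMake(0, 0, 100, 100);
    CGRect top = CGRectMake(20, 0, 10, 10);
    CGRect flipped = CGRectFlipVertical(top, container);
    NSLog(@"flipped=%@", NSStringFromCGRect(flipped)); // {{20, 90}, {10, 10}}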
