(howto) Get faces from the front camera input as UIImage

I am trying to scan the front camera input for faces, detect them, and get them as UIImage objects. I use AVFoundation for scanning and face detection.

Like this:

    let input = try AVCaptureDeviceInput(device: captureDevice)
    captureSession = AVCaptureSession()
    captureSession!.addInput(input)
    output = AVCaptureMetadataOutput()
    captureSession?.addOutput(output)
    output.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    output.metadataObjectTypes = [AVMetadataObjectTypeFace]
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer!)
    captureSession?.startRunning()

In the didOutputMetadataObjects delegate method, I get the face as an AVMetadataFaceObject and highlight it with a red frame, like this:

    let metadataObj = metadataObjects[0] as! AVMetadataFaceObject
    let faceObject = videoPreviewLayer?.transformedMetadataObjectForMetadataObject(metadataObj)
    faceFrame?.frame = faceObject!.bounds
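
For context: faceFrame is not shown being created anywhere in this question; presumably it is a plain UIView with a red border kept on top of the preview layer. A hypothetical sketch:

    // Hypothetical red highlight view (its creation is not shown in the question).
    let faceFrame = UIView()
    faceFrame.layer.borderColor = UIColor.redColor().CGColor
    faceFrame.layer.borderWidth = 2
    faceFrame.backgroundColor = UIColor.clearColor()
    view.addSubview(faceFrame)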

Question: how can I get the faces as UIImages?

I tried playing around with didOutputSampleBuffer, but it is not called at all.
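
For reference, the method meant here is the AVCaptureVideoDataOutputSampleBufferDelegate callback; a minimal Swift 2-era stub with the standard imported signature:

    // AVCaptureVideoDataOutputSampleBufferDelegate callback (never reached
    // with the session configured as above).
    func captureOutput(captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       fromConnection connection: AVCaptureConnection!) {
    }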

4 answers

I did the same thing using didOutputSampleBuffer, in Objective-C. It looks like this:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
        CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer
                                                          options:(__bridge NSDictionary *)attachments];
        if (attachments)
            CFRelease(attachments);

        NSNumber *orientation = (__bridge NSNumber *)(CMGetAttachment(sampleBuffer, kCGImagePropertyOrientation, NULL));
        NSArray *features = [[CIDetector detectorOfType:CIDetectorTypeFace
                                                context:nil
                                                options:@{ CIDetectorAccuracy: CIDetectorAccuracyHigh }]
                             featuresInImage:ciImage
                                     options:@{ CIDetectorImageOrientation: orientation }];
        if (features.count == 1) {
            CIFaceFeature *faceFeature = [features firstObject];
            CGRect faceRect = faceFeature.bounds;

            // Render the full frame, wrap it in a UIImage, then crop out the face.
            CGImageRef tempImage = [[CIContext contextWithOptions:nil] createCGImage:ciImage fromRect:ciImage.extent];
            // Note: this passes the EXIF orientation value straight through;
            // strictly it should be mapped to a UIImageOrientation value.
            UIImage *image = [UIImage imageWithCGImage:tempImage scale:1.0 orientation:orientation.intValue];
            UIImage *face = [image extractFace:faceRect];
            CGImageRelease(tempImage); // imageWithCGImage retains it, so release here
        }
    }

where extractFace is a category method on UIImage:

    - (UIImage *)extractFace:(CGRect)rect
    {
        // Convert the rect from points to pixels before cropping the backing CGImage.
        rect = CGRectMake(rect.origin.x * self.scale,
                          rect.origin.y * self.scale,
                          rect.size.width * self.scale,
                          rect.size.height * self.scale);
        CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
        UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
        CGImageRelease(imageRef);
        return result;
    }

Creating the video output (without this, didOutputSampleBuffer is never called):

    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey: [NSNumber numberWithInt:kCMPixelFormat_32BGRA] };
    videoOutput.alwaysDiscardsLateVideoFrames = YES;
    self.videoOutputQueue = dispatch_queue_create("OutputQueue", DISPATCH_QUEUE_SERIAL);
    [videoOutput setSampleBufferDelegate:self queue:self.videoOutputQueue];
    [self.session addOutput:videoOutput];
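
For a Swift version of the same approach, something along these lines should work (a rough, untested Swift 2-era sketch matching the question's syntax; the EXIF-to-UIImageOrientation mapping is omitted for brevity):

    // Sketch: detect a face with CIDetector and crop it straight from the CIImage.
    func captureOutput(captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       fromConnection connection: AVCaptureConnection!) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(CVPixelBuffer: pixelBuffer)
        let detector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        if let face = detector.featuresInImage(ciImage).first as? CIFaceFeature {
            // Render only the face rect instead of the whole frame.
            let cgFace = CIContext(options: nil).createCGImage(ciImage, fromRect: face.bounds)
            let faceImage = UIImage(CGImage: cgFace)
            // Hand the result back to the UI on the main queue.
            dispatch_async(dispatch_get_main_queue()) {
                // e.g. self.faceImageView.image = faceImage (hypothetical outlet)
            }
        }
    }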

Another option is to take a screenshot of the area under the red face frame:

    - (UIImage *)screenshot
    {
        CGSize size = CGSizeMake(faceFrame.frame.size.width, faceFrame.frame.size.height);
        UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);
        CGRect rec = CGRectMake(faceFrame.frame.origin.x, faceFrame.frame.origin.y,
                                faceFrame.frame.size.width, faceFrame.frame.size.height);
        [_viewController.view drawViewHierarchyInRect:rec afterScreenUpdates:YES];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }

Then, taking a cue from the answer above, crop the face out of it:

    let contextImage: UIImage = <<screenshot>>!
    let cropRect: CGRect = CGRectMake(x, y, width, height)
    let imageRef: CGImageRef = CGImageCreateWithImageInRect(contextImage.CGImage, cropRect)!
    let image: UIImage = UIImage(CGImage: imageRef, scale: contextImage.scale, orientation: contextImage.imageOrientation)
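
One caveat: CGImageCreateWithImageInRect crops in pixels, while UIKit frames are in points, so on Retina screens the crop rect should be scaled first, just as extractFace does above. A sketch, assuming x, y, width and height are in points:

    // Scale the point-based crop rect into pixel space (hypothetical values).
    let scale = contextImage.scale
    let cropRect = CGRectMake(x * scale, y * scale, width * scale, height * scale)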

I suggest using the UIImagePickerController class to implement a custom camera and select images for face detection. Check out Apple's PhotoPicker sample code.

For the camera, launch a UIImagePickerController with .Camera as its sourceType, and handle its delegate method imagePickerController:didFinishPickingMediaWithInfo: to capture the image. You can also look at the takePicture method if it helps; see the sketch below.
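
A minimal Swift 2-era sketch of that flow, assuming self adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate:

    // Launch the camera.
    let picker = UIImagePickerController()
    picker.sourceType = .Camera
    picker.delegate = self
    presentViewController(picker, animated: true, completion: nil)

    // Delegate callback with the captured image:
    func imagePickerController(picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String : AnyObject]) {
        let image = info[UIImagePickerControllerOriginalImage] as? UIImage
        picker.dismissViewControllerAnimated(true, completion: nil)
        // Run face detection on `image` here.
    }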


Source: https://habr.com/ru/post/1247619/

