I am trying to detect faces in my iOS camera app, but detection does not work properly, while the built-in Camera.app handles the same scenes fine. Note that:
- The first face is not found in my application, only in Camera.app.
- For a third person - an East Asian woman - Camera.app correctly draws a rectangle around her face, while my application draws a rectangle that extends well below her face.
- Obama's face is not found in my application, only in Camera.app.
- When the camera moves close to Putin's face, my application draws a rectangle over the right half of his face, cutting it in half, while Camera.app correctly draws a rectangle around his whole face.
Why is this happening?
My code is as follows. Do you see anything wrong?
First, I create the video output as follows:
    let videoOutput = AVCaptureVideoDataOutput()
    // Ask for BGRA frames so CIImage can wrap the pixel buffer directly.
    videoOutput.videoSettings =
        [kCVPixelBufferPixelFormatTypeKey as AnyHashable: Int(kCMPixelFormat_32BGRA)]
    session.addOutput(videoOutput)
    videoOutput.setSampleBufferDelegate(faceDetector, queue: faceDetectionQueue)
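For context, the rest of the session setup is standard; a minimal sketch (device selection and the queue label are simplified here, not my exact code):

    let session = AVCaptureSession()

    // Back camera input (error handling elided in this sketch).
    let camera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)!
    session.addInput(try! AVCaptureDeviceInput(device: camera))

    // Serial background queue on which the sample buffer delegate is called.
    let faceDetectionQueue = DispatchQueue(label: "face-detection")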
This is the delegate:
    class FaceDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        func captureOutput(_ captureOutput: AVCaptureOutput!,
                           didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                           from connection: AVCaptureConnection!) {
            let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
            let features = FaceDetector.ciDetector.features(
                in: CIImage(cvPixelBuffer: imageBuffer))
            let faces = features.map { $0.bounds }
            let imageSize = CVImageBufferGetDisplaySize(imageBuffer)
            // Convert each face rect from Core Image coordinates (pixels,
            // origin at the bottom-left) to normalized coordinates with a
            // top-left origin.
            let faceBounds = faces.map { (face: CGRect) -> CGRect in
                var ciBounds = face
                ciBounds = ciBounds.applying(
                    CGAffineTransform(scaleX: 1 / imageSize.width, y: -1 / imageSize.height))
                CGRect(x: 0, y: 0, width: 1, height: -1).verifyContains(ciBounds)
                let bounds = ciBounds.applying(CGAffineTransform(translationX: 0, y: 1.0))
                CGRect(x: 0, y: 0, width: 1, height: 1).verifyContains(bounds)
                return bounds
            }
            DispatchQueue.main.sync {
                facesUpdated(faceBounds, imageSize)
            }
        }

        private static let ciDetector = CIDetector(
            ofType: CIDetectorTypeFace,
            context: nil,
            options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])!
    }
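I have omitted the declaration of facesUpdated from FaceDetector for brevity; it is just a closure property that the owning view controller points at the preview view, so the results end up in PreviewView.facesUpdated on the main queue. A simplified sketch:

    // Inside FaceDetector - set by the owning view controller to forward
    // results into the preview view (simplified, not my exact code).
    var facesUpdated: ([CGRect], CGSize) -> Void = { _, _ in }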
The facesUpdated() callback looks like this:
    class PreviewView: UIView {
        private var faceRects = [UIView]()

        private func makeFaceRect() -> UIView {
            let r = UIView()
            // FocusRect just holds my border styling constants (defined elsewhere).
            r.layer.borderWidth = FocusRect.borderWidth
            r.layer.borderColor = FocusRect.color.cgColor
            faceRects.append(r)
            addSubview(r)
            return r
        }

        private func removeAllFaceRects() {
            for faceRect in faceRects {
                verify(faceRect.superview == self)
                faceRect.removeFromSuperview()
            }
            faceRects.removeAll()
        }

        private func facesUpdated(_ faces: [CGRect], _ imageSize: CGSize) {
            removeAllFaceRects()
            // Scale the normalized face rects up to view coordinates.
            let faceFrames = faces.map { (original: CGRect) -> CGRect in
                let face = original.applying(
                    CGAffineTransform(scaleX: bounds.width, y: bounds.height))
                verify(self.bounds.contains(face))
                return face
            }
            for faceFrame in faceFrames {
                let faceRect = makeFaceRect()
                faceRect.frame = faceFrame
            }
        }
    }
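verify() and verifyContains() are small assertion helpers of mine, roughly equivalent to:

    // Simplified versions of my assertion helpers.
    func verify(_ condition: Bool) {
        assert(condition, "verification failed")
    }

    extension CGRect {
        // Standardize first so rects with a negative height compare sanely.
        func verifyContains(_ rect: CGRect) {
            verify(standardized.contains(rect.standardized))
        }
    }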
I also tried the following, but none of it helped:
- Setting the AVCaptureVideoDataOutput videoSettings to nil.
- Explicitly setting the CIDetector orientation to portrait (see the sketch after this list). The phone was held in portrait throughout these tests, so it should not matter.
- Setting and unsetting CIDetectorTracking: true.
- Setting and unsetting CIDetectorAccuracy: CIDetectorAccuracyHigh.
- Tracking only a single face, by looking at only the first detected feature.
- Replacing CVImageBufferGetDisplaySize() with CVImageBufferGetEncodedSize() - they are the same anyway, at 1440 x 1080.
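For reference, the orientation attempt (second item above) looked roughly like this; 6 is the EXIF orientation value for a 90°-rotated image, which is what the back camera delivers when the phone is held in portrait:

    // Sketch of the CIDetectorImageOrientation attempt (simplified).
    let features = FaceDetector.ciDetector.features(
        in: CIImage(cvPixelBuffer: imageBuffer),
        options: [CIDetectorImageOrientation: 6])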