On face recognition - Swift

I have an application that accepts all of a user's images (every asset from the Photos app). The application should then scan all of those images, detect faces, and extract their facial landmarks, and then look in a database to see whether any friend has matching landmarks (recognizing the faces of acquaintances), similar to what Facebook does in its Moments app and on the web. The application will then display, for each friend, all the photos in which they appear. An important part of my application is user privacy, so I would like to keep the whole process on the device and not send anything to an online service. Another advantage of keeping it on the device is that each user of my application can have thousands of images, so working with an external service would be heavy and could hurt performance (if every image had to be sent to a server).

From my research, there are many online services, but they do not meet my requirements since they do not support running the process offline. There is also CIDetector, which detects faces and can return a few features such as the eye and mouth positions (which I do not think are good enough for reliable recognition). I have also heard about Luxand, OpenCV and OpenFace, which all do on-device recognition, but they are C++ libraries, which makes integrating them into a Swift project difficult (the documentation is not very good and does not explain how to integrate them into your project or how to recognize a face from Swift).

So my question is, is there a way to recognize faces that return facial landmarks on the device?

  • If not, is there another way or service that I could use?

iOS ships with Core Image, whose CIDetector can detect faces in an image and return features such as the eye positions. For example:

func detect() {

    guard let personciImage = CIImage(image: personPic.image!) else {
        return
    }

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: personciImage) ?? []

    // Core Image uses a bottom-left origin; flip to UIKit's top-left coordinate system
    let ciImageSize = personciImage.extent.size
    var transform = CGAffineTransform(scaleX: 1, y: -1)
    transform = transform.translatedBy(x: 0, y: -ciImageSize.height)

    for case let face as CIFaceFeature in faces {

        print("Found bounds are \(face.bounds)")

        // Calculate the frame for the face box in the image view's coordinates
        var faceViewBounds = face.bounds.applying(transform)

        let viewSize = personPic.bounds.size
        let scale = min(viewSize.width / ciImageSize.width,
                        viewSize.height / ciImageSize.height)
        let offsetX = (viewSize.width - ciImageSize.width * scale) / 2
        let offsetY = (viewSize.height - ciImageSize.height * scale) / 2

        faceViewBounds = faceViewBounds.applying(CGAffineTransform(scaleX: scale, y: scale))
        faceViewBounds.origin.x += offsetX
        faceViewBounds.origin.y += offsetY

        let faceBox = UIView(frame: faceViewBounds)

        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = .clear
        personPic.addSubview(faceBox)

        if face.hasLeftEyePosition {
            print("Left eye position is \(face.leftEyePosition)")
        }

        if face.hasRightEyePosition {
            print("Right eye position is \(face.rightEyePosition)")
        }
    }

}
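Beyond the eye positions used above, CIDetector can also report a mouth position and, when asked via detection options, smile and eye-blink states. This is a minimal sketch (the `photo` parameter is a placeholder for whatever UIImage you are processing) of those extra CIFaceFeature properties:

```swift
import UIKit
import CoreImage

// Sketch: list the coarse landmarks CIDetector can report for each face.
func detectLandmarks(in photo: UIImage) {
    guard let ciImage = CIImage(image: photo) else { return }

    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

    // Ask the detector to additionally evaluate smiles and eye blinks.
    let featureOptions: [String: Any] = [
        CIDetectorSmile: true,
        CIDetectorEyeBlink: true
    ]

    for case let face as CIFaceFeature in detector?.features(in: ciImage, options: featureOptions) ?? [] {
        if face.hasMouthPosition {
            print("Mouth position: \(face.mouthPosition)")
        }
        print("Smiling: \(face.hasSmile), " +
              "left eye closed: \(face.leftEyeClosed), " +
              "right eye closed: \(face.rightEyeClosed)")
    }
}
```

Note that these are only a handful of coarse landmarks; as the question anticipates, they are likely not distinctive enough on their own for reliable recognition, which is why libraries such as OpenFace compute much denser landmark sets.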


Source: https://habr.com/ru/post/1655538/
