I am trying to combine CoreML and ARKit in my project, using the InceptionV3 model provided on Apple's website.
I start from the standard ARKit template (Xcode 9 beta 3).
Instead of starting a new camera session, I reuse the session already started by the ARSCNView.
At the end of my viewDidLoad, I write:
sceneView.session.delegate = self
Then I extend my view controller to conform to the ARSessionDelegate protocol (all of its methods are optional):
extension ViewController: ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        do {
            let prediction = try self.model.prediction(image: frame.capturedImage)
            DispatchQueue.main.async {
                if let prob = prediction.classLabelProbs[prediction.classLabel] {
                    self.textLabel.text = "\(prediction.classLabel) \(String(describing: prob))"
                }
            }
        } catch let error as NSError {
            print("Unexpected error occurred: \(error.localizedDescription).")
        }
    }
}
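One thing to keep in mind: session(_:didUpdate:) is delivered for every camera frame (up to 60 fps), and running a Core ML prediction synchronously in the delegate can stall frame delivery. A sketch of a simple throttle — `predictIfIdle` and the `isPredicting` Bool property are my own additions, not part of the code above:

```swift
import ARKit

// Sketch: skip frames while a prediction is in flight so a slow Core ML call
// does not back up ARKit's frame delivery. Assumes `model`, `textLabel`, and
// a new `isPredicting` Bool property exist on the view controller.
extension ViewController {
    func predictIfIdle(on frame: ARFrame) {
        guard !isPredicting else { return }   // drop this frame
        isPredicting = true
        DispatchQueue.global(qos: .userInitiated).async {
            defer { self.isPredicting = false }
            if let prediction = try? self.model.prediction(image: frame.capturedImage) {
                DispatchQueue.main.async {
                    self.textLabel.text = prediction.classLabel
                }
            }
        }
    }
}
```

Calling this from session(_:didUpdate:) instead of predicting inline keeps the session delegate cheap.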
At first I tried that code, but then I noticed that the model requires a pixel buffer of type Image <RGB, 299×299>.
Although it is not recommended, I decided to simply resize my frame and then get a prediction from it. I resize using the following function (taken from https://github.com/yulingtianxia/Core-ML-Sample):
func resize(pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let imageSide = 299
    var ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: nil)
    let transform = CGAffineTransform(
        scaleX: CGFloat(imageSide) / CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
        y: CGFloat(imageSide) / CGFloat(CVPixelBufferGetHeight(pixelBuffer)))
    ciImage = ciImage.transformed(by: transform)
        .cropped(to: CGRect(x: 0, y: 0, width: imageSide, height: imageSide))
    let ciContext = CIContext()
    var resizeBuffer: CVPixelBuffer?
    // Note: this reuses the pixel format of the incoming buffer.
    CVPixelBufferCreate(kCFAllocatorDefault, imageSide, imageSide,
                        CVPixelBufferGetPixelFormatType(pixelBuffer), nil, &resizeBuffer)
    guard let buffer = resizeBuffer else { return nil }
    ciContext.render(ciImage, to: buffer)
    return buffer
}
Unfortunately, this is not enough to make it work. This is the error that gets caught:
Unexpected error occurred: Input image feature image does not match model description.
2017-07-20 AR+MLPhotoDuplicatePrediction[928:298214] [core]
Error Domain=com.apple.CoreML Code=1
"Input image feature image does not match model description"
UserInfo={NSLocalizedDescription=Input image feature image does not match model description,
NSUnderlyingError=0x1c4a49fc0 {Error Domain=com.apple.CoreML Code=1
"Image is not expected type 32-BGRA or 32-ARGB, instead is Unsupported (875704422)"
UserInfo={NSLocalizedDescription=Image is not expected type 32-BGRA or 32-ARGB, instead is Unsupported (875704422)}}}
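The "Unsupported (875704422)" part of the log explains the mismatch: Core Video pixel format types are FourCC codes, and 875704422 decodes to '420f', i.e. kCVPixelFormatType_420YpCbCr8BiPlanarFullRange. ARKit's capturedImage is a bi-planar YCbCr buffer, while the generated model interface expects 32-BGRA or 32-ARGB; since the resize function above keeps the source pixel format, resizing alone cannot fix this error. A minimal decoding sketch in plain Swift:

```swift
// Decode a Core Video FourCC pixel-format code into its four characters.
let code: UInt32 = 875704422
let fourCC = (0..<4).reversed()
    .map { String(UnicodeScalar(UInt8((code >> ($0 * 8)) & 0xFF))) }
    .joined()
print(fourCC)  // "420f" -> kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
```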
Not sure what I can do from here. If there is a better suggestion for combining the two, I'm all ears.
Edit1. I also tried the resizePixelBuffer method from YOLO-CoreML-MPSNNGraph, suggested by @dfd, and the error is exactly the same.
Edit2. I changed the pixel format to kCVPixelFormatType_32BGRA (note that this is not the same format as the pixelBuffer passed into resizePixelBuffer):
let pixelFormat = kCVPixelFormatType_32BGRA // instead of CVPixelBufferGetPixelFormatType(pixelBuffer)
The error is gone, but the predictions are still not right. The same code does work with an AVCaptureSession, which I tried following a recommendation from Enric_SA on the Apple developer forums.
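For reference, here is a sketch of resizing and converting in one pass by rendering into a 32-BGRA destination buffer instead of reusing the source format. The name resizeToBGRA is my own, not from the original code; Core Image performs the YCbCr-to-BGRA conversion while rendering:

```swift
import CoreImage
import CoreVideo

// Sketch: scale the frame to 299x299 and render it into a 32-BGRA buffer,
// so the output matches the pixel format the model expects.
func resizeToBGRA(_ pixelBuffer: CVPixelBuffer, side: Int = 299) -> CVPixelBuffer? {
    let sx = CGFloat(side) / CGFloat(CVPixelBufferGetWidth(pixelBuffer))
    let sy = CGFloat(side) / CGFloat(CVPixelBufferGetHeight(pixelBuffer))
    let image = CIImage(cvPixelBuffer: pixelBuffer)
        .transformed(by: CGAffineTransform(scaleX: sx, y: sy))
        .cropped(to: CGRect(x: 0, y: 0, width: side, height: side))
    var output: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, side, side,
                        kCVPixelFormatType_32BGRA, nil, &output)
    guard let buffer = output else { return nil }
    CIContext().render(image, to: buffer)    // renders into the BGRA buffer
    return buffer
}
```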
Edit3. I implemented rickster's solution. It works well with inceptionV3 (VNClassificationObservation). However, it is not working with TinyYolo yet; the bounding boxes are wrong. Trying to figure it out.
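For completeness, a sketch of the Vision-based approach along the lines of rickster's answer: Vision takes care of scaling and pixel-format conversion, so the raw capturedImage can be passed straight to a VNImageRequestHandler. The ClassificationService name is my own; Inceptionv3 is the class Xcode generates from the model file:

```swift
import ARKit
import Vision

// Sketch: classify ARKit camera frames through Vision, which converts the
// YCbCr buffer and scales it to the model's 299x299 input automatically.
final class ClassificationService {
    private let visionModel: VNCoreMLModel

    init() throws {
        visionModel = try VNCoreMLModel(for: Inceptionv3().model)
    }

    func classify(frame: ARFrame, completion: @escaping (String) -> Void) {
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            guard let best = (request.results as? [VNClassificationObservation])?.first
                else { return }
            completion("\(best.identifier) \(best.confidence)")
        }
        // capturedImage is the raw YCbCr buffer; Vision handles conversion.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
        try? handler.perform([request])
    }
}
```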