AVCaptureVideoPreviewLayer and OpenCV

I'm having trouble displaying a processed frame on the previewLayer using the iOS SDK. I could use some advice on how to "replace" a captured frame with a processed one.

My application is a fairly standard, customized AVCapture setup combined with the OpenCV framework.

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // I think this is where I have to do the processing...
        // for example, a simple RGB -> GRAY conversion:
        UIImage *img = [[UIImage alloc] initWithCMSampleBuffer:sampleBuffer];
        cv::Mat m_img = [img CVMat];
        // m_img can be used as both src and dst here
        cv::cvtColor(m_img, m_img, CV_BGR2GRAY);
        img = [[UIImage alloc] initWithCvMat:m_img];
        [previewLayer setContents:(id)[img CGImage]];
    }
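As an aside on the conversion step above: cv::cvtColor with a *2GRAY flag computes the BT.601 luma weighting described in OpenCV's color-conversion documentation. A minimal NumPy sketch of that computation (the function name and test pixel here are my own illustration, not part of the original code):

```python
import numpy as np

def bgr_to_gray(frame):
    """Approximate what cv::cvtColor(src, dst, CV_BGR2GRAY) computes:
    gray = 0.299*R + 0.587*G + 0.114*B (BT.601 luma weights).
    `frame` is an (H, W, 3) uint8 array in B, G, R channel order."""
    b = frame[..., 0].astype(np.float64)
    g = frame[..., 1].astype(np.float64)
    r = frame[..., 2].astype(np.float64)
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)

# A pure-blue pixel maps to dark gray (0.114 * 255 is about 29):
frame = np.zeros((1, 1, 3), dtype=np.uint8)
frame[0, 0] = (255, 0, 0)  # B, G, R
print(bgr_to_gray(frame)[0, 0])  # 29
```

Note also that AVCapture commonly delivers frames as 32-bit BGRA (kCVPixelFormatType_32BGRA), in which case CV_BGRA2GRAY would be the appropriate flag rather than CV_BGR2GRAY.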

Obviously, this is not quite right. For example, the new layer contents are not resized correctly, while the captured frame is, because I set

 [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect]; 

But the "biggest" problem is that my processed image is drawn behind the live captured frame, and while the live video is quite smooth, my processed video (hidden behind it) is not, even if I do no processing at all and simply assign the same image back.

Can any of you help me understand how to display the processed image directly on the preview layer (in this case, one processed with OpenCV)?

Thank you very much...


Source: https://habr.com/ru/post/1489774/

