I took a sample from the OpenCV sources and tried to use it on iOS. I did the following:
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // get cv::Mat from CMSampleBufferRef
        UIImage *img = [self imageFromSampleBuffer:sampleBuffer];
        cv::Mat cvImg = [img CVGrayscaleMat];

        cv::HOGDescriptor hog;
        hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

        cv::vector<cv::Rect> found;
        hog.detectMultiScale(cvImg, found, 0.2, cv::Size(8,8), cv::Size(16,16), 1.05, 2);

        for (int i = 0; i < (int)found.size(); i++)
        {
            cv::Rect r = found[i];
            dispatch_async(dispatch_get_main_queue(), ^{
                self.label.text = [NSString stringWithFormat:@"Found at %d, %d, %d, %d",
                                   r.x, r.y, r.width, r.height];
            });
            NSLog(@"Found at %d, %d, %d, %d", r.x, r.y, r.width, r.height);
        }
    }
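For reference, the detectMultiScale overload I am calling is the one declared in OpenCV 2.x's objdetect.hpp (reproduced below from the header, lightly reformatted), so the positional arguments in my call map onto its parameters as the comments indicate:

    virtual void detectMultiScale(const cv::Mat& img,
                                  std::vector<cv::Rect>& foundLocations,
                                  double hitThreshold = 0,           // 0.2 in my call
                                  cv::Size winStride = cv::Size(),   // (8,8) in my call
                                  cv::Size padding = cv::Size(),     // (16,16) in my call
                                  double scale = 1.05,               // 1.05 in my call
                                  double finalThreshold = 2.0,       // 2 in my call
                                  bool useMeanshiftGrouping = false) const;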
where CVGrayscaleMat was
    -(cv::Mat)CVGrayscaleMat
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGFloat cols = self.size.width;
        CGFloat rows = self.size.height;
        cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
        CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                        cols,           // Width of bitmap
                                                        rows,           // Height of bitmap
                                                        8,              // Bits per component
                                                        cvMat.step[0],  // Bytes per row
                                                        colorSpace,     // Colorspace
                                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault); // Bitmap info flags
        CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
        CGContextRelease(contextRef);
        CGColorSpaceRelease(colorSpace);
        return cvMat;
    }
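To rule out the conversion itself, a round-trip helper along the following lines could sit next to CVGrayscaleMat so the Mat can be displayed or saved and checked by eye. This is only a debugging sketch, not part of the code above, and the function name is made up:

    // Debugging sketch (hypothetical helper, not in the project): wrap the
    // grayscale cv::Mat back into a UIImage so it can be shown in a UIImageView
    // or written to disk, to confirm the conversion produced a sensible,
    // correctly oriented frame. Assumes ARC for the __bridge cast.
    static UIImage *DebugImageFromGrayscaleMat(const cv::Mat &mat)
    {
        NSData *data = [NSData dataWithBytes:mat.data length:mat.step[0] * mat.rows];
        CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGImageRef imageRef = CGImageCreate(mat.cols, mat.rows,
                                            8,               // bits per component
                                            8,               // bits per pixel (1 channel)
                                            mat.step[0],     // bytes per row
                                            colorSpace,
                                            kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                            provider, NULL, false, kCGRenderingIntentDefault);
        UIImage *image = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGColorSpaceRelease(colorSpace);
        CGDataProviderRelease(provider);
        return image;
    }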
and imageFromSampleBuffer is the sample from Apple's documentation. The problem is that the application never detects anyone: I have tried different sizes and poses, and nothing works for me. What am I missing?
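For what it's worth, the default people detector is trained on a 64x128 detection window, so detectMultiScale returns an empty vector whenever the frame handed to it is smaller than that. A guard like the one below (just a sketch, not something already in my code) would at least make that failure mode show up in the log:

    // Sketch of a pre-flight check before hog.detectMultiScale (not in the
    // code above): the default people detector uses a 64x128 window, so an
    // empty or smaller Mat can never produce a detection.
    if (cvImg.empty() || cvImg.cols < 64 || cvImg.rows < 128) {
        NSLog(@"HOG input empty or too small: %d x %d (need at least 64 x 128)",
              cvImg.cols, cvImg.rows);
        return;
    }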