I have successfully used the Haar-based detector in OpenCV 2.1.0 (cvHaarDetectObjects) to detect faces in images and video frames in an Objective-C project targeting iOS 4.2. However, processing a single video frame still takes about 1-2 seconds on an iPhone 4 under most conditions. Here is an example of the code I'm using:
// Load the Haar cascade and allocate detection storage (done once, reused across frames)
NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt" ofType:@"xml"];
CvHaarClassifierCascade *cascade = (CvHaarClassifierCascade *)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL);
CvMemStorage *storage = cvCreateMemStorage(0);

// Detect only the single biggest face: scale factor 1.2, min_neighbors 0, minimum face size 30x30
CvSeq *faces = cvHaarDetectObjects(small_image, cascade, storage, 1.2, 0,
                                   CV_HAAR_DO_ROUGH_SEARCH | CV_HAAR_FIND_BIGGEST_OBJECT,
                                   cvSize(30, 30));
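For context, small_image in the snippet above is a grayscale, downscaled copy of the captured camera frame; the variable names and the 2x scale factor below are just what I happen to use, roughly:

// Assumption: 'frame' is the BGRA IplImage captured from the iOS camera.
IplImage *gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
cvCvtColor(frame, gray, CV_BGRA2GRAY);

// Downscale by 2x before detection; the detector's cost grows with image area.
IplImage *small_image = cvCreateImage(cvSize(frame->width / 2, frame->height / 2), IPL_DEPTH_8U, 1);
cvResize(gray, small_image, CV_INTER_LINEAR);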
I have tried several optimizations, including restricting detection to a region of interest (ROI) and using integer instead of floating-point arithmetic. These changes took a great deal of effort to implement but gave only negligible gains (see the ROI sketch below).
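The ROI approach, in simplified form, restricts the search to an area around the face found in the previous frame; the padding factor here is only an example:

// Assumption: 'lastFace' is the CvRect detected in the previous frame.
// Expand it slightly, clamp to the image bounds, and search only inside it.
int x = MAX(lastFace.x - lastFace.width / 4, 0);
int y = MAX(lastFace.y - lastFace.height / 4, 0);
int w = MIN(lastFace.width * 3 / 2, small_image->width - x);
int h = MIN(lastFace.height * 3 / 2, small_image->height - y);

cvSetImageROI(small_image, cvRect(x, y, w, h));
CvSeq *faces = cvHaarDetectObjects(small_image, cascade, storage, 1.2, 0,
                                   CV_HAAR_DO_ROUGH_SEARCH | CV_HAAR_FIND_BIGGEST_OBJECT,
                                   cvSize(30, 30));
cvResetImageROI(small_image);
// Note: the detected rectangles are relative to the ROI origin (x, y).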
It was suggested to me that switching to an LBP cascade could significantly reduce detection time. I have experimented and searched for ways to use LBP, but to no avail. OpenCV ships a cascade file for it (lbpcascade_frontalface.xml), but I cannot find any examples of how to load and use it.
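From what I can tell, the LBP cascade is stored in the newer XML format, so it apparently cannot be loaded with cvLoad and seems to require the C++ cv::CascadeClassifier interface (and, I believe, an OpenCV build newer than 2.1.0). This is roughly what I expect the call to look like in an Objective-C++ (.mm) file, though I have not gotten it working:

#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Assumption: lbpcascade_frontalface.xml has been added to the app bundle.
NSString *lbpPath = [[NSBundle mainBundle] pathForResource:@"lbpcascade_frontalface" ofType:@"xml"];
cv::CascadeClassifier lbpCascade;
if (!lbpCascade.load([lbpPath UTF8String])) {
    NSLog(@"Failed to load LBP cascade");
}

// Wrap the same downscaled grayscale IplImage used above (no copy).
cv::Mat smallMat(small_image);
std::vector<cv::Rect> faces;
lbpCascade.detectMultiScale(smallMat, faces, 1.2, 2, 0, cv::Size(30, 30));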
Any help would be appreciated, including other optimization techniques and Google links I may have missed while searching. Detection accuracy is not critical, as long as the speed improves enough.
Thanks!