I want to create a classifier that identifies an insect from a captured image. My first attempt used HuMoments, but images shot at different resolutions gave incorrect results, since the Hu moments did not cope well with the difference in scale. After some searching on the Internet I found that SIFT or SURF could solve my problem, so I tried to understand what happens when I use SIFT. The first two images below show two different types of insect, yet the results were strange: all 400 features were matched between them (see the 3rd image).



#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>

#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

// firstInsect, secondInsect, firstInsectXML, secondInsectXML,
// firstInsectPicture, secondInsectPicture and resultPicture are
// file-path strings defined elsewhere in my program.

int main()
{
    Mat src = imread(firstInsect);
    Mat src2 = imread(secondInsect);
    if (src.empty() || src2.empty())
    {
        printf("Can not read one of the images\n");
        return -1;
    }

    // Detect keypoints in the first image
    SiftFeatureDetector detector(400);
    vector<KeyPoint> keypoints;
    detector.detect(src, keypoints);
    //cout << keypoints.size() << " keypoints were found" << endl;

    // Note: write() stores the detector's parameters here, not the detected keypoints
    cv::FileStorage fs(firstInsectXML, FileStorage::WRITE);
    detector.write(fs);
    fs.release();

    // Detect keypoints in the second image
    SiftFeatureDetector detector2(400);
    vector<KeyPoint> keypoints2;
    detector2.detect(src2, keypoints2);

    cv::FileStorage fs2(secondInsectXML, FileStorage::WRITE);
    detector2.write(fs2);
    fs2.release();

    // Compute the SIFT feature descriptors for the keypoints.
    // The result is a matrix where row "i" is the descriptor for keypoint "i".
    SiftDescriptorExtractor extractor;
    Mat descriptors;
    extractor.compute(src, keypoints, descriptors);

    SiftDescriptorExtractor extractor2;
    Mat descriptors2;
    extractor2.compute(src2, keypoints2, descriptors2);

    // Print some statistics on the returned matrices
    //Size size = descriptors.size();
    //cout << "Query descriptors height: " << size.height << " width: " << size.width
    //     << " area: " << size.area() << " non-zero: " << countNonZero(descriptors) << endl;
    //saveKeypoints(keypoints, detector);

    // Draw and save the detected keypoints
    Mat output;
    drawKeypoints(src, keypoints, output, Scalar(0, 0, 255), DrawMatchesFlags::DEFAULT);
    imwrite(firstInsectPicture, output);

    Mat output2;
    drawKeypoints(src2, keypoints2, output2, Scalar(0, 0, 255), DrawMatchesFlags::DEFAULT);
    imwrite(secondInsectPicture, output2);

    // Find corresponding points between the two descriptor sets
    BFMatcher matcher(NORM_L2);
    vector<DMatch> matches;
    matcher.match(descriptors, descriptors2, matches);
    cout << "Number of matches: " << matches.size() << endl;

    Mat img_matches;
    drawMatches(src, keypoints, src2, keypoints2, matches, img_matches);
    imwrite(resultPicture, img_matches);

    system("PAUSE");
    waitKey(10000);
    return 0;
}
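From what I understand of the documentation, BFMatcher::match() always returns the single nearest neighbor for every query descriptor, so the number of matches will equal the number of query descriptors no matter how different the images are. As an experiment I also tried filtering the matches with Lowe's ratio test using knnMatch(); this is only a sketch of what I tried (the 0.75 threshold is an arbitrary value I picked), and I am not sure it is the right approach:

    // Sketch: keep only matches that pass Lowe's ratio test.
    // Assumes "descriptors" and "descriptors2" were computed as above.
    BFMatcher matcher(NORM_L2);
    vector<vector<DMatch> > knnMatches;
    matcher.knnMatch(descriptors, descriptors2, knnMatches, 2);  // two nearest neighbors per query descriptor

    vector<DMatch> goodMatches;
    for (size_t i = 0; i < knnMatches.size(); i++)
    {
        if (knnMatches[i].size() == 2 &&
            knnMatches[i][0].distance < 0.75f * knnMatches[i][1].distance)
        {
            goodMatches.push_back(knnMatches[i][0]);
        }
    }
    cout << "Number of good matches: " << goodMatches.size() << endl;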
Question 1: Why do all of the features match between these two images of different insects?
Question 2: How can I store the image features (e.g. in an .xml file) so that they can later be used to train a classifier such as a random tree?
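For context on question 2, the only way I have found so far to save the raw features is to write the descriptor matrix (and optionally the keypoints) directly with cv::FileStorage, as in the sketch below; "features.xml" and the node names are just placeholder names I made up, and I do not know whether this is a sensible format for later training a random tree:

    // Sketch: save the keypoints and descriptor matrix to XML and load the descriptors back.
    // "features.xml", "keypoints" and "descriptors" are placeholder names.
    cv::FileStorage fsOut("features.xml", cv::FileStorage::WRITE);
    cv::write(fsOut, "keypoints", keypoints);  // serialize vector<KeyPoint>
    fsOut << "descriptors" << descriptors;     // cv::Mat is serialized directly
    fsOut.release();

    cv::Mat loaded;
    cv::FileStorage fsIn("features.xml", cv::FileStorage::READ);
    fsIn["descriptors"] >> loaded;
    fsIn.release();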