SIFT matching gives very poor results

I am working on a project where I will use homographies as features in a classifier. My problem is computing the homographies automatically: I use SIFT descriptors to find point correspondences between the two images from which a homography can be estimated, but SIFT gives me very poor matches, so I cannot use them in my work.

I am using OpenCV 2.4.3.

At first I used SURF, but I got similar results, so I decided to switch to SIFT, which is slower but more accurate. My first assumption was that the image resolution in my dataset was too low, but I ran my algorithm on a modern dataset (Pointing 04) and got almost the same results, so the problem is in what I am doing, not in my dataset.

The matching between the SIFT keypoints found in each image is done with the FlannBased matcher; I also tried BruteForce, but the results were again almost the same.

This is an example of the matches I get (image from the Pointing 04 dataset): [image: matches produced by my algorithm]

The image above shows how poor the matches produced by my program are. Only 1 match is correct. I need at least 4 correct matches for what I have to do.

Here is the code I'm using:

This is the function that extracts SIFT descriptors from each image:

void extract_sift(const Mat &img, vector<KeyPoint> &keypoints, Mat &descriptors, Rect* face_rec) {
    // Create a mask for the ROI on the original image
    Mat mask1 = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
    Mat roi1(mask1, *face_rec);
    roi1 = Scalar(255, 255, 255);

    // Extract keypoints in the ROI only
    Ptr<DescriptorExtractor> featExtractor = new SIFT();
    Ptr<FeatureDetector> featDetector = FeatureDetector::create("SIFT");
    featDetector->detect(img, keypoints, mask1);
    featExtractor->compute(img, keypoints, descriptors);
}

This is the function that matches the descriptors of two images:

void match_sift(const Mat &img1, const Mat &img2,
                const vector<KeyPoint> &kp1, const vector<KeyPoint> &kp2,
                const Mat &descriptors1, const Mat &descriptors2,
                vector<Point2f> &p_im1, vector<Point2f> &p_im2) {
    // Match descriptor vectors using the FLANN matcher
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
    std::vector<DMatch> matches;
    matcher->match(descriptors1, descriptors2, matches);

    double max_dist = 0;
    double min_dist = 100;
    // Quick calculation of max and min distances between keypoints
    for (int i = 0; i < descriptors1.rows; ++i) {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    // Keep only the 4 best matches
    std::vector<DMatch> good_matches;
    // XXX: DMatch has no sort method, maybe a more efficient
    // min-extraction algorithm can be used here?
    double min = matches[0].distance;
    int min_i = 0;
    for (int i = 0; i < (matches.size() > 4 ? 4 : matches.size()); ++i) {
        for (int j = 0; j < matches.size(); ++j)
            if (matches[j].distance < min) {
                min = matches[j].distance;
                min_i = j;
            }
        good_matches.push_back(matches[min_i]);
        matches.erase(matches.begin() + min_i);
        min = matches[0].distance;
        min_i = 0;
    }

    Mat img_matches;
    drawMatches(img1, kp1, img2, kp2, good_matches, img_matches,
                Scalar::all(-1), Scalar::all(-1), vector<char>(),
                DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    imwrite("imgMatch.jpeg", img_matches);
    imshow("", img_matches);
    waitKey();

    // Get the points from the best matches
    for (int i = 0; i < good_matches.size(); i++) {
        p_im1.push_back(kp1[good_matches[i].queryIdx].pt);
        p_im2.push_back(kp2[good_matches[i].trainIdx].pt);
    }
}
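As an aside on the `XXX` comment in this function: `cv::DMatch` does define `operator<` comparing by `distance`, so the repeated minimum extraction can be replaced by a single sort. A minimal self-contained sketch (using a stand-in `Match` struct with the same comparison semantics as `cv::DMatch`, so it compiles without OpenCV; the helper name `best_n_matches` is mine):

```cpp
#include <algorithm>
#include <vector>

// Stand-in for cv::DMatch: the real struct also defines operator<
// comparing by distance, which is what makes std::sort work on it.
struct Match {
    int queryIdx, trainIdx;
    float distance;
    bool operator<(const Match &m) const { return distance < m.distance; }
};

// Keep the n smallest-distance matches in O(m log n) instead of
// repeated linear minimum extraction with erase().
std::vector<Match> best_n_matches(std::vector<Match> matches, size_t n)
{
    if (n > matches.size()) n = matches.size();
    // Places the n smallest elements, sorted, at the front.
    std::partial_sort(matches.begin(), matches.begin() + n, matches.end());
    matches.resize(n);
    return matches;
}
```

With `cv::DMatch`, `std::sort(matches.begin(), matches.end())` works directly for the same reason.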

And these functions are called here:

 extract_sift(dataset[i].img,dataset[i].keypoints,dataset[i].descriptors,face_rec); 

[...]

    // Extract keypoints from image i+1 and calculate the homography
    extract_sift(dataset[i+1].img, dataset[i+1].keypoints, dataset[i+1].descriptors, face_rec);
    dataset[front].points_r.clear(); // XXX: dunno if clearing the points every time is the best way to do it..
    match_sift(dataset[front].img, dataset[i+1].img,
               dataset[front].keypoints, dataset[i+1].keypoints,
               dataset[front].descriptors, dataset[i+1].descriptors,
               dataset[front].points_r, dataset[i+1].points_r);
    dataset[i+1].H = findHomography(dataset[front].points_r, dataset[i+1].points_r, RANSAC);

Any help on how to improve the matching performance would be really appreciated, thanks.

1 answer

You are apparently keeping only the four best matches ranked by descriptor distance. In other words, you consider a match valid only if the two descriptors are very similar. I think this is wrong. Did you try plotting all the matches? Many of them will be wrong, but many should be good.

The match distance only tells you how similar two descriptors are. It does not mean the match is geometrically consistent. Selecting the best matches must take the geometry into account.

Here is how I would do it:

  • Keypoint detection (you already do this)
  • Find matches (you already do this)
  • Estimate a homography between the two images from ALL the matches (don't filter them beforehand!) using findHomography(...)
  • findHomography(...) will tell you which matches are the inliers. Those are your good_matches.


