My goal is to take an image as a query and find its best match in an image library. I use the SURF functions in OpenCV 3.0.0 and a Bag of Words (BoW) approach to find the match. I need a way to determine whether the query image has a match in the library and, if so, the index of the closest-matching image.
Here is my code for reading in all images (300 in total in the image library), extracting SURF descriptors, and clustering them into a vocabulary:
    Mat training_descriptors(1, extractor->descriptorSize(), extractor->descriptorType());

    // read in all images and set to binary
    char filepath[1000];
    for (int i = 1; i < trainingSetSize; i++) {
        cout << "in for loop, iteration: " << i << endl;
        _snprintf_s(filepath, 100, "C:/Users/Randal/Desktop/TestCase1Training/%d.bmp", i);
        Mat temp = imread(filepath, CV_LOAD_IMAGE_GRAYSCALE);
        Mat tempBW;
        adaptiveThreshold(temp, tempBW, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 11, 2);
        detector->detect(tempBW, keypoints1);
        extractor->compute(tempBW, keypoints1, descriptors1);
        training_descriptors.push_back(descriptors1);
        cout << "descriptors added" << endl;
    }
    cout << "Total descriptors: " << training_descriptors.rows << endl;

    trainer.add(training_descriptors);
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
    BOWImgDescriptorExtractor BOW(extractor, matcher);
    Mat library = trainer.cluster();
    BOW.setVocabulary(library);
I wrote the following code to find a match. The problem is that BOW.compute seems to return only the indexes of the clusters (visual words) that occur in both the query image and the library. imgQ is the query image.
    Mat output;
    Mat imgQBW;
    adaptiveThreshold(imgQ, imgQBW, 255, ADAPTIVE_THRESH_GAUSSIAN_C, THRESH_BINARY, 11, 2);
    imshow("query image", imgQBW);
    detector->detect(imgQBW, keypoints2);
    extractor->compute(imgQBW, keypoints2, descriptors2);
    BOW.compute(imgQBW, keypoints2, output); // was keypoints1 -- the query's keypoints must be used here
    cout << output.row(0) << endl;
I need to know which clusters in the BoW vocabulary correspond to which library images. Right now my output, output.row(0), is just an array of the cluster indexes found in the library. Am I misunderstanding what this output represents? Is there a way to determine which library image best matches the query's clusters?
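One common answer to this kind of question is to compute a BoW histogram for each library image during training (one `BOW.compute` call per image, after `setVocabulary`), store all 300 histograms, and at query time pick the stored histogram with the smallest distance to the query's histogram, rejecting the match if that distance exceeds a threshold. Here is a stdlib-only sketch of that comparison step using chi-square distance, a common choice for BoW histograms; the function names and the threshold parameter are illustrative, not OpenCV API:

```cpp
#include <cstddef>
#include <vector>

// Chi-square distance between two BoW histograms; smaller means more similar.
float chiSquare(const std::vector<float>& a, const std::vector<float>& b) {
    float d = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        float sum = a[i] + b[i];
        if (sum > 0.0f) {
            float diff = a[i] - b[i];
            d += diff * diff / sum;
        }
    }
    return d;
}

// Index of the library histogram closest to the query, or -1 when even the
// best distance exceeds maxDist (i.e. the query has no acceptable match).
int bestMatch(const std::vector<float>& query,
              const std::vector<std::vector<float>>& library,
              float maxDist) {
    int best = -1;
    float bestDist = maxDist;
    for (std::size_t i = 0; i < library.size(); ++i) {
        float d = chiSquare(query, library[i]);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;
}
```

The threshold gives the "no match in the library" case asked about above; a suitable value has to be tuned on the data, since chi-square distances depend on vocabulary size and histogram normalization.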