Feature detector and descriptor for low-resolution images

I work with low-resolution (VGA), JPEG-compressed image sequences for visual navigation on a mobile robot. I currently use SURF to detect keypoints and extract descriptors, and FLANN to match them. I get 4000-5000 features per image, and typically 350-450 matches per pair of consecutive images before applying RANSAC (which usually reduces the number of matches by about 20%).

I am trying to increase the number (and quality) of matches. I have tried two other detectors: SIFT and ORB. SIFT significantly increases the number of features (roughly 35% more tracked features, in general), but it is much slower. ORB extracts about as many features as SURF, but matching performance is much worse (~100 matches at best). My ORB setup in OpenCV:

cv::ORB orb = cv::ORB(10000, 1.2f, 8, 31);
orb(frame->img, cv::Mat(), im_keypoints, frame->descriptors);
frame->descriptors.convertTo(frame->descriptors, CV_32F); // so that it has the same type as m_dists

And then, when matching:

cv::Mat m_indices(descriptors1.rows, 2, CV_32S);
cv::Mat m_dists(descriptors1.rows, 2, CV_32F);
cv::flann::Index flann_index(descriptors2, cv::flann::KDTreeIndexParams(6));
flann_index.knnSearch(descriptors1, m_indices, m_dists, 2, cv::flann::SearchParams(64) );

Which feature detector and descriptor extractor work best for low-resolution, noisy images? Should I change any FLANN parameters depending on the feature detector used?

EDIT:

I am posting a few pictures of a fairly simple sequence to be tracked. The pictures are shown exactly as I feed them to the feature-detector methods. They have been pre-processed to remove noise (using cv::bilateralFilter()).

[Image 1] [Image 2]


Have you considered tracking the points with pyramidal Lucas-Kanade instead of matching descriptors? It tracks features directly from one frame to the next (typically with a 21x21 search window) and can be seeded with points from SIFT, SURF, FAST or GFTT detectors, so it may give you more correspondences between consecutive frames.




I have had good results with ORB. Here is a repository with image capture and processing examples, including ORB:

https://github.com/wher0001/Image-Capture-and-Processing

With the settings below, ORB gives me a reasonable number of matches. Standard ORB matching:

Keep nfeatures high (e.g. 5000). The default is 500, which is usually far too few; raising nfeatures gives you many more candidate matches.

import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread("c:/Users/rwheatley/Desktop/pS8zi.jpg")
img2 = cv2.imread("c:/Users/rwheatley/Desktop/vertrk.jpg")

grey1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
grey2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Initiate ORB detector
orb = cv2.ORB_create(nfeatures=5000)

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(grey1,None)
kp2, des2 = orb.detectAndCompute(grey2,None)

# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors.
matches = bf.match(des1,des2)

# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)

# Draw the first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10],None,flags=2)
print(len(matches))

plt.imshow(img3),plt.show()

You can also use knnMatch with a ratio test, which worked better in my tests (run on my Dell laptop). ORB with knnMatching:

import numpy as np
import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread("c:/Users/rwheatley/Desktop/pS8zi.jpg")
img2 = cv2.imread("c:/Users/rwheatley/Desktop/vertrk.jpg")

grey1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
grey2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Initiate ORB detector
orb = cv2.ORB_create(nfeatures=5000)

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(grey1,None)
kp2, des2 = orb.detectAndCompute(grey2,None)

# BFMatcher with default params
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)

# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])

# cv2.drawMatchesKnn expects list of lists as matches.
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,None,flags=2)
print(len(good))

plt.imshow(img3),plt.show()

OpenCV also offers several other matcher types, including a FLANN-based one; in my tests the results were similar.



Source: https://habr.com/ru/post/1546211/

