How to visualize descriptor matching using the OpenCV module in Python

I am trying to use OpenCV with Python. I wrote descriptor matching code (SIFT, SURF or ORB) against the C++ version of OpenCV 2.4, and I now want to port this code to Python. I found plenty of documents on how to use the OpenCV functions from C++, but for many of the OpenCV functions in Python I could not find out how to use them. My current problem is that I don't know how to use the C++ function "drawMatches" from Python. I found cv2.DRAW_MATCHES_FLAGS_DEFAULT, but I have no idea how to use it. Here is my Python matching code using ORB descriptors:

    import cv2

    im1 = cv2.imread(r'C:\boldt.jpg')
    im2 = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
    im3 = cv2.imread(r'C:\boldt_resize50.jpg')
    im4 = cv2.cvtColor(im3, cv2.COLOR_BGR2GRAY)

    orbDetector2 = cv2.FeatureDetector_create("ORB")
    orbDescriptorExtractor2 = cv2.DescriptorExtractor_create("ORB")
    orbDetector4 = cv2.FeatureDetector_create("ORB")
    orbDescriptorExtractor4 = cv2.DescriptorExtractor_create("ORB")

    keypoints2 = orbDetector2.detect(im2)
    (keypoints2, descriptors2) = orbDescriptorExtractor2.compute(im2, keypoints2)
    keypoints4 = orbDetector4.detect(im4)
    (keypoints4, descriptors4) = orbDescriptorExtractor4.compute(im4, keypoints4)

    matcher = cv2.DescriptorMatcher_create('BruteForce-Hamming')
    raw_matches = matcher.match(descriptors2, descriptors4)

    img_matches = cv2.DRAW_MATCHES_FLAGS_DEFAULT(im2, keypoints2, im4, keypoints4, raw_matches)
    cv2.namedWindow("Match")
    cv2.imshow("Match", img_matches)

The error message for the line `img_matches = cv2.DRAW_MATCHES_FLAGS_DEFAULT(im2, keypoints2, im4, keypoints4, raw_matches)` is:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: 'long' object is not callable

I have spent a lot of time searching for documentation and examples of using OpenCV functions from Python. However, I am very frustrated because there is very little information on using OpenCV functions in Python. It would be extremely helpful if someone could point me to documentation on how to use each function of the OpenCV module in Python. I appreciate your time and help.

+8
python image-processing opencv
3 answers

You can visualize the feature matching in Python as follows. Note the use of the scipy library.

    # matching features of two images
    import cv2
    import sys
    import scipy as sp

    if len(sys.argv) < 3:
        print 'usage: %s img1 img2' % sys.argv[0]
        sys.exit(1)

    img1_path = sys.argv[1]
    img2_path = sys.argv[2]

    img1 = cv2.imread(img1_path, cv2.CV_LOAD_IMAGE_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.CV_LOAD_IMAGE_GRAYSCALE)

    detector = cv2.FeatureDetector_create("SURF")
    descriptor = cv2.DescriptorExtractor_create("BRIEF")
    matcher = cv2.DescriptorMatcher_create("BruteForce-Hamming")

    # detect keypoints
    kp1 = detector.detect(img1)
    kp2 = detector.detect(img2)

    print '#keypoints in image1: %d, image2: %d' % (len(kp1), len(kp2))

    # descriptors
    k1, d1 = descriptor.compute(img1, kp1)
    k2, d2 = descriptor.compute(img2, kp2)

    print '#keypoints in image1: %d, image2: %d' % (len(d1), len(d2))

    # match the keypoints
    matches = matcher.match(d1, d2)

    # visualize the matches
    print '#matches:', len(matches)
    dist = [m.distance for m in matches]

    print 'distance: min: %.3f' % min(dist)
    print 'distance: mean: %.3f' % (sum(dist) / len(dist))
    print 'distance: max: %.3f' % max(dist)

    # threshold: half the mean
    thres_dist = (sum(dist) / len(dist)) * 0.5

    # keep only the reasonable matches
    sel_matches = [m for m in matches if m.distance < thres_dist]

    print '#selected matches:', len(sel_matches)

    # #####################################
    # visualization of the matches
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    view = sp.zeros((max(h1, h2), w1 + w2, 3), sp.uint8)
    view[:h1, :w1, 0] = img1       # left panel: first image
    view[:h2, w1:, 0] = img2       # right panel: second image
    view[:, :, 1] = view[:, :, 0]  # copy the grey channel into G and B
    view[:, :, 2] = view[:, :, 0]

    for m in sel_matches:
        # draw a line between each pair of matched keypoints,
        # offsetting the second image's x coordinate by w1
        # print m.queryIdx, m.trainIdx, m.distance
        color = tuple([sp.random.randint(0, 255) for _ in xrange(3)])
        cv2.line(view,
                 (int(k1[m.queryIdx].pt[0]), int(k1[m.queryIdx].pt[1])),
                 (int(k2[m.trainIdx].pt[0] + w1), int(k2[m.trainIdx].pt[1])),
                 color)

    cv2.imshow("view", view)
    cv2.waitKey()
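Note that this answer targets the OpenCV 2.4 Python API. On OpenCV 3.x or 4.x, cv2.FeatureDetector_create, cv2.DescriptorExtractor_create and cv2.CV_LOAD_IMAGE_GRAYSCALE are no longer exposed, and SURF has moved to opencv_contrib. Here is a rough sketch of the same pipeline under that assumption, using ORB and placeholder file names (the names are not from the original answer):

    # Rough sketch for OpenCV 3.x/4.x; 'left.png' and 'right.png' are placeholders
    import cv2

    img1 = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)   # replaces CV_LOAD_IMAGE_GRAYSCALE
    img2 = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()                                 # replaces the *_create factory functions
    kp1, d1 = orb.detectAndCompute(img1, None)
    kp2, d2 = orb.detectAndCompute(img2, None)

    # ORB descriptors are binary, so match with Hamming distance
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(d1, d2), key=lambda m: m.distance)

    # drawMatches is exposed in the Python bindings from 3.0 onwards;
    # passing None lets OpenCV allocate the output image
    view = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None)
    cv2.imshow("view", view)
    cv2.waitKey()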
+14
Dec 28 '12 at 12:27

I also wrote something myself that only uses the OpenCV Python interface, without scipy. drawMatches is only exposed in the Python bindings of OpenCV 3.0.0 and is not part of OpenCV 2, which is what I'm currently using. Even though I'm late to the party, here is my own implementation that mimics drawMatches as closely as possible.

I have provided my own images, where one is the classic camera man test image and the other is the same image rotated 55 degrees counterclockwise.

The basic premise of what I wrote is that I allocate an output RGB image whose number of rows is the maximum of the two images, so that both images fit in the output, and whose number of columns is simply the sum of the two images' columns. I place each image in its respective spot, then loop through all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) coordinates. I draw a circle at each of the detected locations, then draw a line connecting these circles together.

Bear in mind that the detected keypoints in the second image are expressed in its own coordinate system. If you want to place them in the final output image, you need to offset the column coordinate by the number of columns of the first image, so that the column coordinate is expressed in the coordinate system of the output image.

Without further ado:

    import numpy as np
    import cv2

    def drawMatches(img1, kp1, img2, kp2, matches):
        """
        My own implementation of cv2.drawMatches as OpenCV 2.4.9
        does not have this function available but it's supported in
        OpenCV 3.0.0

        This function takes in two images with their associated
        keypoints, as well as a list of DMatch data structures (matches)
        that contains which keypoints matched in which images.

        An image will be produced where a montage is shown with
        the first image followed by the second image beside it.

        Keypoints are delineated with circles, while lines are connected
        between matching keypoints.

        img1,img2 - Grayscale images
        kp1,kp2 - Detected list of keypoints through any of the OpenCV
                  keypoint detection algorithms
        matches - A list of matches of corresponding keypoints through any
                  OpenCV keypoint matching algorithm
        """

        # Create a new output image that concatenates the two images together
        # (a.k.a.) a montage
        rows1 = img1.shape[0]
        cols1 = img1.shape[1]
        rows2 = img2.shape[0]
        cols2 = img2.shape[1]

        out = np.zeros((max([rows1, rows2]), cols1 + cols2, 3), dtype='uint8')

        # Place the first image to the left
        out[:rows1, :cols1, :] = np.dstack([img1, img1, img1])

        # Place the next image to the right of it
        out[:rows2, cols1:cols1+cols2, :] = np.dstack([img2, img2, img2])

        # For each pair of points we have between both images
        # draw circles, then connect a line between them
        for mat in matches:

            # Get the matching keypoints for each of the images
            img1_idx = mat.queryIdx
            img2_idx = mat.trainIdx

            # x - columns
            # y - rows
            (x1, y1) = kp1[img1_idx].pt
            (x2, y2) = kp2[img2_idx].pt

            # Draw a small circle at both co-ordinates
            # radius 4
            # colour blue
            # thickness = 1
            cv2.circle(out, (int(x1), int(y1)), 4, (255, 0, 0), 1)
            cv2.circle(out, (int(x2) + cols1, int(y2)), 4, (255, 0, 0), 1)

            # Draw a line in between the two points
            # thickness = 1
            # colour blue
            cv2.line(out, (int(x1), int(y1)), (int(x2) + cols1, int(y2)), (255, 0, 0), 1)

        # Show the image
        cv2.imshow('Matched Features', out)
        cv2.waitKey(0)
        cv2.destroyAllWindows()



To illustrate this, here are two images that I used:

[Image: cameraman.png, the original test image]

[Image: cameraman_rot55.png, the same image rotated 55 degrees counterclockwise]

I used the OpenCV ORB detector to detect the keypoints, and the Hamming distance (cv2.NORM_HAMMING) as the similarity measure, since ORB is a binary descriptor. As such:

    import numpy as np
    import cv2

    img1 = cv2.imread('cameraman.png', 0)        # Original image - read as grayscale
    img2 = cv2.imread('cameraman_rot55.png', 0)  # Rotated image - read as grayscale

    # Create ORB detector with 1000 keypoints with a scaling pyramid factor
    # of 1.2
    orb = cv2.ORB(1000, 1.2)

    # Detect keypoints of original image
    (kp1, des1) = orb.detectAndCompute(img1, None)

    # Detect keypoints of rotated image
    (kp2, des2) = orb.detectAndCompute(img2, None)

    # Create matcher
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Do matching
    matches = bf.match(des1, des2)

    # Sort the matches based on distance.  Least distance
    # is better
    matches = sorted(matches, key=lambda val: val.distance)

    # Show only the top 10 matches
    drawMatches(img1, kp1, img2, kp2, matches[:10])



This is the image I get:

[Image: side-by-side montage with the top 10 matched keypoints connected by lines]

+9
Oct 07 '14 at 15:59

As the error message says, DRAW_MATCHES_FLAGS_DEFAULT is of type "long". It is a constant defined by the cv2 module, not a function. Unfortunately, the function you want, drawMatches, only exists in the OpenCV C++ interface.
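That said, if upgrading is an option, OpenCV 3.0 and later do expose cv2.drawMatches in the Python bindings. A minimal sketch of the call, assuming the images, keypoints and raw_matches were computed exactly as in the question's code:

    # Minimal sketch, assuming OpenCV >= 3.0 and the variables from the question
    img_matches = cv2.drawMatches(im2, keypoints2, im4, keypoints4,
                                  raw_matches, None)  # None: let OpenCV allocate the output
    cv2.imshow("Match", img_matches)
    cv2.waitKey(0)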

+2


