I also wrote something myself that uses only the OpenCV Python interface, without scipy. drawMatches is part of OpenCV 3.0.0 and is not part of OpenCV 2, which is what I'm currently using. Even though I'm late to the party, here is my own implementation that mimics drawMatches as closely as I can.
I've provided my own images: one is of a cameraman, and the other is the same image but rotated 55 degrees counterclockwise.
The basic premise of what I wrote is that I allocate an output RGB image where the number of rows is the maximum of the two images, so that both images fit in the output, and the number of columns is simply the sum of both images' columns. I place each image in its corresponding spot, then loop through all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) coordinates. I draw circles at each of the detected locations, then draw a line connecting these circles together.
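The montage construction described above can be sketched on its own with NumPy (the image sizes here are just small illustrative values):

```python
import numpy as np

# Two grayscale "images" of different sizes (toy values for illustration)
img1 = np.full((4, 6), 100, dtype='uint8')   # 4 rows, 6 cols
img2 = np.full((3, 5), 200, dtype='uint8')   # 3 rows, 5 cols

rows1, cols1 = img1.shape
rows2, cols2 = img2.shape

# Output canvas: max of the rows, sum of the columns, 3 channels
out = np.zeros((max(rows1, rows2), cols1 + cols2, 3), dtype='uint8')

# Replicate each grayscale image across the three channels
# and place the images side by side
out[:rows1, :cols1, :] = np.dstack([img1, img1, img1])
out[:rows2, cols1:cols1 + cols2, :] = np.dstack([img2, img2, img2])

print(out.shape)  # (4, 11, 3)
```

Any rows not covered by the shorter image simply stay black, since the canvas is initialized with zeros.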
Bear in mind that a detected keypoint in the second image is with respect to its own coordinate system. If you want to place it in the final output image, you need to offset the column coordinate by the number of columns of the first image, so that the column coordinate is relative to the coordinate system of the output image.
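As a tiny illustration of that offset (the keypoint location and image width below are made-up values):

```python
# Hypothetical keypoint location in the second image's own coordinates
x2, y2 = 40.0, 25.0

# Assumed width (number of columns) of the first image
cols1 = 320

# Position of that keypoint in the montage: shift the column by cols1
pt_out = (int(x2) + cols1, int(y2))
print(pt_out)  # (360, 25)
```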
Without further ado:
import numpy as np
import cv2

def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0

    This function takes in two images with their associated
    keypoints, as well as a list of DMatch data structure (matches)
    that contains which keypoints matched in which images.

    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.

    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.

    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint
              detection algorithms
    matches - A list of matches of corresponding keypoints through any
              OpenCV keypoint matching algorithm
    """

    # Create a new output image that concatenates the two images together
    # (a.k.a.) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    out = np.zeros((max([rows1,rows2]), cols1+cols2, 3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1,:] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:cols1+cols2,:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255, 0, 0), 1)

    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
To illustrate this, here are two images that I used:


I used OpenCV's ORB detector to detect the keypoints, and used the normalized Hamming distance as the distance measure for similarity, as ORB is a binary descriptor. Like so:
import numpy as np
import cv2

# Read in grayscale - the drawMatches function above expects grayscale images
img1 = cv2.imread('cameraman.png', 0)
This is the image I get:
