As Iwillnotexist-Idonotexist said, the first problem is the threshold you apply. Using a fixed threshold on the distance between descriptors does not work well, because some descriptors are much more discriminative than others. I think you will get better results with the ratio test proposed by D. Lowe in the SIFT paper; see section 7.1: http://cs.ubc.ca/~lowe/papers/ijcv04.pdf (there is a code sketch in the EDIT at the end).
The second problem is that you use BRISK to detect features in your images. There are bugs in the OpenCV implementation of the BRISK detector (you can check here: http://code.opencv.org/issues/3976 ), so try using a different FeatureDetector such as FAST, ORB, etc. (The BRISK descriptor extractor is fine, so you can keep using it.)
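For example, here is a minimal sketch of that combination, assuming the OpenCV 2.4-style C++ API; img1 and img2 are placeholders for your already-loaded cv::Mat images, and the other names are just examples:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

using namespace cv;

// ... inside your matching code, after loading img1 and img2:

// ORB (or "FAST", etc.) for detection, BRISK only for description
Ptr<FeatureDetector> detector = FeatureDetector::create("ORB");
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("BRISK");

std::vector<KeyPoint> keypoints1, keypoints2;
detector->detect(img1, keypoints1);
detector->detect(img2, keypoints2);

Mat imgDescriptors1, imgDescriptors2;
extractor->compute(img1, keypoints1, imgDescriptors1);
extractor->compute(img2, keypoints2, imgDescriptors2);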
I finished testing on your pictures, and I managed to get some results with various detector/descriptor combinations (keypoints without a match are drawn in yellow):
BRISK detector and descriptor:
- keypoints in the left image: 74
- keypoints in the right image: 86
- matches: 3 (even with the buggy detector I get a few matches)
ORB detector with BRISK descriptor:
- keypoints in the left image: 499
- keypoints in the right image: 500
- matches: 26
ORB detector and descriptor:
- keypoints in the left image: 841
- keypoints in the right image: 907
- matches: 43
All results were obtained using the ratio test to remove false matches. Hope this helps!
EDIT:
BruteForceMatcher<Hamming> matcher;
vector< vector<DMatch> > matches;
vector<DMatch> goodMatches;
matcher.knnMatch(imgDescriptors1, imgDescriptors2, matches, 2);
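The call above only fills matches; a minimal sketch of the ratio test that fills goodMatches from it could look like this (0.8 is the threshold suggested in Lowe's paper, tune it for your data):

// keep a match only when the best distance is clearly smaller than the second-best
for (size_t i = 0; i < matches.size(); ++i)
{
    if (matches[i].size() < 2)
        continue;
    if (matches[i][0].distance < 0.8f * matches[i][1].distance)
        goodMatches.push_back(matches[i][0]);
}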