cvFindStereoCorrespondenceBM( frame1r, frame2r, disp, BMState );  /* compute the disparity map from the rectified pair */

/* cvShowImage("camera1", frame1); cvShowImage("camera2", frame2); */
// cvConvertScale( disp, disp, 16, 0 );

cvNormalize( disp, vdisp, 0, 256, CV_MINMAX );  /* rescale disparities for display */
cvShowImage( "disparity", vdisp );

cvReprojectImageTo3D( disp, Image3D, &_Q );     /* Image3D: 3-channel float image of (X, Y, Z) points */
cvShowImage( "depthmap", Image3D );
I hope this piece of code helps you. The explanation is as follows: after rectifying the images from the left and right cameras and initializing BMState, I pass them to cvFindStereoCorrespondenceBM to compute the disparity image. Then we allocate a 3-channel image, Image3D, to hold the 3D points. Using the OpenCV function cvReprojectImageTo3D, which takes the Q matrix obtained from stereo rectification, we get the set of three-dimensional points corresponding to the 2D disparity image.
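In case it is unclear where the Q matrix comes from, here is a minimal sketch of how it can be produced with cvStereoRectify after stereo calibration. The names _M1, _D1, _M2, _D2, _R, _T and imageSize are assumptions here (outputs of a prior cvStereoCalibrate call and the frame size); only _Q is used by the snippet above.

/* Sketch, not the exact code from the answer: obtain the reprojection
   matrix Q used by cvReprojectImageTo3D. */
double q[16], r1[9], r2[9], p1[12], p2[12];
CvMat _Q  = cvMat(4, 4, CV_64F, q);
CvMat _R1 = cvMat(3, 3, CV_64F, r1);
CvMat _R2 = cvMat(3, 3, CV_64F, r2);
CvMat _P1 = cvMat(3, 4, CV_64F, p1);
CvMat _P2 = cvMat(3, 4, CV_64F, p2);

/* _M1/_D1 and _M2/_D2 are the camera matrices and distortion coefficients,
   _R/_T the rotation and translation between the cameras (assumed to come
   from cvStereoCalibrate); imageSize is the size of the input frames. */
cvStereoRectify( &_M1, &_M2, &_D1, &_D2, imageSize, &_R, &_T,
                 &_R1, &_R2, &_P1, &_P2, &_Q, CV_CALIB_ZERO_DISPARITY );

The same _Q is then passed by address to cvReprojectImageTo3D, as in the snippet above.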