Convert disparity maps to 3D points

I have a disparity map of an image. I need to convert it to a set of three-dimensional points and normals. How can I do this, and is there an existing implementation that can do it?

5 answers

GNU Triangulated Surface Library?

When I did this before, I had a depth map (or a disparity map, if you prefer) and, knowing the original camera calibration, I was able to re-project the points back into R3.

Knowing the neighborhood of each point (from its original neighboring pixels), it is then pretty trivial to create a basic triangulation connecting them.

(If you do not know this, you will have to try Delaunay triangulation or some other, more advanced algorithm ...)

Make sure the vertex order (winding) is correct for each triangle, so that all the normals point the correct way / consistently.
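A minimal sketch of both steps in C++, assuming a simple pinhole model with focal length f, baseline B, and principal point (cx, cy) taken from the calibration (all names here are illustrative, not from any particular library):

#include <opencv2/core.hpp>
#include <array>
#include <vector>

struct Mesh {
    std::vector<cv::Point3f> points;
    std::vector<std::array<int, 3>> triangles; // indices into points
};

// Back-project every valid disparity pixel with Z = fB/d, then connect
// neighboring pixels into two triangles per grid cell.
Mesh disparityToMesh(const cv::Mat& disp, float f, float B, float cx, float cy)
{
    Mesh mesh;
    const int w = disp.cols, h = disp.rows;
    std::vector<int> index(w * h, -1); // pixel -> point index (-1 = no point)

    for (int r = 0; r < h; ++r)
        for (int c = 0; c < w; ++c) {
            float d = disp.at<float>(r, c);
            if (d <= 0.f) continue;        // skip invalid disparities
            float Z = f * B / d;
            float X = (c - cx) * Z / f;
            float Y = (r - cy) * Z / f;
            index[r * w + c] = (int)mesh.points.size();
            mesh.points.emplace_back(X, Y, Z);
        }

    // Two triangles per grid cell, both wound the same way so that the
    // face normals all point consistently.
    for (int r = 0; r + 1 < h; ++r)
        for (int c = 0; c + 1 < w; ++c) {
            int i00 = index[r * w + c],       i01 = index[r * w + c + 1];
            int i10 = index[(r + 1) * w + c], i11 = index[(r + 1) * w + c + 1];
            if (i00 >= 0 && i10 >= 0 && i01 >= 0) mesh.triangles.push_back({i00, i10, i01});
            if (i01 >= 0 && i10 >= 0 && i11 >= 0) mesh.triangles.push_back({i01, i10, i11});
        }
    return mesh;
}

Per-face normals then come from the cross product of two edge vectors of each triangle, and per-vertex normals from averaging the faces around each vertex.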

MeshLab is very handy for any further mesh post-processing.

cvFindStereoCorrespondenceBM(frame1r, frame2r, disp, BMState); // disparity from the rectified pair
/* cvShowImage("camera1", frame1); cvShowImage("camera2", frame2); */
// cvConvertScale(disp, disp, 16, 0);
cvNormalize(disp, vdisp, 0, 256, CV_MINMAX); // rescale disparity for display
cvShowImage("disparity", vdisp);
cvReprojectImageTo3D(disp, Image3D, &_Q); // back-project to 3D using the Q matrix
cvShowImage("depthmap", Image3D);

I hope this piece of code helps you. The explanation is as follows: after rectifying the images from the right and left cameras and setting up BMState, I pass them to cvFindStereoCorrespondenceBM to find the disparity image. We then define a three-channel matrix, Image3D, to hold the 3D points. Using the OpenCV function cvReprojectImageTo3D, and passing it the Q matrix obtained from stereo calibration, we get the set of three-dimensional points corresponding to this 2D image.
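For reference, a rough equivalent of the same pipeline with the modern OpenCV C++ API might look like the sketch below (variable names are illustrative; Q is the 4x4 reprojection matrix produced by cv::stereoRectify):

#include <opencv2/calib3d.hpp>
#include <opencv2/highgui.hpp>

// frame1r / frame2r: rectified 8-bit grayscale left and right images.
cv::Mat reprojectPair(const cv::Mat& frame1r, const cv::Mat& frame2r, const cv::Mat& Q)
{
    cv::Mat disp, disp8, xyz;
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21); // numDisparities, blockSize
    bm->compute(frame1r, frame2r, disp);  // 16-bit fixed-point disparity (scaled by 16)
    disp.convertTo(disp8, CV_8U, 255.0 / (64 * 16.0)); // rescale for display
    cv::imshow("disparity", disp8);
    cv::reprojectImageTo3D(disp, xyz, Q, true); // 3-channel float image of (X, Y, Z)
    return xyz;
}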

Here is a calculation which may help you:

% Z = fB/d
% where
%   Z = distance along the camera Z axis
%   f = focal length (in pixels)
%   B = baseline (in metres)
%   d = disparity (in pixels)
%
% After Z is determined, X and Y can be calculated using the usual
% projective camera equations:
%
%   X = uZ/f
%   Y = vZ/f
% where
%   u and v are the pixel location in the 2D image
%   X, Y, Z are the real 3D position
%
% Note: u and v are not the same as row and column; you must account for
% the image center. You can get the image center using the
% triclopsGetImageCenter() function. Then you find u and v by:
%
%   u = col - centerCol
%   v = row - centerRow
%
% Note: if u, v, f, and d are all in pixels and X, Y, Z are all in metres,
% the units will always work out, i.e. pixel/pixel = no-unit ratio = m/m.
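Translated directly into code, these formulas become a small helper function (a sketch; f, B, and the image center are assumed to come from your calibration):

struct Point3 { double X, Y, Z; };

// Convert one pixel's disparity to a 3D point using Z = fB/d,
// X = uZ/f, Y = vZ/f, with u and v measured from the image center.
Point3 pixelTo3D(int row, int col, double d,   // pixel location, disparity (pixels)
                 double f,                     // focal length (pixels)
                 double B,                     // baseline (metres)
                 double centerRow, double centerCol)
{
    double u = col - centerCol; // account for the image center
    double v = row - centerRow;
    double Z = f * B / d;
    return { u * Z / f, v * Z / f, Z };
}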

The disparity map gives you x, y, and f(z): the disparity is a function of depth. You need the camera calibration to know how to convert the disparity into z.
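As a quick sanity check with made-up calibration values: for f = 700 pixels, B = 0.1 m, and d = 35 pixels, Z = fB/d = 700 * 0.1 / 35 = 2 m.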

I have an image disparity map

An image disparity map will usually be completely flat ...

