Difficult to stitch images with OpenCV

I am currently working on stitching images using OpenCV 2.3.1 in Visual Studio 2010, but I am having problems.

Description of the problem: I am trying to write code for stitching several images obtained from several cameras (about 3~4), i.e., the code should keep stitching frames until I ask it to stop.

The following is what I have done so far: (For simplicity, I replaced part of the code with just a few words)

1. Reading frames (images) from 2 cameras (currently I'm just working on 2 cameras).
2. Feature detection and descriptor calculation (SURF).
3. Feature matching using FlannBasedMatcher.
4. Removing outliers and calculating the homography from the inliers using RANSAC.
5. Warping one of the two images.

For step 5. I followed the answer in the following thread and simply changed some parameters: Stitching 2 images in opencv

However, the result is terrible. I uploaded it to YouTube (unlisted, so only those with the link can see it):

http://youtu.be/Oy5z_7LeaMk

My code is shown below: (Only important parts are shown)

    VideoCapture cam1, cam2;
    cam1.open(0);
    cam2.open(1);
    while(1)
    {
        Mat frm1, frm2;
        cam1 >> frm1;
        cam2 >> frm2;

        //(SURF detection, descriptor calculation
        //and matching using FlannBasedMatcher)

        double max_dist = 0;
        double min_dist = 100;

        //-- Quick calculation of max and min distances between keypoints
        for( int i = 0; i < descriptors_1.rows; i++ )
        {
            double dist = matches[i].distance;
            if( dist < min_dist ) min_dist = dist;
            if( dist > max_dist ) max_dist = dist;
        }

        //(Draw only "good" matches, i.e. those whose distance is less than 3*min_dist)

        vector<Point2f> frame1;
        vector<Point2f> frame2;
        for( int i = 0; i < good_matches.size(); i++ )
        {
            //-- Get the keypoints from the good matches
            frame1.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );
            frame2.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );
        }

        Mat H = findHomography( Mat(frame1), Mat(frame2), CV_RANSAC );
        cout << "Homography: " << H << endl;

        /* warp the image */
        Mat warpImage2;
        warpPerspective(frm2, warpImage2, H, Size(frm2.cols, frm2.rows), INTER_CUBIC);

        Mat final(Size(frm2.cols*3 + frm1.cols, frm2.rows), CV_8UC3);
        Mat roi1(final, Rect(frm1.cols, 0, frm1.cols, frm1.rows));
        Mat roi2(final, Rect(2*frm1.cols, 0, frm2.cols, frm2.rows));
        warpImage2.copyTo(roi2);
        frm1.copyTo(roi1);
        imshow("final", final);
    }

What else should I do to make stitching better?

Furthermore, is it wise to use a fixed homography matrix instead of calculating it every frame? I mean specifying the angle and offset between the two cameras myself, in order to build a homography matrix that does what I want.

Thanks.:)

2 answers

It seems you are going about this sensibly, but if you have access to both cameras and they remain stationary relative to each other, then calibrating offline and simply applying the transformation online will make your application more efficient.

Note that you are using OpenCV's findHomography function. From the documentation, this function:

 Finds a perspective transformation between two planes. 

However, your points are not confined to a single plane, as they come from a 3D scene. If you want to calibrate offline, you could capture a chessboard with both cameras, and the detected corners can be used in this function.

Alternatively, you may want to look into the fundamental matrix, which can be calculated with a similar function (findFundamentalMat). This matrix describes the relative position of the cameras, but some work (and a good textbook) is required to extract it.

If you can find it, I highly recommend taking a look at Part II: "Two-View Geometry" in Richard Hartley and Andrew Zisserman's book, Multiple View Geometry in Computer Vision, which goes through the process in detail.


Recently I have been working on image registration. My algorithm takes two images, computes SURF features, finds matches, estimates the homography matrix, and then stitches both images together. I did this with the following code:

    void stich(Mat base, Mat target, Mat homography, Mat& panorama)
    {
        Mat corners1(1, 4, CV_32F);
        Mat corners2(1, 4, CV_32F);
        Mat corners(1, 4, CV_32F);
        vector<Mat> planes;

        /* compute corners of warped image */
        corners1.at<float>(0,0) = 0;            corners2.at<float>(0,0) = 0;
        corners1.at<float>(0,1) = 0;            corners2.at<float>(0,1) = target.rows;
        corners1.at<float>(0,2) = target.cols;  corners2.at<float>(0,2) = 0;
        corners1.at<float>(0,3) = target.cols;  corners2.at<float>(0,3) = target.rows;
        planes.push_back(corners1);
        planes.push_back(corners2);
        merge(planes, corners);

        perspectiveTransform(corners, corners, homography);

        /* compute size of resulting image and allocate memory */
        double x_start = min( min( (double)corners.at<Vec2f>(0,0)[0], (double)corners.at<Vec2f>(0,1)[0]), 0.0);
        double x_end   = max( max( (double)corners.at<Vec2f>(0,2)[0], (double)corners.at<Vec2f>(0,3)[0]), (double)base.cols);
        double y_start = min( min( (double)corners.at<Vec2f>(0,0)[1], (double)corners.at<Vec2f>(0,2)[1]), 0.0);
        double y_end   = max( max( (double)corners.at<Vec2f>(0,1)[1], (double)corners.at<Vec2f>(0,3)[1]), (double)base.rows);

        /* create image with the same channels and depth as target, and proper size */
        panorama.create(Size(x_end - x_start + 1, y_end - y_start + 1), target.depth());
        planes.clear();
        /* planes should have the same number of channels as target */
        for (int i = 0; i < target.channels(); i++) {
            planes.push_back(panorama);
        }
        merge(planes, panorama);

        // create translation matrix in order to copy both images to the correct places
        Mat T;
        T = Mat::zeros(3, 3, CV_64F);
        T.at<double>(0,0) = 1;
        T.at<double>(1,1) = 1;
        T.at<double>(2,2) = 1;
        T.at<double>(0,2) = -x_start;
        T.at<double>(1,2) = -y_start;

        // copy base image to the correct position within the output image
        warpPerspective(base, panorama, T, panorama.size(), INTER_LINEAR | CV_WARP_FILL_OUTLIERS);

        // change homography to take the necessary translation into account
        gemm(T, homography, 1, T, 0, T);

        // warp second image and copy it to the output image
        warpPerspective(target, panorama, T, panorama.size(), INTER_LINEAR);

        // tidy
        corners.release();
        T.release();
    }

If you have any questions, just ask.


Source: https://habr.com/ru/post/920550/

