OpenCV: warpPerspective for the whole image

I detect markers in images shot with my iPad. From the markers I want to compute the translation and rotation between the camera and each marker, and then warp the perspective of the images so that they look as if they were captured from directly above the markers.

I'm currently using:

 points2D.push_back(cv::Point2f(0, 0));
 points2D.push_back(cv::Point2f(50, 0));
 points2D.push_back(cv::Point2f(50, 50));
 points2D.push_back(cv::Point2f(0, 50));

 cv::Mat M = cv::getPerspectiveTransform(points2D, imagePoints);
 cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(_image->cols, _image->rows));

which gives these results (see the bottom right of each for the warpPerspective output):

[photo 1] [photo 2] [photo 3]

As you can see, the result image contains the recognized marker in its upper left corner. My problem is that the rest of the image is cropped away; I want to keep the whole image (without cropping) so that I can later detect other markers in it.

How can I do this? Should I use the rotation/translation vectors from solvePnP?

EDIT:

Unfortunately, enlarging the warped image does not help much, because the image is still translated so that the upper left corner of the marker is mapped to the upper left corner of the output image.

For example, when I doubled the output size using:

 cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows)); 

I got these images:

[photo 4] [photo 5]

1 answer

Your code doesn't seem complete, so it's hard to say what the problem is.

In any case, the warped image can have a completely different size than the input image, so you will have to adjust the size parameter you pass to warpPerspective.

For example, try doubling the size:

 cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));

Edit:

To ensure that the entire image ends up inside the output, every corner of the original image must be warped to lie inside the resulting image. So simply compute the warped position of each corner point and adjust the destination points accordingly.

To make this clearer, here is some example code:

 // calculate transformation
 cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);

 // calculate warped position of all corners
 cv::Point3f a = M.inv() * cv::Point3f(0, 0, 1);
 a = a * (1.0 / a.z);

 cv::Point3f b = M.inv() * cv::Point3f(0, _image->rows, 1);
 b = b * (1.0 / b.z);

 cv::Point3f c = M.inv() * cv::Point3f(_image->cols, _image->rows, 1);
 c = c * (1.0 / c.z);

 cv::Point3f d = M.inv() * cv::Point3f(_image->cols, 0, 1);
 d = d * (1.0 / d.z);

 // to make sure all corners are in the image, every position must be > (0, 0)
 float x = std::ceil(std::abs(std::min(std::min(a.x, b.x), std::min(c.x, d.x))));
 float y = std::ceil(std::abs(std::min(std::min(a.y, b.y), std::min(c.y, d.y))));

 // and also < (width, height)
 float width  = std::ceil(std::abs(std::max(std::max(a.x, b.x), std::max(c.x, d.x)))) + x;
 float height = std::ceil(std::abs(std::max(std::max(a.y, b.y), std::max(c.y, d.y)))) + y;

 // adjust target points accordingly
 for (int i = 0; i < 4; i++) {
     points2D[i] += cv::Point2f(x, y);
 }

 // recalculate transformation
 M = cv::getPerspectiveTransform(points2D, imagePoints);

 // get result
 cv::Mat result;
 cv::warpPerspective(*_image, result, M, cv::Size(width, height), cv::WARP_INVERSE_MAP);

Source: https://habr.com/ru/post/957087/

