Your code doesn't look complete, so it's hard to say exactly what the problem is.
In any case, the warped image can have a completely different size than the input image, so you will have to adjust the size parameter you pass to warpPerspective.
For example, try doubling the size:
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2 * _image->cols, 2 * _image->rows));
Edit:
To guarantee that the entire image fits inside the result, every corner of the original image must map to a point inside the resulting image. So simply calculate the warped position of each corner point and adjust the destination points accordingly.
In code, this looks like:
// calculate transformation
cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);

// calculate warped position of all corners
cv::Point3f a = M.inv() * cv::Point3f(0, 0, 1);
a = a * (1.0 / a.z);

cv::Point3f b = M.inv() * cv::Point3f(0, _image->rows, 1);
b = b * (1.0 / b.z);

cv::Point3f c = M.inv() * cv::Point3f(_image->cols, _image->rows, 1);
c = c * (1.0 / c.z);

cv::Point3f d = M.inv() * cv::Point3f(_image->cols, 0, 1);
d = d * (1.0 / d.z);

// to make sure all corners are in the image, every position must be > (0, 0)
float x = ceil(abs(min(min(a.x, b.x), min(c.x, d.x))));
float y = ceil(abs(min(min(a.y, b.y), min(c.y, d.y))));

// and also < (width, height)
float width = ceil(abs(max(max(a.x, b.x), max(c.x, d.x)))) + x;
float height = ceil(abs(max(max(a.y, b.y), max(c.y, d.y)))) + y;

// adjust target points accordingly
for (int i = 0; i < 4; i++) {
    points2D[i] += cv::Point2f(x, y);
}

// recalculate transformation
M = cv::getPerspectiveTransform(points2D, imagePoints);

// get result
cv::Mat result;
cv::warpPerspective(*_image, result, M, cv::Size(width, height), cv::WARP_INVERSE_MAP);