You are creating dst as a Mat with the same size as src . Also, when you call resize you are passing in both the destination size and the fx/fy scale factors; you should pass only one of the two:
Mat src = imread(...);
Mat dst;
resize(src, dst, Size(), 2, 2, INTER_CUBIC);          // upscale 2x
// or
resize(src, dst, Size(1024, 768), 0, 0, INTER_CUBIC); // resize to 1024x768 resolution
UPDATE: from the OpenCV documentation:
Scaling is just resizing of the image. OpenCV comes with a function cv2.resize() for this purpose. The size of the image can be specified manually, or you can specify the scaling factor. Different interpolation methods are used. Preferable interpolation methods are cv2.INTER_AREA for shrinking and cv2.INTER_CUBIC (slow) & cv2.INTER_LINEAR for zooming. By default, the interpolation method cv2.INTER_LINEAR is used for all resizing purposes. You can resize an input image with either of the following methods:
import cv2
import numpy as np

img = cv2.imread('messi5.jpg')
res = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
In addition, I tried both methods for shrinking in Visual C++, and cv::INTER_AREA works noticeably faster than cv::INTER_CUBIC (as the OpenCV documentation suggests):
cv::Mat img_dst;
cv::resize(img, img_dst, cv::Size(640, 480), 0, 0, cv::INTER_AREA);
cv::namedWindow("Contours", CV_WINDOW_AUTOSIZE);
cv::imshow("Contours", img_dst);
</cv::imshow>