Part of me feels like this is a silly question, sorry. Currently, my most accurate way to find the optimal scaling factor (the single factor applied to both width and height so the resized image hits a target pixel count while preserving the aspect ratio) is to iterate over candidate factors and pick the best one, but there must be a better way to do this.
Example:
    import cv2, numpy as np

    img = cv2.imread("arnold.jpg")
    img.shape[1]  # eg width = 700
    img.shape[0]  # eg height = 979
    # eg total pixels: 685,300

    TARGET_PIXELS = 100000
    MAX_FACTOR = 0.9
    STEP_FACTOR = 0.001

    iter_factor = STEP_FACTOR
    results = dict()
    while iter_factor < MAX_FACTOR:          # was MAX_RATIO (undefined name)
        img2 = cv2.resize(img, (0, 0), fx=iter_factor, fy=iter_factor)
        results[img2.shape[0] * img2.shape[1]] = iter_factor
        iter_factor += STEP_FACTOR           # was step_factor (undefined name)

    best_pixels = min(results, key=lambda x: abs(x - TARGET_PIXELS))
    best_factor = results[best_pixels]
    print(best_pixels)   # eg 99858
    print(best_factor)   # eg 0.382
I know the code above still has some issues, e.g. `results` is never checked for an existing key before it is overwritten, but I'm more interested in a different approach. The only thing my searching turned up was Lagrangian optimization, which I don't really understand and which seems overly complicated for such a simple task. Any ideas?
** EDIT AFTER ANSWER **

Following up with the code, in case anyone is interested in the answer:
import math, cv2, numpy as np
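The snippet above is truncated, so here is a minimal sketch of the closed-form approach (my reconstruction, not the original code). It relies on the fact that scaling both dimensions by a factor f multiplies the pixel count by f², so the factor that lands on the target is sqrt(target / current); cv2 is only referenced in a comment to keep the sketch dependency-free.

```python
import math

# Dimensions from the question's example image (700 x 979 = 685,300 px).
WIDTH, HEIGHT = 700, 979
TARGET_PIXELS = 100000

# Scaling width and height by f multiplies the area by f**2, so the
# exact factor is the square root of target / current pixel count.
factor = math.sqrt(TARGET_PIXELS / (WIDTH * HEIGHT))

# The resize itself would then be:
#   img2 = cv2.resize(img, (0, 0), fx=factor, fy=factor)
new_w, new_h = round(WIDTH * factor), round(HEIGHT * factor)

print(round(factor, 3))   # 0.382
print(new_w * new_h)      # 99858, within rounding of the target
```

Because the output dimensions are rounded to whole pixels, the result is only approximately the target; it cannot be hit exactly in general.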