What is the best image scaling algorithm (in terms of quality)?

I want to know which algorithm is best for shrinking a bitmap. By "best" I mean the one that gives the nicest-looking results. I know about bicubic, but is there anything better? For example, I've heard from a few people that Adobe Lightroom has some kind of proprietary algorithm that produces better results than the standard bicubic I've been using. Unfortunately, I want to use such an algorithm in my own software, so Adobe's carefully guarded trade secrets won't do.

Edit:

I checked Paint.NET and, to my surprise, it seems that supersampling is better than bicubic when reducing image size. This makes me wonder whether interpolation algorithms are the way to go at all.

It also reminded me of an algorithm I "invented" myself but never implemented. I suppose it has a name too (something this trivial cannot be my idea alone), but I could not find it among the popular algorithms. Supersampling was the closest.

The idea is this: for every pixel in the target image, compute where it lands in the source image. It will probably overlap one or more source pixels. It is then possible to find out the areas and colors of those source pixels. To get the color of the target pixel, you simply average those colors, using the overlapped areas as "weights". So, if a target pixel covered 1/3 of a yellow source pixel and 1/4 of a green one, I would get (1/3 · yellow + 1/4 · green) / (1/3 + 1/4).

This would naturally be computationally intensive, but it should be about as close to the ideal as you can get, no?
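For what it's worth, here is a minimal sketch of that weighted-area averaging in Python with NumPy, for a single-channel image; the function name and the nested-loop structure are my own illustration, not a reference implementation:

```python
import numpy as np

def area_average_downscale(src, dst_h, dst_w):
    """Downscale a 2-D grayscale array by weighted area averaging:
    each target pixel is the mean of the source pixels it covers,
    weighted by the covered area, exactly as described above."""
    src_h, src_w = src.shape
    out = np.empty((dst_h, dst_w))
    ys, xs = src_h / dst_h, src_w / dst_w  # size of a target pixel in source units
    for dy in range(dst_h):
        y0, y1 = dy * ys, (dy + 1) * ys
        for dx in range(dst_w):
            x0, x1 = dx * xs, (dx + 1) * xs
            total = weight = 0.0
            for sy in range(int(y0), int(np.ceil(y1))):
                hy = min(y1, sy + 1) - max(y0, sy)      # vertical overlap
                for sx in range(int(x0), int(np.ceil(x1))):
                    wx = min(x1, sx + 1) - max(x0, sx)  # horizontal overlap
                    total += src[sy, sx] * hy * wx
                    weight += hy * wx
            out[dy, dx] = total / weight
    return out
```

For integer scale factors every overlap is 1 and this reduces to plain block averaging; the fractional weights only matter when the factor is not a whole number.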

Is there a name for this algorithm?

+46
algorithm image resize
Dec 21 '08 at 21:40
6 answers

Unfortunately, I can’t find a link to the original survey, but as Hollywood cinematographers moved from film to digital images, this question came up a lot, so someone (maybe SMPTE, maybe the ASC) gathered a bunch of professional cinematographers and showed them footage that had been rescaled with a bunch of different algorithms. The consensus among those professionals, looking at huge projections, was that Mitchell (also known as a high-quality Catmull-Rom) is best for scaling up and sinc is best for scaling down. But sinc is a theoretical filter that goes off to infinity and so cannot be fully implemented, so I don't know what they actually meant by "sinc"; it probably refers to a truncated version. Lanczos is one of several practical variants of sinc that tries to improve on mere truncation, and it is probably the best default choice for downscaling still images. But, as usual, it depends on the image and what you want: shrinking a line drawing to preserve its lines is, for example, a case where you may prefer to preserve edges in a way that would be undesirable when shrinking a photo of flowers.

There is a good example of the results of the various algorithms at Cambridge in Colour.

The fxguide people have put together a lot of information on scaling algorithms (along with much else on compositing and other image processing) that is worth a look. They also include test images that may be useful for running your own tests.

ImageMagick now has an extensive guide to resampling filters if you really want to go into it.

Paradoxically, there is more debate about downscaling, which in theory can be done perfectly since you are only throwing information away, than about upscaling, where you are trying to add information that does not exist. But start with Lanczos.
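For concreteness, the Lanczos filter mentioned above is just a truncated, re-windowed sinc; here is a sketch of the kernel in Python (the windowed-sinc definition is standard, the function name is mine):

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x / a) for |x| < a, else 0.
    np.sinc is the normalized sinc, sin(pi x) / (pi x), which is
    exactly the form the Lanczos filter uses."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)
```

A 2-D resampler applies this separably, once per axis, with a = 2 or a = 3 being the usual choices.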

+51
May 30 '11 at 2:49

There is Lanczos sampling, which is slower than bicubic but produces higher-quality images.
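If you only need to apply it rather than implement it, Pillow ships a Lanczos filter; a quick sketch (the file names are placeholders, and on Pillow older than 9.1 the constant is spelled Image.LANCZOS):

```python
from PIL import Image

img = Image.open("photo.jpg")  # placeholder input file
# LANCZOS is slower than BICUBIC but usually gives cleaner downscales.
small = img.resize((img.width // 4, img.height // 4), Image.Resampling.LANCZOS)
small.save("photo_small.jpg")
```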

+16
Dec 21 '08 at 21:48

(Bi-)linear and (bi-)cubic resampling are not just ugly but horribly incorrect when downscaling by a factor smaller than 1/2. They result in very bad aliasing, akin to what you would get if you downscaled by a factor of 1/2 and then used nearest-neighbor sampling.

Personally, I would recommend (area-)averaging the samples for most downsampling tasks. It is very simple, fast, and near-optimal. Gaussian resampling (with the radius chosen proportional to the reciprocal of the factor, e.g. radius 5 for downsampling by 1/5) may give better results at a bit more computational cost, and it is more mathematically sound.

One possible reason to use Gaussian resampling is that, unlike most other algorithms, it works correctly (does not introduce artifacts/aliasing) for both upsampling and downsampling, as long as you choose a radius appropriate to the resampling factor. Otherwise, to support both directions you need two separate algorithms: area averaging for downsampling (which degrades to nearest-neighbor for upsampling) and something like (bi-)cubic for upsampling (which degrades to nearest-neighbor for downsampling). One way of seeing this nice property of Gaussian resampling mathematically is that a Gaussian with a very large radius approximates area averaging, and a Gaussian with a very small radius approximates (bi-)linear interpolation.
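A sketch of that Gaussian approach for an integer factor, using SciPy's gaussian_filter (a real function); the sigma-to-factor ratio below is a common rule of thumb, not something the answer prescribes:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_downscale(src, factor):
    """Downsample a 2-D array by an integer factor: Gaussian blur,
    then keep every factor-th sample in each direction."""
    # The blur width scales with the factor, as described above;
    # sigma = factor / 2 is one common choice.
    blurred = gaussian_filter(src.astype(np.float64), sigma=factor / 2.0)
    return blurred[::factor, ::factor]
```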

+11
Oct 24 '10 at 4:23

I recently saw an article on Slashdot about Seam Carving, which might be worth a look.

Seam carving is an image resizing algorithm developed by Shai Avidan and Ariel Shamir. This algorithm resizes an image not by scaling or cropping, but by intelligently removing pixels from (or adding pixels to) the image that carry little importance.
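To make the "intelligently removing pixels" part concrete, here is a compressed sketch of removing one vertical seam from a grayscale NumPy image; the gradient-magnitude energy is the simplest common choice, not necessarily the paper's exact formulation:

```python
import numpy as np

def remove_one_vertical_seam(img):
    """Remove the lowest-energy vertical seam from a 2-D grayscale image."""
    h, w = img.shape
    # Energy: gradient magnitude; low energy = "little importance".
    energy = np.abs(np.gradient(img, axis=0)) + np.abs(np.gradient(img, axis=1))
    # Dynamic programming: cheapest seam cost ending at each pixel.
    cost = energy.copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        up = cost[y - 1]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, up), right)
    # Backtrack from the cheapest bottom pixel, moving at most one
    # column left or right per row.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    # Drop the seam pixel from each row.
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1)
```

Repeating this (or its horizontal counterpart) shrinks the image one pixel at a time while preserving the regions with the most detail.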

+7
Dec 21 '08 at 22:20

The algorithm you describe is called linear interpolation. It is one of the fastest algorithms, but it does not give the best image quality.

+3
Sep 18 '09 at 17:55

Is there a name for this algorithm?

In the literature it may be referred to as "box" or "window" resampling. It is actually less computationally expensive than you might think.

It can also be used to produce an intermediate bitmap that is subsequently resized with bicubic interpolation, to avoid aliasing when downscaling by more than a factor of 2.
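A sketch of that two-stage scheme with Pillow, whose BOX filter is exactly this area averaging (the file name and target factor are illustrative):

```python
from PIL import Image

img = Image.open("large.png")  # placeholder input file
target = (img.width // 5, img.height // 5)
# Stage 1: box (area-average) reduce to roughly 2x the target size,
# removing the high frequencies that would otherwise alias.
intermediate = img.resize((target[0] * 2, target[1] * 2), Image.Resampling.BOX)
# Stage 2: bicubic for the final, smooth step down.
small = intermediate.resize(target, Image.Resampling.BICUBIC)
```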

0
Jun 08 '18


