This is somewhat of a follow-up to previous questions about the difference between imresize in MATLAB and cv::resize in OpenCV when using bicubic interpolation.
I was interested in knowing why the results differ. These are my findings (based on my understanding of the algorithms; please correct me if I am wrong).
Think of resizing an image as a planar transformation from an input image of size M-by-N to an output image of size scaledM-by-scaledN.
The problem is that the points do not necessarily fall on the discrete grid; therefore, to obtain the intensities of the pixels in the output image, we need to interpolate the values of some of the neighboring samples (this is usually performed in reverse order: for each output pixel, we find the corresponding non-integer point in the input space and interpolate around it).
This is where interpolation algorithms differ: in the choice of the size of the neighborhood, and in the weights assigned to each point in that neighborhood. The weighting can be first-order or higher (where the variable involved is the distance from the inverse-mapped non-integer sample to the discrete points on the original image grid). Typically, closer points are assigned higher weights.
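To make the reverse mapping concrete, here is a small Python sketch of my own (the exact grid-alignment convention varies slightly between libraries; the half-pixel-center formula below is an assumption about that convention, not taken from either implementation):

```python
# Sketch of the reverse mapping described above: for each output pixel we
# find the corresponding (generally non-integer) location in the input grid.
# Assumes the common "pixel center" alignment convention.
def map_to_input(x_out, scale):
    # scale = output_size / input_size
    return (x_out + 0.5) / scale - 0.5

# Upscaling a 4-sample signal to 8 samples: output pixel 3 maps to a
# non-integer input location, so its value must be interpolated.
print(map_to_input(3, 8 / 4))  # -> 1.25
```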
Looking at imresize in MATLAB, here are the weight functions for linear and cubic kernels:
function f = triangle(x)
    % or simply: 1-abs(x) for x in [-1,1]
    f = (1+x) .* ((-1 <= x) & (x < 0)) + ...
        (1-x) .* ((0 <= x) & (x <= 1));
end

function f = cubic(x)
    absx = abs(x);
    absx2 = absx.^2;
    absx3 = absx.^3;
    f = (1.5*absx3 - 2.5*absx2 + 1) .* (absx <= 1) + ...
        (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) .* ((1 < absx) & (absx <= 2));
end
(They basically return the interpolation weight of the sample based on how far it is from the interpolated point.)
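For reference, here is a Python/NumPy translation of those two kernels (my own sketch, not library code):

```python
import numpy as np

def triangle(x):
    # Linear (tent) kernel: 1 - |x| on [-1, 1], zero elsewhere.
    x = np.asarray(x, dtype=float)
    return (1 + x) * ((-1 <= x) & (x < 0)) + (1 - x) * ((0 <= x) & (x <= 1))

def cubic(x):
    # Cubic convolution kernel with a = -0.5 (MATLAB's choice).
    absx = np.abs(np.asarray(x, dtype=float))
    absx2 = absx**2
    absx3 = absx**3
    return ((1.5*absx3 - 2.5*absx2 + 1) * (absx <= 1) +
            (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * ((1 < absx) & (absx <= 2)))

# The kernel is 1 at the origin and 0 at the other integers, so it
# interpolates (passes exactly through the samples).
print(float(cubic(0.0)), float(cubic(1.0)), float(cubic(2.0)))  # -> 1.0 0.0 0.0
```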
Here is what these functions look like:
>> subplot(121), ezplot(@triangle,[-2 2])  % triangle
>> subplot(122), ezplot(@cubic,[-3 3])     % Mexican hat

Note that the linear kernel (piecewise-linear functions on the intervals [-1,0] and [0,1], and zero elsewhere) works on 2 neighboring points, while the cubic kernel (piecewise-cubic functions on the intervals [-2,-1], [-1,1] and [1,2], and zero elsewhere) works on 4 neighboring points.
Here is an illustration for the 1-dimensional case, showing how to interpolate the value at x from the discrete points f(x_k) using a cubic kernel:

The kernel function h(x) is centered at x, the location of the point to be interpolated. The interpolated value f(x) is the weighted sum of the discrete neighboring points (2 on the left and 2 on the right), scaled by the value of the interpolation function at those discrete points.
If the distance between x and the nearest point to its left is d (0 <= d < 1), the interpolated value at location x will be:
f(x) = f(x1)*h(-d-1) + f(x2)*h(-d) + f(x3)*h(-d+1) + f(x4)*h(-d+2)
where the order of the points is shown below (note that x(k+1)-x(k) = 1 ):
x1       x2   x    x3       x4
 o--------o---+----o--------o
          \___/
        distance d
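This weighted sum can be sketched in Python (my own illustration, reusing the MATLAB cubic kernel from above; the sanity checks rely on the kernel being exactly 1 at 0 and 0 at the other integers):

```python
import numpy as np

def cubic(x):
    # Cubic convolution kernel with a = -0.5 (same as MATLAB's).
    absx = np.abs(x); absx2 = absx**2; absx3 = absx**3
    return ((1.5*absx3 - 2.5*absx2 + 1) * (absx <= 1) +
            (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * ((1 < absx) & (absx <= 2)))

def interp_cubic_1d(f1, f2, f3, f4, d):
    # Weighted sum of the 4 neighbors, exactly as in the formula above.
    # h is symmetric, so h(-d-1) = h(d+1), etc.
    h = cubic
    return f1*h(-d-1) + f2*h(-d) + f3*h(-d+1) + f4*h(-d+2)

# At d = 0 the point coincides with x2, so the sample is reproduced exactly;
# at d = 0.5 this (linear) data is also reproduced exactly.
print(float(interp_cubic_1d(10.0, 20.0, 30.0, 40.0, 0.0)))  # -> 20.0
print(float(interp_cubic_1d(10.0, 20.0, 30.0, 40.0, 0.5)))  # -> 25.0
```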
Now, since the points are discrete and sampled at uniform intervals, and the kernel width is usually small, interpolation can be formulated as a convolution operation:
    f(x) = Sum_k( f(x_k) * h(x - x_k) )
The concept extends to 2 dimensions simply by interpolating first in one dimension and then in the other dimension using the results of the previous step.
Here is an example of bilinear interpolation, which in 2D considers 4 neighboring points:

In bicubic interpolation in 2D, 16 neighboring points are used:

First we interpolate along the rows (red points) using the 16 grid samples (pink), then we interpolate along the other dimension (red line) using the interpolated points from the previous step. At each step a regular 1D interpolation is performed. The resulting equation is too long and complicated for me to expand manually!
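Still, the row-then-column procedure itself is short to sketch in Python (my own illustration, using the a = -0.5 kernel; `bicubic_point` and `weights` are hypothetical helper names, not library functions):

```python
import numpy as np

def cubic(x):
    # Cubic convolution kernel with a = -0.5.
    absx = np.abs(x); absx2 = absx**2; absx3 = absx**3
    return ((1.5*absx3 - 2.5*absx2 + 1) * (absx <= 1) +
            (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * ((1 < absx) & (absx <= 2)))

def weights(d):
    # Weights for the 4 neighbors at fractional offset d in [0, 1).
    return np.array([cubic(-d-1), cubic(-d), cubic(-d+1), cubic(-d+2)])

def bicubic_point(patch, dy, dx):
    # patch: 4x4 neighborhood around the point.
    rows = patch @ weights(dx)   # 1D interpolation along each of the 4 rows
    return rows @ weights(dy)    # then 1D interpolation down the column

patch = np.arange(16, dtype=float).reshape(4, 4)
# At (dy, dx) = (0, 0) we land exactly on the sample patch[1, 1]:
print(float(bicubic_point(patch, 0.0, 0.0)))  # -> 5.0
```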
Now if we go back to the cubic function in MATLAB, it actually matches the convolution kernel definition shown in the reference paper as equation (4). Here is the same definition taken from Wikipedia:
    h(x) = (a+2)*|x|^3 - (a+3)*|x|^2 + 1           for |x| <= 1
    h(x) = a*|x|^3 - 5*a*|x|^2 + 8*a*|x| - 4*a     for 1 < |x| < 2
    h(x) = 0                                       otherwise
You can see that in the above definition, MATLAB chose a=-0.5 .
Now the difference between the implementation in MATLAB and OpenCV is that OpenCV chose a=-0.75 .
static inline void interpolateCubic( float x, float* coeffs )
{
    const float A = -0.75f;

    coeffs[0] = ((A*(x + 1) - 5*A)*(x + 1) + 8*A)*(x + 1) - 4*A;
    coeffs[1] = ((A + 2)*x - (A + 3))*x*x + 1;
    coeffs[2] = ((A + 2)*(1 - x) - (A + 3))*(1 - x)*(1 - x) + 1;
    coeffs[3] = 1.f - coeffs[0] - coeffs[1] - coeffs[2];
}
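A direct Python port of that function (for illustration only) makes it easy to experiment with the coefficients:

```python
def interpolate_cubic(x, A=-0.75):
    # Port of OpenCV's interpolateCubic: weights for the 4 neighbors
    # at fractional offset x, with free parameter A.
    coeffs = [0.0] * 4
    coeffs[0] = ((A*(x + 1) - 5*A)*(x + 1) + 8*A)*(x + 1) - 4*A
    coeffs[1] = ((A + 2)*x - (A + 3))*x*x + 1
    coeffs[2] = ((A + 2)*(1 - x) - (A + 3))*(1 - x)*(1 - x) + 1
    coeffs[3] = 1.0 - coeffs[0] - coeffs[1] - coeffs[2]
    return coeffs

# At a grid point (x = 0) the weights pick out the sample exactly,
# and by construction the 4 weights always sum to 1.
print(interpolate_cubic(0.0))  # -> [0.0, 1.0, 0.0, 0.0]
```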
It may not be immediately obvious, but the code does compute the terms of the cubic convolution function (listed right after equation (25) in the paper):
    g(x) = c_{k-1}*(-s^3 + 2*s^2 - s)/2 + c_k*(3*s^3 - 5*s^2 + 2)/2
         + c_{k+1}*(-3*s^3 + 4*s^2 + s)/2 + c_{k+2}*(s^3 - s^2)/2
We can verify that using the Symbolic Math Toolbox:
A = -0.5;
syms x
c0 = ((A*(x + 1) - 5*A)*(x + 1) + 8*A)*(x + 1) - 4*A;
c1 = ((A + 2)*x - (A + 3))*x*x + 1;
c2 = ((A + 2)*(1 - x) - (A + 3))*(1 - x)*(1 - x) + 1;
c3 = 1 - c0 - c1 - c2;
These expressions can be rewritten as:
>> expand([c0;c1;c2;c3])
ans =
           - x^3/2 + x^2 - x/2
    (3*x^3)/2 - (5*x^2)/2 + 1
    - (3*x^3)/2 + 2*x^2 + x/2
               x^3/2 - x^2/2
which correspond to the terms from the above equation.
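We can also confirm numerically that, with A = -0.5, the OpenCV-style coefficients coincide with MATLAB's cubic kernel evaluated at the distances to the four neighbors (a quick check of my own):

```python
def cubic(t, a=-0.5):
    # MATLAB-style cubic convolution kernel, scalar version.
    t = abs(t)
    if t <= 1:
        return (a + 2)*t**3 - (a + 3)*t**2 + 1
    if t <= 2:
        return a*t**3 - 5*a*t**2 + 8*a*t - 4*a
    return 0.0

def coeffs(x, A=-0.5):
    # OpenCV-style coefficient computation.
    c0 = ((A*(x + 1) - 5*A)*(x + 1) + 8*A)*(x + 1) - 4*A
    c1 = ((A + 2)*x - (A + 3))*x*x + 1
    c2 = ((A + 2)*(1 - x) - (A + 3))*(1 - x)*(1 - x) + 1
    return [c0, c1, c2, 1.0 - c0 - c1 - c2]

x = 0.3
# Distances from the interpolation point to the 4 neighbors: x+1, x, 1-x, 2-x.
expected = [cubic(x + 1), cubic(x), cubic(1 - x), cubic(2 - x)]
print(all(abs(a - b) < 1e-12 for a, b in zip(coeffs(x), expected)))  # -> True
```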
So obviously the difference between MATLAB and OpenCV comes down to using a different value for the free parameter a. According to the authors of the paper, a value of a = -0.5 is to be preferred, since it implies better properties for the approximation error than any other choice of a.
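To see the effect of the parameter in practice, here is a small Python experiment of my own: sample sin(t), interpolate at the midpoints with both values of a, and compare the maximum errors (for a smooth test signal like this, a = -0.5 should come out ahead, in line with the paper's analysis):

```python
import math

def cubic(t, a):
    # Cubic convolution kernel with free parameter a.
    t = abs(t)
    if t <= 1:
        return (a + 2)*t**3 - (a + 3)*t**2 + 1
    if t <= 2:
        return a*t**3 - 5*a*t**2 + 8*a*t - 4*a
    return 0.0

def interp(samples, k, d, a):
    # Interpolate between samples[k] and samples[k+1] at fractional offset d,
    # using the 4 neighbors samples[k-1..k+2].
    return sum(samples[k - 1 + i] * cubic(d + 1 - i, a) for i in range(4))

h = 0.1
s = [math.sin(i * h) for i in range(100)]

def max_err(a):
    # Worst-case midpoint error over the interior of the signal.
    return max(abs(interp(s, k, 0.5, a) - math.sin((k + 0.5) * h))
               for k in range(1, 97))

print(max_err(-0.5) < max_err(-0.75))  # -> True
```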