How to convert YUV422 (subsampled) back to YUV?

I am writing a video codec that uses a JPEG-style compression technique on each frame. So far I have implemented the YUV conversion, the DCT and the quantized DCT (both encoding and decoding). I have also implemented the YUV422 subsampling (encoding), but I do not understand how to do the opposite (decoding).

To calculate my YUV for each pixel, I used the following equations:

Coding:

Y =  0.299  * R + 0.587  * G + 0.114  * B
U = -0.1687 * R - 0.4187 * G + 0.5    * B + 128
V =  0.5    * R - 0.4187 * G - 0.0813 * B + 128

Decoding:

R = Y + 1.402   * (V - 128)
G = Y - 0.34414 * (U - 128) - 0.71414 * (V - 128)
B = Y + 1.772   * (U - 128)

These equations work perfectly for me.
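
For reference, in C the two conversions look roughly like this (a simplified sketch; clamp255, rgb_to_yuv and yuv_to_rgb are just placeholder names):

/* Clamp a value to the 0..255 range (placeholder helper). */
static unsigned char clamp255(double x)
{
    if (x < 0.0)   return 0;
    if (x > 255.0) return 255;
    return (unsigned char)(x + 0.5);
}

/* Forward conversion: RGB -> YUV (U and V offset by 128). */
static void rgb_to_yuv(unsigned char r, unsigned char g, unsigned char b,
                       unsigned char *y, unsigned char *u, unsigned char *v)
{
    *y = clamp255( 0.299  * r + 0.587  * g + 0.114  * b);
    *u = clamp255(-0.1687 * r - 0.4187 * g + 0.5    * b + 128.0);
    *v = clamp255( 0.5    * r - 0.4187 * g - 0.0813 * b + 128.0);
}

/* Inverse conversion: YUV -> RGB. */
static void yuv_to_rgb(unsigned char y, unsigned char u, unsigned char v,
                       unsigned char *r, unsigned char *g, unsigned char *b)
{
    *r = clamp255(y + 1.402   * (v - 128));
    *g = clamp255(y - 0.34414 * (u - 128) - 0.71414 * (v - 128));
    *b = clamp255(y + 1.772   * (u - 128));
}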

Now, to perform the encoding with downsampling, I take my image encoded in YUV and, for each pair of adjacent pixels, I add the two values and divide the result by 2. That averaged value is then used for both pixels.

Example:

For simplicity, I'll use single pixel values from 0 to 255 (not full RGB components).

Below: 2 examples with the same result.

Pixel_1 = 15, Pixel_2 = 5  -> (Pixel_1 + Pixel_2) / 2 = 10
Pixel_3 = 10, Pixel_4 = 10 -> (Pixel_3 + Pixel_4) / 2 = 10

If I apply this operation to all the pixels of my YUV image, I get a new image, but this time it is encoded with YUV422 subsampling.
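
For illustration, the averaging step for one plane looks roughly like this in C (a simplified sketch; the name downsample_422 and the flat row-major buffer layout are placeholders):

/* 2:1 horizontal downsampling by averaging each pair of adjacent samples.
   src is width x height, dst is (width/2) x height; width assumed even. */
static void downsample_422(const unsigned char *src, unsigned char *dst,
                           int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x += 2) {
            int a = src[y * width + x];
            int b = src[y * width + x + 1];
            dst[y * (width / 2) + x / 2] = (unsigned char)((a + b) / 2);
        }
    }
}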

What I would like to know is how to get a full YUV image back from a YUV422 image. My example above shows that it is impossible to recover the original YUV image exactly, because many different combinations lead to the same result (here 10). However, I think there is a way to get values that are approximately the same as the original YUV pixels. Can anyone help me, please? I am really lost. Many thanks for your help.
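
The simplest reconstruction I can think of is to replicate each averaged value back into both original positions (nearest-neighbour upsampling), roughly like the sketch below (upsample_422 is a placeholder name), but I suspect a better interpolation exists:

/* 1:2 horizontal upsampling by replicating each sample into two positions.
   src is (width/2) x height, dst is width x height; width assumed even. */
static void upsample_422(const unsigned char *src, unsigned char *dst,
                         int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x += 2) {
            unsigned char s = src[y * (width / 2) + x / 2];
            dst[y * width + x]     = s;
            dst[y * width + x + 1] = s;
        }
    }
}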

1 answer

This is how the pixels are usually laid out for 4:2:0 and 4:2:2:

[4:2:0 sampling layout diagram]

[4:2:2 sampling layout diagram]

This is the correct way to interpolate the chroma between 4:2:2 and 4:2:0 (the luma is already at the correct resolution).

The code can be downloaded from http://www.mpeg.org/MPEG/video/mssg-free-mpeg-software.html. The snippet below is from readpic.c:

/* vertical filter and 2:1 subsampling */
static void conv422to420(src, dst)
unsigned char *src, *dst;
{
  int w, i, j, jm6, jm5, jm4, jm3, jm2, jm1;
  int jp1, jp2, jp3, jp4, jp5, jp6;

  /* width, height, prog_frame and the clipping table clp[] are globals
     defined elsewhere in the MPEG software package */
  w = width>>1;

  if (prog_frame)
  {
    /* intra frame */
    for (i=0; i<w; i++)
    {
      for (j=0; j<height; j+=2)
      {
        /* clamp the filter tap indices to the top and bottom of the plane */
        jm5 = (j<5) ? 0 : j-5;
        jm4 = (j<4) ? 0 : j-4;
        jm3 = (j<3) ? 0 : j-3;
        jm2 = (j<2) ? 0 : j-2;
        jm1 = (j<1) ? 0 : j-1;
        jp1 = (j<height-1) ? j+1 : height-1;
        jp2 = (j<height-2) ? j+2 : height-1;
        jp3 = (j<height-3) ? j+3 : height-1;
        jp4 = (j<height-4) ? j+4 : height-1;
        jp5 = (j<height-5) ? j+5 : height-1;
        jp6 = (j<height-6) ? j+6 : height-1;

        /* FIR filter with 0.5 sample interval phase shift */
        dst[w*(j>>1)] = clp[(int)(228*(src[w*j]  +src[w*jp1])
                                  +70*(src[w*jm1]+src[w*jp2])
                                  -37*(src[w*jm2]+src[w*jp3])
                                  -21*(src[w*jm3]+src[w*jp4])
                                  +11*(src[w*jm4]+src[w*jp5])
                                  + 5*(src[w*jm5]+src[w*jp6])+256)>>9];
      }
      src++;
      dst++;
    }
  }
  /* (the non-progressive / interlaced branch is omitted in this excerpt) */
}

Hope this helps.
