How can I distinguish between NV21 and YV12 encoding in the ImageReader of the Camera2 API?

I am developing a custom camera application with the Camera2 API, and I notice that the captured format differs between some devices when I use the ImageReader callback.

For example, on the Nexus 4 the output is broken, while on the Nexus 5X it looks normal.


I initialize the ImageReader like this:

mImageReader = ImageReader.newInstance(320, 240, ImageFormat.YUV_420_888, 2);

And this is my ImageReader callback:

    mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            try {
                mBackgroundHandler.post(new ImageController(reader.acquireNextImage()));
            } catch (Exception e) {
                // exception
            }
        }
    };
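
For completeness, ImageController closes the acquired Image when it is done; with maxImages set to 2, acquireNextImage() would otherwise start failing once two images are outstanding. A minimal sketch of the handler (the processing body is a placeholder):

    class ImageController implements Runnable {
        private final Image mImage;

        ImageController(Image image) {
            mImage = image;
        }

        @Override
        public void run() {
            try {
                // ... process mImage (e.g. convert it for display) ...
            } finally {
                mImage.close();  // return the buffer to the ImageReader's queue
            }
        }
    }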

On the Nexus 4, I get this error:

 D/qdgralloc: gralloc_lock_ycbcr: Invalid format passed: 0x32315659 

When I write the raw frames to a file on both devices, I get different images. From this I understand that the Nexus 5X image is encoded as NV21 and the Nexus 4 image as YV12.

I found the ImageFormat specification and tried to query the format from the ImageReader. There are constants for YV12 and NV21, but, as expected, I get YUV_420_888 when I query it:

  int test = mImageReader.getImageFormat();

So, is there a way to get the underlying camera format (NV21 or YV12) so I can distinguish between these encodings? Perhaps from the camera characteristics?

Thanks in advance.

Unai. P.S.: I use OpenGL to display the images in RGB, and OpenCV for the YUV_420_888 conversion.

1 answer

YUV_420_888 is a wrapper that can hold (among others) NV21 and YV12 images. You must use the planes and their strides to access the individual color channels:

    Image.Plane Y = image.getPlanes()[0];
    Image.Plane U = image.getPlanes()[1];
    Image.Plane V = image.getPlanes()[2];

If the underlying pixels are in NV21 format (as on the Nexus 4), the pixelStride will be 2, and:

    int getU(Image image, int col, int row) {
        return getPixel(image.getPlanes()[1], col / 2, row / 2);
    }

    int getPixel(Image.Plane plane, int col, int row) {
        // mask to read the byte as an unsigned value
        return plane.getBuffer().get(col * plane.getPixelStride() + row * plane.getRowStride()) & 0xFF;
    }

We take half the column and half the row because this is how the U and V (chroma) planes are subsampled in 4:2:0 images.
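
Putting this together, here is a minimal sketch (the helper name guessChromaLayout is my own) that classifies the underlying layout at runtime by inspecting the chroma pixel stride:

    // Sketch: classify the chroma layout of a YUV_420_888 Image.
    // pixelStride 2 means interleaved chroma (the NV21/NV12 family),
    // pixelStride 1 means fully planar chroma (the YV12/I420 family).
    static String guessChromaLayout(Image image) {
        int chromaPixelStride = image.getPlanes()[1].getPixelStride();
        if (chromaPixelStride == 2) {
            return "semi-planar (NV21/NV12-like)";
        } else if (chromaPixelStride == 1) {
            return "planar (YV12/I420-like)";
        }
        return "unknown";
    }

Telling NV21 from NV12 would additionally require checking how the U and V buffers overlap, since in this wrapper planes[1] is always U (Cb) and planes[2] is always V (Cr).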

This code is for illustration only and is very inefficient; you probably want to access the pixels in bulk, either with get(byte[], int, int), via a fragment shader, or via the JNI function GetDirectBufferAddress in native code. What you cannot use is the plane's array() method, because the plane buffers are guaranteed to be direct byte buffers.
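
For example, a bulk copy of the U plane might look like this (a sketch; it assumes the buffer's position is at its start):

    Image.Plane uPlane = image.getPlanes()[1];
    ByteBuffer buf = uPlane.getBuffer();
    byte[] uData = new byte[buf.remaining()];
    buf.get(uData, 0, uData.length);  // one bulk read instead of per-pixel get()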
