Android Camera2 API: displaying a processed preview image

The new Camera2 API is very different from the old one, and I'm confused about how to insert a custom processing step into the pipeline. I know there is a very good explanation in Processing camera preview data using the Android L and Camera2 API, but how to display the frames is still not clear to me. My question is: what is the way to display frames on the screen that come out of the ImageReader's callback after some processing, while maintaining the efficiency and speed of the Camera2 API pipeline?

Stream example:

captureRequestBuilder.addTarget(imageReader.getSurface()) → the ImageReader's callback does some processing → (display the processed image on the screen?)

Workaround: sending a Bitmap to an ImageView each time a new frame is processed.

1 answer

Edit after clarification; original answer is below.

It depends on where you are doing your processing.

If you are using RenderScript, you can connect the Surface from a SurfaceView or TextureView to an Allocation (with setSurface), then write your processed output to that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
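A minimal sketch of that setup, assuming rs, surfaceView, width, and height come from your own code, and using RGBA_8888 as one possible output format:

    // Create an output Allocation that can push buffers to a Surface.
    Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(width).setY(height);
    Allocation outputAlloc = Allocation.createTyped(rs, rgbaType.create(),
            Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
    // Point the Allocation at the display Surface.
    outputAlloc.setSurface(surfaceView.getHolder().getSurface());

    // After your script writes a processed frame into outputAlloc:
    outputAlloc.ioSend(); // pushes the latest buffer to the screen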

If you are doing EGL shader-based processing, you can connect the Surface to an EGLSurface with eglCreateWindowSurface, passing the Surface as the native_window argument. You can then render your final output to that EGLSurface, and when you call eglSwapBuffers, the buffer will be sent to the screen.
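Roughly, with the EGL14 Java bindings (eglDisplay, eglConfig, eglContext, and surface are assumed to come from your existing EGL setup):

    int[] surfaceAttribs = { EGL14.EGL_NONE };
    // Wrap the Surface (the native_window argument) in an EGLSurface.
    EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
            eglDisplay, eglConfig, surface, surfaceAttribs, 0);
    EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);
    // ... render the processed frame with GLES ...
    EGL14.eglSwapBuffers(eglDisplay, eglSurface); // sends the buffer to the screen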

If you are doing native processing, you can use the NDK ANativeWindow methods to write to a Surface passed from Java and converted to an ANativeWindow.

If you are doing Java-level processing, it will be really slow, and you probably don't want that. But if you do, you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.
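A sketch of the ImageWriter path (API 23+; previewSurface is assumed to be the Surface you want to display on):

    ImageWriter writer = ImageWriter.newInstance(previewSurface, /* maxImages */ 2);
    Image out = writer.dequeueInputImage();
    // ... fill out.getPlanes() with your processed pixel data ...
    writer.queueInputImage(out); // queues the frame to the Surface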

Or, as you say, draw to an ImageView every frame, but that will be slow.
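That fallback amounts to something like this, with processedBitmap being whatever your callback produced:

    // Post to the UI thread; setImageBitmap must not be called from the camera thread.
    imageView.post(() -> imageView.setImageBitmap(processedBitmap));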


Original answer:

If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.
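For example (reader is the ImageReader from your capture session):

    Image image = reader.acquireLatestImage();
    try {
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        byte[] jpegBytes = new byte[buffer.remaining()];
        buffer.get(jpegBytes);
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
        // ... display or save the bitmap ...
    } finally {
        image.close(); // release the Image back to the ImageReader
    }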

If you are capturing YUV_420_888 images, you need to write your own conversion from the 3-plane YCbCr 4:2:0 format to something you can display, such as an int[] of RGB values to create a Bitmap from; unfortunately, there is no convenient API for this yet.
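For illustration only, here is a plain-Java version of that conversion (the helper names yuv420ToBitmap and clamp are mine, the per-pixel loop is far too slow for real-time preview, and the BT.601 coefficients are just one common choice):

    static Bitmap yuv420ToBitmap(Image image) {
        int width = image.getWidth();
        int height = image.getHeight();
        Image.Plane yPlane = image.getPlanes()[0];
        Image.Plane uPlane = image.getPlanes()[1];
        Image.Plane vPlane = image.getPlanes()[2];
        ByteBuffer yBuf = yPlane.getBuffer();
        ByteBuffer uBuf = uPlane.getBuffer();
        ByteBuffer vBuf = vPlane.getBuffer();
        int[] argb = new int[width * height];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int y = yBuf.get(row * yPlane.getRowStride()
                        + col * yPlane.getPixelStride()) & 0xFF;
                // The chroma planes are subsampled 2x2 in 4:2:0.
                int u = (uBuf.get((row / 2) * uPlane.getRowStride()
                        + (col / 2) * uPlane.getPixelStride()) & 0xFF) - 128;
                int v = (vBuf.get((row / 2) * vPlane.getRowStride()
                        + (col / 2) * vPlane.getPixelStride()) & 0xFF) - 128;
                int r = clamp((int) (y + 1.402f * v));
                int g = clamp((int) (y - 0.344f * u - 0.714f * v));
                int b = clamp((int) (y + 1.772f * u));
                argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
    }

    static int clamp(int x) {
        return Math.max(0, Math.min(255, x));
    }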

If you are capturing RAW_SENSOR images (raw Bayer sensor data), you need to do a lot of image processing to get something viewable, or just save a DNG.
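Saving the DNG is straightforward with DngCreator (characteristics comes from the camera device, captureResult from the RAW capture's TotalCaptureResult, and dngFile/rawImage are your own):

    DngCreator dngCreator = new DngCreator(characteristics, captureResult);
    try (OutputStream out = new FileOutputStream(dngFile)) {
        dngCreator.writeImage(out, rawImage); // rawImage is the RAW_SENSOR Image
    } finally {
        dngCreator.close();
    }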

