Edit after clarification; original answer below.
It depends on where you're doing your processing.
If you're using RenderScript, you can connect the Surface from your SurfaceView or TextureView to an Allocation (with setSurface), then write your processed output into that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
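A minimal sketch of that wiring, assuming you already have a RenderScript context (rs), an input Allocation, a compiled kernel (yourScript and its processFrame function are hypothetical names), and the Surface from your view:

    import android.renderscript.Allocation;
    import android.renderscript.Element;
    import android.renderscript.RenderScript;
    import android.renderscript.Type;
    import android.view.Surface;

    // USAGE_IO_OUTPUT lets the Allocation push its buffers to a consumer Surface.
    Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(width).setY(height).create();
    Allocation outputAlloc = Allocation.createTyped(rs, rgbaType,
            Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
    outputAlloc.setSurface(surface); // Surface from SurfaceView/TextureView

    // Per frame: run your processing into outputAlloc, then push it to the screen.
    yourScript.forEach_processFrame(inputAlloc, outputAlloc); // hypothetical kernel
    outputAlloc.ioSend();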
If you're doing EGL shader-based processing, you can connect the Surface to an EGLSurface with eglCreateWindowSurface, passing the Surface as the native_window argument. You can then render your final output to that EGLSurface, and when you call eglSwapBuffers the buffer will be sent to the screen.
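Roughly, using the EGL14 Java bindings (display, config, and context come from your usual eglInitialize/eglChooseConfig/eglCreateContext setup):

    import android.opengl.EGL14;
    import android.opengl.EGLSurface;
    import android.view.Surface;

    int[] surfaceAttribs = { EGL14.EGL_NONE };
    EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
            display, config, surface /* the native_window argument */,
            surfaceAttribs, 0);
    EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);

    // ... draw the processed frame with your shaders ...

    // Queues the rendered buffer to the Surface's consumer (the screen).
    EGL14.eglSwapBuffers(display, eglSurface);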
If you're doing your own native processing, you can use the NDK ANativeWindow methods: pass the Surface down from Java, convert it to an ANativeWindow, and write to it.
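On the Java side that's just a JNI method taking the Surface (the class and library names below are hypothetical); the native side would use the real NDK calls named in the comments:

    import android.view.Surface;

    public class NativeBlitter { // hypothetical class/library names
        static { System.loadLibrary("nativeblitter"); }

        // Native implementation: ANativeWindow_fromSurface(env, surface) to get
        // the window, ANativeWindow_lock() for a writable pixel buffer, then
        // ANativeWindow_unlockAndPost() to send the frame to the screen.
        public static native void drawFrame(Surface surface, byte[] rgbaPixels);
    }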
If you're doing your processing at the Java level, that's really slow and you probably don't want to; but if you do, you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.
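A sketch of the ImageWriter route (surface here is the preview Surface; filling the buffer is left as a comment):

    import android.media.Image;
    import android.media.ImageWriter;
    import android.view.Surface;

    ImageWriter writer = ImageWriter.newInstance(surface, 2 /* maxImages */);

    Image frame = writer.dequeueInputImage();
    // Fill frame.getPlanes()[n].getBuffer() with your processed pixel data here.
    writer.queueInputImage(frame); // hands the frame to the Surface for display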
Or, as you say, draw each frame into an ImageView, but that will be slow.
Original answer:
If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.
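For example (image here is the Image from your ImageReader, and imageView is whatever view you display it in):

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import java.nio.ByteBuffer;

    // For ImageFormat.JPEG, plane 0 contains the entire compressed JPEG stream.
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] jpegBytes = new byte[buffer.remaining()];
    buffer.get(jpegBytes);
    image.close(); // return the buffer to the ImageReader

    Bitmap bmp = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
    imageView.setImageBitmap(bmp);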
If you are capturing YUV_420_888 images, you need to write your own conversion code from the 3-plane YCbCr 4:2:0 format to something you can display, for example an int[] of RGB values to create a Bitmap from; unfortunately, there is no convenient API for this yet.
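A deliberately simple, unoptimized sketch of that conversion: it honors the per-plane row and pixel strides, but does the math per pixel in floating point (with standard full-range YCbCr-to-RGB coefficients), so treat it as illustration rather than production code:

    import android.media.Image;
    import java.nio.ByteBuffer;

    int[] yuv420ToArgb(Image image) {
        int w = image.getWidth(), h = image.getHeight();
        Image.Plane yPlane = image.getPlanes()[0];
        Image.Plane uPlane = image.getPlanes()[1];
        Image.Plane vPlane = image.getPlanes()[2];
        ByteBuffer yBuf = yPlane.getBuffer();
        ByteBuffer uBuf = uPlane.getBuffer();
        ByteBuffer vBuf = vPlane.getBuffer();

        int[] argb = new int[w * h];
        for (int row = 0; row < h; row++) {
            for (int col = 0; col < w; col++) {
                int y = yBuf.get(row * yPlane.getRowStride()
                        + col * yPlane.getPixelStride()) & 0xFF;
                // Chroma is subsampled 2x2, so index with row/2 and col/2.
                int u = (uBuf.get((row / 2) * uPlane.getRowStride()
                        + (col / 2) * uPlane.getPixelStride()) & 0xFF) - 128;
                int v = (vBuf.get((row / 2) * vPlane.getRowStride()
                        + (col / 2) * vPlane.getPixelStride()) & 0xFF) - 128;

                int r = clamp((int) (y + 1.402f * v));
                int g = clamp((int) (y - 0.344f * u - 0.714f * v));
                int b = clamp((int) (y + 1.772f * u));
                argb[row * w + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }

    int clamp(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }

You can then hand the int[] to Bitmap.createBitmap(argb, w, h, Bitmap.Config.ARGB_8888).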
If you are capturing RAW_SENSOR images (raw Bayer-pattern sensor data), you need to do a lot of image processing before they are displayable, or just save them as DNGs.
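Saving the DNG is straightforward with DngCreator, which pairs the raw Image with the CaptureResult metadata from the same capture:

    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.CaptureResult;
    import android.hardware.camera2.DngCreator;
    import android.media.Image;
    import java.io.FileOutputStream;
    import java.io.IOException;

    void saveDng(CameraCharacteristics characteristics, CaptureResult result,
                 Image rawImage, String path) throws IOException {
        DngCreator dng = new DngCreator(characteristics, result);
        try (FileOutputStream out = new FileOutputStream(path)) {
            dng.writeImage(out, rawImage); // writes a complete DNG file
        } finally {
            dng.close();
            rawImage.close();
        }
    }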