Maximum camera and display capture performance

I thought I knew this topic well, but apparently I'm doing something wrong, or my understanding is off somewhere.

I simply want the best possible performance (measured, say, in FPS) when capturing high-quality frames from my Android smartphone's camera and displaying them to the user unchanged.

Since I have a fairly capable smartphone (a Nexus 4), I assumed this would be a trivial task, but my attempts so far haven't paid off. For instance, I achieved only 10 FPS for an 800x480 stream using the latest OpenCV infrastructure.

So, is it possible to achieve > 25 FPS simply capturing and displaying high-quality video from my phone's camera? If so, what is the best strategy for doing it? Device-specific considerations are also welcome.


Update: I was able to raise capture performance to almost ~25 FPS @ 1280x720 simply by setting a camera hint and using a SurfaceTexture with a TextureView as the camera preview target.
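A minimal sketch of the setup described in the update, assuming the "hint" is `Camera.Parameters.setRecordingHint` (the question doesn't name it explicitly) and using the pre-`camera2` API that was current on the Nexus 4; error handling and supported-size checks are trimmed:

```java
import java.io.IOException;

import android.app.Activity;
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.TextureView;

// Sketch: route the camera preview straight into a TextureView's SurfaceTexture,
// so frames stay on the GPU side instead of passing through Java byte arrays.
public class PreviewActivity extends Activity implements TextureView.SurfaceTextureListener {
    private Camera mCamera;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextureView textureView = new TextureView(this);
        textureView.setSurfaceTextureListener(this);
        setContentView(textureView);
    }

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        mCamera = Camera.open();
        Camera.Parameters params = mCamera.getParameters();
        params.setPreviewSize(1280, 720);   // must be one of getSupportedPreviewSizes()
        params.setRecordingHint(true);      // assumption: the "hint" that raised throughput
        mCamera.setParameters(params);
        try {
            mCamera.setPreviewTexture(surface);  // no copy into managed memory
        } catch (IOException ioe) {
            throw new RuntimeException(ioe);
        }
        mCamera.startPreview();
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        mCamera.stopPreview();
        mCamera.release();
        return true;
    }

    @Override public void onSurfaceTextureSizeChanged(SurfaceTexture s, int w, int h) {}
    @Override public void onSurfaceTextureUpdated(SurfaceTexture s) {}
}
```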

However, I am wondering whether performance can be pushed even further. I tried different preview formats, but without luck. Perhaps there is an implicit upper bound on capture performance that I'm not aware of.

In any case, I'll keep investigating and report back. Any information is still welcome!

1 answer

The limiting factor is how fast you can move a large pile of data. There is a similar discussion here. You have three tasks: (1) acquire the image, (2) save the image ("capture"), and (3) display the image. (If I've misunderstood your question and you don't need #2, then the camera's Surface preview mode will do what you want at high speed.)
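To put a number on "a large pile of data", here is some back-of-the-envelope arithmetic (mine, not from the answer) for the raw bandwidth an uncompressed preview stream needs in NV21, the default Android preview format:

```java
// Illustrative arithmetic: bytes moved by an uncompressed NV21 preview stream,
// per copy made of each frame. NV21 stores a full-resolution Y plane plus
// interleaved VU at quarter resolution, i.e. 12 bits per pixel overall.
public class PreviewBandwidth {
    public static long nv21FrameBytes(int width, int height) {
        return (long) width * height * 3 / 2;
    }

    public static void main(String[] args) {
        long frame = nv21FrameBytes(1280, 720);  // bytes per frame
        long perSecond = frame * 25;             // bytes per second at 25 FPS
        System.out.println("bytes/frame     = " + frame);
        System.out.println("bytes/s @25fps  = " + perSecond);
        System.out.printf("MiB/s @25fps    = %.1f%n", perSecond / (1024.0 * 1024.0));
    }
}
```

At 1280x720 that is roughly 33 MiB/s, and every extra copy of the frame (native to managed, managed to disk, and so on) pays that cost again.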

One approach that makes efficient use of the bandwidth available on Android 4.3 is to feed the camera preview Surface into an AVC encoder, save the encoded MPEG stream, and then decode the frames for display. Buffers from the camera can be fed to the MediaCodec encoder without copying them or converting the data to another format. (See the CameraToMpegTest example.) This approach may conflict with one of your stated goals: the compression applied to each frame may lower the quality below an acceptable level.
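A sketch of the Surface-input encoder configuration this path relies on (cf. CameraToMpegTest); the bitrate and frame-rate values are illustrative, not recommendations, and muxing/draining is omitted:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

// Sketch: an AVC encoder that accepts frames through a Surface (API 18+),
// so camera frames never pass through managed memory.
public class SurfaceEncoderSketch {
    public static MediaCodec createEncoder() throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 6000000);       // illustrative
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);          // illustrative
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface();
        // Render camera frames (via SurfaceTexture + OpenGL ES) into inputSurface,
        // then drain the encoder's output buffers and mux them into an .mp4 file.
        encoder.start();
        return encoder;
    }
}
```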

If you need to save every whole frame, you have to copy the data, possibly several times, and write it to disk; the larger the frame, the more data you move and the slower everything runs. A typical path: the camera captures into its own native buffer; the native buffer is copied into a managed buffer for the Dalvik VM; the buffer is written to disk; the YUV data is converted to RGB; and the RGB is displayed by uploading it into a texture and rendering it.
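The YUV-to-RGB step in that chain touches every pixel. A minimal sketch of the integer conversion commonly applied to the camera's NV21 output (the exact coefficients vary between implementations; these BT.601-style ones are my illustrative choice, and real code would convert whole planes on the GPU or in native code rather than per-pixel Java):

```java
// Sketch of the per-pixel YUV -> RGB conversion step mentioned above,
// using common integer BT.601 coefficients.
public class YuvToRgb {
    public static int yuvToArgb(int y, int u, int v) {
        int c = y - 16, d = u - 128, e = v - 128;
        int r = clamp((298 * c + 409 * e + 128) >> 8);
        int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
        int b = clamp((298 * c + 516 * d + 128) >> 8);
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    private static int clamp(int x) {
        return x < 0 ? 0 : (x > 255 ? 255 : x);
    }
}
```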


Source: https://habr.com/ru/post/1500607/

