In fact, you have two questions in one: 1) how do you loop over the pixels of an android.media.Image, and 2) how do you convert an android.media.Image to a Bitmap?
The 1st is easy. Note that the Image object you get from the camera is just a YUV frame, where the Y and U + V components live in separate planes. In many image-processing cases you only need the Y plane, i.e. the grayscale part of the image. To get it, I suggest the following code:
```java
Image.Plane[] planes = image.getPlanes();
int yRowStride = planes[0].getRowStride();
ByteBuffer yBuffer = planes[0].getBuffer();
// Allocate for the whole plane, not just one row; rows may be padded out to yRowStride bytes.
byte[] yImage = new byte[yBuffer.remaining()];
yBuffer.get(yImage);
```
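If you need individual pixel values, here is a minimal sketch of looping over the luma plane (assuming yImage and yRowStride from the snippet above; remember that a row can be padded beyond the image width):

```java
int width = image.getWidth();
int height = image.getHeight();
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        // Rows are yRowStride bytes apart; & 0xFF turns the signed byte
        // into an unsigned luma value in 0..255.
        int luma = yImage[row * yRowStride + col] & 0xFF;
        // ... use luma ...
    }
}
```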
The yImage byte array now holds the gray (luma) pixels of the frame. You can get the U + V planes in the same way. Note that the order can be U first and then V, or V first and then U, and the two planes may be interleaved (this is the typical case with the Camera2 API), so you get UVUV....
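Here is a minimal sketch of reading the chroma planes, assuming the usual YUV_420_888 layout; the pixelStride value tells you whether the samples are tightly packed or interleaved:

```java
Image.Plane uPlane = planes[1];
Image.Plane vPlane = planes[2];
int uvRowStride = uPlane.getRowStride();
int uvPixelStride = uPlane.getPixelStride(); // 2 usually means interleaved UVUV... / VUVU...

ByteBuffer uBuffer = uPlane.getBuffer();
byte[] uBytes = new byte[uBuffer.remaining()];
uBuffer.get(uBytes);

ByteBuffer vBuffer = vPlane.getBuffer();
byte[] vBytes = new byte[vBuffer.remaining()];
vBuffer.get(vBytes);

// Chroma is subsampled 2x2 in YUV_420_888, so luma pixel (x, y) maps to:
int x = 0, y = 0; // example coordinates
int uvIndex = (y / 2) * uvRowStride + (x / 2) * uvPixelStride;
int u = uBytes[uvIndex] & 0xFF;
int v = vBytes[uvIndex] & 0xFF;
```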
For debugging purposes, I often write a frame to a file and open it with Vooya (a raw-video viewer for Linux) to check the format.
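A hypothetical helper for that (the dumpFrame name is mine; in Vooya you then enter the width, height, and pixel format manually):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Writes raw plane bytes to a file so the frame can be inspected
// in a raw-video viewer such as Vooya.
static void dumpFrame(byte[] frameBytes, File outFile) {
    try (FileOutputStream fos = new FileOutputStream(outFile)) {
        fos.write(frameBytes);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```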
The 2nd question is a bit more complicated. To get a Bitmap object, I found sample code in the TensorFlow project here. The most interesting function for you is convertImageToBitmap, which returns the RGB values.
To convert them into a real Bitmap, do the following:
```java
Bitmap rgbFrameBitmap;
int[] cachedRgbBytes = null;
byte[][] cachedYuvBytes = new byte[3][]; // scratch buffers, one per YUV plane (assumption)
cachedRgbBytes = ImageUtils.convertImageToBitmap(image, cachedRgbBytes, cachedYuvBytes);
rgbFrameBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
rgbFrameBitmap.setPixels(cachedRgbBytes, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
```
Note: there are more options for converting YUV frames to RGB, so if all you need are pixel values, a Bitmap may not be the best choice, since it can consume more memory than you need just to read the RGB values.
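For example, here is a minimal sketch of converting a single YUV pixel straight to a packed ARGB int, without allocating a Bitmap at all. It uses the common BT.601 video-range integer approximation; if your camera delivers full-range values, the constants differ:

```java
// Converts one YUV pixel (unsigned 0..255 values, e.g. extracted from the
// planes as shown above) to a packed ARGB int.
static int yuvToArgb(int y, int u, int v) {
    int c = y - 16;
    int d = u - 128;
    int e = v - 128;
    int r = clamp((298 * c + 409 * e + 128) >> 8);
    int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
    int b = clamp((298 * c + 516 * d + 128) >> 8);
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

static int clamp(int value) {
    return Math.max(0, Math.min(255, value));
}
```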