How to convert android.media.Image to a Bitmap object?

In Android, I get an Image object from this tutorial: https://inducesmile.com/android/android-camera2-api-example-tutorial/. Now I want to iterate over the pixel values. Does anyone know how I can do this? Do I have to convert it to something else first, and if so, how?

thanks

+17
4 answers

If you want to loop over all the pixels, you first need to convert it to a Bitmap object. Since I can see in the tutorial's source code that the reader returns an Image , you can convert its bytes to a bitmap directly.

    Image image = reader.acquireLatestImage();
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);
    // decodeByteArray works here because the tutorial's ImageReader requests JPEG frames
    Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);

Then, once you have the Bitmap object, you can iterate over all its pixels.
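For example, here is a minimal plain-Java sketch of reading channel values out of the packed ARGB ints that Bitmap.getPixel(x, y) returns for an ARGB_8888 bitmap (on Android you could equivalently use android.graphics.Color.red() and friends; the helpers below are just the underlying bit arithmetic):

```java
// Each pixel from Bitmap.getPixel(x, y) on an ARGB_8888 bitmap is a
// packed int of the form 0xAARRGGBB. These helpers extract each
// channel with plain shifts and masks.
public class PixelUnpack {
    static int alpha(int argb) { return (argb >>> 24) & 0xFF; }
    static int red(int argb)   { return (argb >>> 16) & 0xFF; }
    static int green(int argb) { return (argb >>> 8)  & 0xFF; }
    static int blue(int argb)  { return argb & 0xFF; }

    public static void main(String[] args) {
        int p = 0xFF336699; // opaque pixel: R=0x33, G=0x66, B=0x99
        // prints "255 51 102 153"
        System.out.println(alpha(p) + " " + red(p) + " " + green(p) + " " + blue(p));
    }
}
```

On a real bitmap you would run these helpers inside a double loop over `bitmap.getWidth()` and `bitmap.getHeight()`.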

+15

https://docs.oracle.com/javase/1.5.0/docs/api/java/nio/ByteBuffer.html#get%28byte[]%29

According to the Java documentation, the buffer.get method transfers bytes from this buffer into the given destination array. An invocation of this method of the form src.get(a) behaves in exactly the same way as the invocation

  src.get(a, 0, a.length) 
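A small runnable sketch of that equivalence (plain java.nio, no Android classes needed):

```java
import java.nio.ByteBuffer;

// Demonstrates the documented equivalence: for a freshly allocated
// array, src.get(a) behaves exactly like src.get(a, 0, a.length).
public class BufferGetDemo {
    static byte[] readAll(ByteBuffer src) {
        byte[] a = new byte[src.remaining()];
        src.get(a); // bulk transfer into a, starting at index 0
        return a;
    }

    public static void main(String[] args) {
        byte[] copy = readAll(ByteBuffer.wrap(new byte[] {1, 2, 3, 4}));
        System.out.println(java.util.Arrays.toString(copy)); // prints [1, 2, 3, 4]
    }
}
```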
0

Actually, you have two questions in one: 1) How do you loop over the pixels of an android.media.Image? 2) How do you convert an android.media.Image to a Bitmap?

The first is easy. Note that the Image object you get from the camera is just a YUV frame, where the Y and U+V components live in separate planes. In many image-processing cases you only need the Y plane, that is, the grayscale part of the image. To get it, I suggest the following code:

    Image.Plane[] planes = image.getPlanes();
    ByteBuffer yBuffer = planes[0].getBuffer();
    // Size the array from the buffer, not from getRowStride():
    // rowStride is the length of a single row, not of the whole plane.
    byte[] yImage = new byte[yBuffer.remaining()];
    yBuffer.get(yImage);

The yImage byte array now holds the gray (luminance) pixels of the frame; note that each row may be padded out to rowStride bytes. You can get the U and V data in the same way from planes[1] and planes[2]. Be aware that they may come U first and then V, or V first and then U, and the two may be interleaved (a typical case with the Camera2 API), so you get UVUV....
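Both the row padding and the interleaving are handled by the same index arithmetic: a pixel at (x, y) in a plane lives at byte offset y * rowStride + x * pixelStride, where rowStride and pixelStride come from the real Plane methods getRowStride() and getPixelStride(). A plain-Java sketch (the toy byte array below just stands in for a plane's buffer contents):

```java
// Index arithmetic for reading one pixel out of an Image.Plane buffer.
// pixelStride is 1 for a tightly packed Y plane and often 2 for
// interleaved U/V planes on Camera2 devices.
public class PlaneIndex {
    static int planeOffset(int x, int y, int rowStride, int pixelStride) {
        return y * rowStride + x * pixelStride;
    }

    static int pixelAt(byte[] plane, int x, int y, int rowStride, int pixelStride) {
        // & 0xFF converts the signed byte to an unsigned 0..255 value
        return plane[planeOffset(x, y, rowStride, pixelStride)] & 0xFF;
    }

    public static void main(String[] args) {
        // Toy 2x2 "Y plane" whose rows are padded to rowStride = 4 bytes
        byte[] y = { 10, 20, 0, 0,
                     30, 40, 0, 0 };
        System.out.println(pixelAt(y, 1, 1, 4, 1)); // prints 40
    }
}
```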

For debugging purposes, I often write a frame to a file and try to open it using the Vooya (Linux) application to check the format.

The second question is a bit more involved. To get a Bitmap object, I found sample code in the TensorFlow project here . The most interesting function for you is convertImageToBitmap, which returns the RGB values.

To convert them to a real bitmap, do the following:

    // cachedRgbBytes and cachedYuvBytes are reusable scratch buffers
    // (fields in the TensorFlow sample); they may start out null.
    int[] cachedRgbBytes = null;
    byte[] cachedYuvBytes = null;
    cachedRgbBytes = ImageUtils.convertImageToBitmap(image, cachedRgbBytes, cachedYuvBytes);
    Bitmap rgbFrameBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    rgbFrameBitmap.setPixels(cachedRgbBytes, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());

Note: there are more ways to convert YUV frames to RGB, so if all you need are pixel values, a Bitmap may not be the best choice, since it can consume more memory than you need just to read the RGB values.
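If you do want RGB values without allocating a Bitmap, you can convert per pixel. Below is a sketch of one common full-range BT.601-style approximation; the exact coefficients depend on the color standard and range your device uses, and this is not necessarily the formula the TensorFlow ImageUtils code applies:

```java
// Converts one YUV pixel (all components 0..255, with U and V centered
// at 128) to a packed opaque ARGB int, using full-range BT.601-style
// coefficients. One common approximation, not the only valid one.
public class YuvToRgb {
    static int clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    static int yuvToArgb(int y, int u, int v) {
        int d = u - 128; // chroma offsets
        int e = v - 128;
        int r = clamp(y + (int) (1.402 * e));
        int g = clamp(y - (int) (0.344 * d) - (int) (0.714 * e));
        int b = clamp(y + (int) (1.772 * d));
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // Neutral chroma (u = v = 128) yields a pure gray: 0xff808080
        System.out.println(Integer.toHexString(yuvToArgb(128, 128, 128)));
    }
}
```

Running this over the Y, U, and V arrays extracted above gives you RGB values directly, without the intermediate Bitmap allocation.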

0

1 - Save the path to the image file in a String variable. To decode the contents of an image file, you need its path stored in your code as a String. Use the following syntax as a guide:

 String picPath = "/mnt/sdcard/Pictures/mypic.jpg"; 

2 - Create a Bitmap object using BitmapFactory:

 Bitmap picBitmap = BitmapFactory.decodeFile(picPath);
0

Source: https://habr.com/ru/post/1263128/
