As a rule, you don't save a YUV image straight to a file, and there are no built-in functions for doing so. There is also no standard image file format for this kind of YUV data. YUV is normally an intermediate form that is convenient for the camera pipeline and for subsequent conversion to other formats.
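That said, if what you actually want is a viewable file, the usual route is to convert the YUV_420_888 Image to NV21 and compress it to JPEG with android.graphics.YuvImage. The sketch below only illustrates that route and is not part of my saving code; the helper names are my own, and it assumes even frame dimensions and a live Image obtained from an ImageReader:

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.media.Image;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class YuvToJpeg {
    // Repack a YUV_420_888 Image as NV21 (Y plane, then interleaved V/U) so that
    // YuvImage can compress it. Per-pixel and slow, but it honors the reported strides.
    static byte[] toNv21(Image image) {
        int width = image.getWidth();
        int height = image.getHeight();
        byte[] nv21 = new byte[width * height * 3 / 2];

        // Y plane: pixel stride is always 1, but rows may be padded.
        ByteBuffer yBuf = image.getPlanes()[0].getBuffer();
        int yRowStride = image.getPlanes()[0].getRowStride();
        int pos = 0;
        for (int row = 0; row < height; row++) {
            yBuf.position(row * yRowStride);
            yBuf.get(nv21, pos, width);
            pos += width;
        }

        // Chroma planes: honor pixel and row strides, writing V then U per sample.
        ByteBuffer uBuf = image.getPlanes()[1].getBuffer();
        ByteBuffer vBuf = image.getPlanes()[2].getBuffer();
        int uvRowStride = image.getPlanes()[1].getRowStride();
        int uvPixelStride = image.getPlanes()[1].getPixelStride();
        for (int row = 0; row < height / 2; row++) {
            for (int col = 0; col < width / 2; col++) {
                int index = row * uvRowStride + col * uvPixelStride;
                nv21[pos++] = vBuf.get(index);
                nv21[pos++] = uBuf.get(index);
            }
        }
        return nv21;
    }

    static void writeJpeg(Image image, OutputStream out) {
        YuvImage yuv = new YuvImage(toNv21(image), ImageFormat.NV21,
                image.getWidth(), image.getHeight(), null);
        yuv.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), 90, out);
    }
}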
If you really insist on it, you can write the buffers of the three planes to a file as raw bytes, then open that file elsewhere and reconstruct the image. Be sure to also save the other information you need for that, such as the stride data. That is what I do. Below are the relevant lines from the format switch statement I use, along with comments explaining the arguments:
File file = new File(SAVE_DIR, mFilename);
FileOutputStream output = null;
ByteBuffer buffer;
byte[] bytes;
boolean success = false;

switch (mImage.getFormat()) {

    (... other image data format cases ...)

    // YUV_420_888 images are saved in a format of our own devising. First write out the
    // information necessary to reconstruct the image, all as ints: width, height, U-,V-plane
    // pixel strides, and U-,V-plane row strides. (Y-plane will have pixel-stride 1 always.)
    // Then directly place the three planes of byte data, uncompressed.
    //
    // Note the YUV_420_888 format does not guarantee the last pixel makes it in these planes,
    // so some cases are necessary at the decoding end, based on the number of bytes present.
    // An alternative would be to also encode, prior to each plane of bytes, how many bytes are
    // in the following plane. Perhaps in the future.
    case ImageFormat.YUV_420_888:
        // "prebuffer" simply contains the meta information about the following planes.
        ByteBuffer prebuffer = ByteBuffer.allocate(16);
        prebuffer.putInt(mImage.getWidth())
                 .putInt(mImage.getHeight())
                 .putInt(mImage.getPlanes()[1].getPixelStride())
                 .putInt(mImage.getPlanes()[1].getRowStride());

        try {
            output = new FileOutputStream(file);
            output.write(prebuffer.array()); // write meta information to file
            // Now write the actual planes.
            for (int i = 0; i < 3; i++) {
                buffer = mImage.getPlanes()[i].getBuffer();
                bytes = new byte[buffer.remaining()]; // makes byte array large enough to hold image
                buffer.get(bytes);                    // copies image from buffer to byte array
                output.write(bytes);                  // write the byte array to file
            }
            success = true;
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            Log.v(appFragment.APP_TAG, "Closing image to free buffer.");
            mImage.close(); // close this to free up buffer for other images
            if (null != output) {
                try {
                    output.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        break;
}
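For reference, here is a rough sketch of the decoding side in Java (my own, not part of the saving code above). It makes two assumptions the file format does not actually guarantee: that the Y plane was written with a row stride equal to the image width, and that the U and V planes each take half of whatever bytes remain. The "missing last pixel" caveat in the comments above is exactly why the plane lengths cannot be computed precisely from the strides alone:

import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class YuvRawReader {
    public static void read(File file) throws IOException {
        DataInputStream in = new DataInputStream(new FileInputStream(file));
        try {
            // Header: four big-endian ints, matching the prebuffer written above.
            int width = in.readInt();
            int height = in.readInt();
            int uvPixelStride = in.readInt();
            int uvRowStride = in.readInt();

            // Assumption: the Y plane was written with row stride == width, so it is
            // exactly width*height bytes. The header above does not record this.
            byte[] yPlane = new byte[width * height];
            in.readFully(yPlane);

            // The rest of the file is the U plane followed by the V plane. The two
            // chroma planes share a layout, so split the remainder evenly.
            int remaining = (int) (file.length() - 16 - yPlane.length);
            byte[] uPlane = new byte[remaining / 2];
            byte[] vPlane = new byte[remaining - uPlane.length];
            in.readFully(uPlane);
            in.readFully(vPlane);

            // uvPixelStride and uvRowStride describe how the chroma samples sit inside
            // uPlane/vPlane (see the deinterleaving sketch further below).
        } finally {
            in.close();
        }
    }
}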
Since the exact way the data is interleaved is up to the device, it can be tricky to extract the Y, U, and V channels from this stored information. For how to read such a file back in and extract the data, see the MATLAB implementation.
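As a Java counterpart to that, here is a small deinterleaving sketch showing how the saved pixel and row strides are used to pull a packed width/2 x height/2 chroma channel out of one of the planes read back above. The function name is my own, and the bounds check covers the possibly-absent last sample mentioned earlier:

// Extract one tightly packed chroma channel from a stored U or V plane, using the
// pixel and row strides saved in the file header. Assumes even width and height.
static byte[] extractChromaChannel(byte[] plane, int width, int height,
                                   int pixelStride, int rowStride) {
    int chromaWidth = width / 2;
    int chromaHeight = height / 2;
    byte[] out = new byte[chromaWidth * chromaHeight];
    for (int row = 0; row < chromaHeight; row++) {
        for (int col = 0; col < chromaWidth; col++) {
            int src = row * rowStride + col * pixelStride;
            if (src < plane.length) {          // the very last sample may be absent
                out[row * chromaWidth + col] = plane[src];
            }
        }
    }
    return out;
}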