Using RenderScript for processing and MediaCodec for encoding

I am trying to develop a camera application that does some video processing before recording the video. I decided to use RenderScript for the processing, since it provides many of the operations I want to use, and MediaCodec for the encoding. I found several examples (including Grafika) that show how to do the processing with GLES, but I did not find a sample that shows how to do it with RenderScript. While trying to replace GLES with RenderScript, I ran into the following questions:

  • I am creating a RenderScript output Allocation backed by the encoder's input Surface. In the Grafika EGL example, swapBuffers() is used to send a buffer to the encoder. Does Allocation.ioSend() do the same thing?
  • In EGL, setPresentationTime() is used to set the timestamp. How do I set the timestamp on a RenderScript Allocation?
  • Should I use MediaCodec.queueInputBuffer() instead, passing the input buffer and timestamp myself? In that case, should I still call Allocation.ioSend() before calling queueInputBuffer()?
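For reference, the EGL mechanism from Grafika that these questions compare against boils down to two calls (a sketch; the display and surface handles are assumed to be initialized elsewhere):

```java
// Grafika's EGL path for feeding the encoder: tag the pending frame with a
// timestamp, then swapBuffers() submits it to the encoder's input surface.
// eglDisplay / eglSurface / timestampNanos are assumed to exist already.
EGLExt.eglPresentationTimeANDROID(eglDisplay, eglSurface, timestampNanos);
EGL14.eglSwapBuffers(eglDisplay, eglSurface);
```

The RenderScript Allocation API has no equivalent of the first call, which is the crux of the problem.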
1 answer

I ran into the same problem, and the solution I am using is to set the timestamp via EGL, as in Grafika's RecordFBOActivity#doFrame. An intermediate Allocation bridges the gap between RenderScript and OpenGL/EGL.

Think of the data flow as a processing pipeline with stages.

Original pipeline

[Camera] --> [ImageAllocation] --> [RenderScript] --> [MediaCodecSurfaceAllocationForEncoder] --> [MediaCodec] 

In the original pipeline, all buffers are RenderScript Allocations.

MediaCodecSurfaceAllocationForEncoder is based on the input Surface returned from the encoder, i.e. MediaCodec#createInputSurface().
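A minimal sketch of that original wiring (variable names are illustrative; error handling and encoder start/drain omitted):

```java
// Configure an AVC encoder in Surface-input mode and bind an RS Allocation
// to its input surface. Width, height, and bitrate values are assumptions.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface encoderInput = encoder.createInputSurface(); // only valid after configure()

Type rgba = new Type.Builder(renderScript, Element.RGBA_8888(renderScript))
        .setX(width).setY(height).create();
Allocation encoderAlloc = Allocation.createTyped(renderScript, rgba,
        Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
encoderAlloc.setSurface(encoderInput);
// A kernel writes into encoderAlloc, then ioSend() pushes the frame to the
// encoder -- but there is no API here to attach a presentation timestamp,
// which is why the pipeline below detours through EGL.
```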

New pipeline

 [Camera] --> [ImageAllocation] --> [RenderScript] --> [IntermediateAllocation] --> [EglWindowSurfaceForEncoder] --> [MediaCodec] 

There are two big changes in the new pipeline: IntermediateAllocation and EglWindowSurfaceForEncoder

IntermediateAllocation is a SurfaceTexture-backed Allocation, similar to the full-screen camera texture used in CameraCaptureActivity.

EglWindowSurfaceForEncoder wraps the encoder's input surface, similar to RecordFBOActivity#startEncoder.

The key point here is setting your own OnFrameAvailableListener .

Setup code

 void setup() {
     mEglWindowSurfaceForEncoder = new WindowSurface(
             mEglCore, encoderCore.getInputSurface(), true);

     mFullScreen = new FullFrameRect(
             new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
     mTextureId = mFullScreen.createTextureObject();
     mSurfaceTexture = new SurfaceTexture(mTextureId);

     Type renderType = new Type.Builder(renderScript, Element.RGBA_8888(renderScript))
             .setX(width)
             .setY(height)
             .create();
     mIntermediateAllocation = Allocation.createTyped(
             renderScript,
             renderType,
             Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
     // Route the allocation's output into the SurfaceTexture created above
     mIntermediateAllocation.setSurface(new Surface(mSurfaceTexture));

     mAllocationFromCamera = ...
 }

OnNewCameraImage

 // Copy the camera frame into the intermediate allocation, then push it
 // to the SurfaceTexture; ioSend() is what fires onFrameAvailable.
 mIntermediateAllocation.copyFrom(mAllocationFromCamera);
 mIntermediateAllocation.ioSend();

OnFrameAvailableListener

 mSurfaceTexture.setOnFrameAvailableListener(
         new SurfaceTexture.OnFrameAvailableListener() {
             @Override
             public void onFrameAvailable(SurfaceTexture surfaceTexture) {
                 // Latch the image data from the camera
                 mSurfaceTexture.updateTexImage();

                 // Draw the frame
                 mSurfaceTexture.getTransformMatrix(mSTMatrix);
                 mFullScreen.drawFrame(mTextureId, mSTMatrix);

                 // Latch the frame to the encoder input
                 mEglWindowSurfaceForEncoder.setPresentationTime(timestampNanos);
                 mEglWindowSurfaceForEncoder.swapBuffers();
             }
         });

The above code must run with the EGL context current (i.e. on the OpenGL rendering thread).
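One way to satisfy that requirement (a sketch, assuming a dedicated render thread owns the EGL context) is to deliver the callbacks on that thread via the two-argument overload of setOnFrameAvailableListener:

```java
// Route onFrameAvailable callbacks to a dedicated GL thread. The EGL context
// wrapping the encoder surface must be made current on this thread (e.g.
// mEglWindowSurfaceForEncoder.makeCurrent()) before drawFrame()/swapBuffers()
// run on it. frameListener is the listener shown above.
HandlerThread glThread = new HandlerThread("GlEncoderThread");
glThread.start();
Handler glHandler = new Handler(glThread.getLooper());

mSurfaceTexture.setOnFrameAvailableListener(frameListener, glHandler);
```

The Handler overload requires API 21+; on older releases the callback arrives on whichever thread created the SurfaceTexture, provided it has a Looper.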


Source: https://habr.com/ru/post/981643/
