I am trying to use MediaCodec and MediaMuxer, and I am running into some problems.
Here is the error from logcat:
```
12-13 11:59:58.238: E/AndroidRuntime(23218): FATAL EXCEPTION: main
12-13 11:59:58.238: E/AndroidRuntime(23218): java.lang.RuntimeException: Unable to resume activity {com.brendon.cameratompeg/com.brendon.cameratompeg.CameraToMpeg}: java.lang.IllegalStateException: Can't stop due to wrong state.
12-13 11:59:58.238: E/AndroidRuntime(23218):     at android.app.ActivityThread.performResumeActivity(ActivityThread.java:2918)
```
The crash happens at mStManager.awaitNewImage(), which is called from the onResume() method, and logcat reports "camera frame timeout".
mStManager is an instance of the SurfaceTextureManager class, and the "camera frame timeout" message is thrown from its awaitNewImage() method. I have added that class to my post below.
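For reference, my awaitNewImage() follows the bigflake CameraToMpegTest sample and looks roughly like this (a sketch based on that sample; my copy may differ slightly in the timeout value and message text):

```java
// Sketch of awaitNewImage(), following the bigflake CameraToMpegTest sample.
public void awaitNewImage() {
    final int TIMEOUT_MS = 2500;

    synchronized (mFrameSyncObject) {
        while (!mFrameAvailable) {
            try {
                // Wait for onFrameAvailable() to signal that a frame arrived.
                // If the signal never comes, give up rather than stall forever.
                mFrameSyncObject.wait(TIMEOUT_MS);
                if (!mFrameAvailable) {
                    throw new RuntimeException("Camera frame wait timed out");
                }
            } catch (InterruptedException ie) {
                throw new RuntimeException(ie);
            }
        }
        mFrameAvailable = false;
    }

    // Latch the new frame into the texture.
    mTextureRender.checkGlError("before updateTexImage");
    mSurfaceTexture.updateTexImage();
}
```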
Part of my code looks like this (the onCreate and onResume methods):
```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    // arbitrary but popular values
    int encWidth = 640;
    int encHeight = 480;
    int encBitRate = 6000000;      // Mbps
    Log.d(TAG, MIME_TYPE + " output " + encWidth + "x" + encHeight + " @" + encBitRate);

    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_camera_to_mpeg);

    prepareCamera(encWidth, encHeight);
    prepareEncoder(encWidth, encHeight, encBitRate);
    mInputSurface.makeCurrent();
    prepareSurfaceTexture();
    mCamera.startPreview();
}

@Override
public void onResume() {
    try {
        long startWhen = System.nanoTime();
        long desiredEnd = startWhen + DURATION_SEC * 1000000000L;
        SurfaceTexture st = mStManager.getSurfaceTexture();
        int frameCount = 0;

        while (System.nanoTime() < desiredEnd) {
            // Feed any pending encoder output into the muxer.
            drainEncoder(false);

            // Switch up the colors every 15 frames. Besides demonstrating the use of
            // fragment shaders for video editing, this provides a visual indication of
            // the frame rate: if the camera is capturing at 15fps, the colors will change
            // once per second.
            if ((frameCount % 15) == 0) {
                String fragmentShader = null;
                if ((frameCount & 0x01) != 0) {
                    fragmentShader = SWAPPED_FRAGMENT_SHADER;
                }
                mStManager.changeFragmentShader(fragmentShader);
            }
            frameCount++;

            // Acquire a new frame of input, and render it to the Surface. If we had a
            // GLSurfaceView we could switch EGL contexts and call drawImage() a second
            // time to render it on screen. The texture can be shared between contexts by
            // passing the GLSurfaceView EGLContext as eglCreateContext() share_context
            // argument.
            mStManager.awaitNewImage();
            mStManager.drawImage();

            // Set the presentation time stamp from the SurfaceTexture time stamp. This
            // will be used by MediaMuxer to set the PTS in the video.
            if (VERBOSE) {
                Log.d(TAG, "present: " + ((st.getTimestamp() - startWhen) / 1000000.0) + "ms");
            }
            mInputSurface.setPresentationTime(st.getTimestamp());

            // Submit it to the encoder. The eglSwapBuffers call will block if the input
            // is full, which would be bad if it stayed full until we dequeued an output
            // buffer (which we can't do, since we're stuck here). So long as we fully drain
            // the encoder before supplying additional input, the system guarantees that we
            // can supply another frame without blocking.
            if (VERBOSE) Log.d(TAG, "sending frame to encoder");
            mInputSurface.swapBuffers();
        }

        // send end-of-stream to encoder, and drain remaining output
        drainEncoder(true);
    } catch (Exception e) {
        Log.d(TAG, e.getMessage());
        // release everything we grabbed
        releaseCamera();
        releaseEncoder();
        releaseSurfaceTexture();
    }
}
```
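One thing I noticed while comparing against the bigflake sample: there the equivalent of this loop runs on its own thread, not in an Activity lifecycle callback. If the SurfaceTexture's onFrameAvailable() callback is delivered on the main Looper, a loop like mine blocks the very thread that has to deliver it, so awaitNewImage() can never be signalled. Could that be the cause? A rough sketch of what moving the loop off the UI thread might look like (doEncodeLoop() is a hypothetical wrapper around the while-loop above):

```java
// Rough sketch (hypothetical): run the capture/encode loop on a worker
// thread so the main Looper stays free to deliver onFrameAvailable().
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            doEncodeLoop();  // hypothetical wrapper around the while-loop above
        } finally {
            // release everything we grabbed
            releaseCamera();
            releaseEncoder();
            releaseSurfaceTexture();
        }
    }
}, "encode-loop").start();
```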
Here is the class in the code that relates to the error:
```java
private static class SurfaceTextureManager implements SurfaceTexture.OnFrameAvailableListener {
    private SurfaceTexture mSurfaceTexture;
    private CameraToMpeg.STextureRender mTextureRender;
    private Object mFrameSyncObject = new Object();
    // ...
```
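The frame-available callback in the same class is what wakes up awaitNewImage(). Following the bigflake sample, it looks roughly like this (again a sketch; my copy may differ slightly):

```java
// Sketch, following the bigflake sample: this runs on the thread whose
// Looper the SurfaceTexture is attached to. If that thread is blocked,
// the callback never runs and awaitNewImage() times out.
@Override
public void onFrameAvailable(SurfaceTexture st) {
    if (VERBOSE) Log.d(TAG, "new frame available");
    synchronized (mFrameSyncObject) {
        if (mFrameAvailable) {
            throw new RuntimeException("mFrameAvailable already set, frame could be dropped");
        }
        mFrameAvailable = true;
        mFrameSyncObject.notifyAll();
    }
}
```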
Does anyone have any ideas? Thanks!