First, thanks to this forum for giving me (a long-time lurker) so much help with my first application (SwatchMatic). I'm trying to do something more difficult for my second, and this is the first time I'm asking a question.
As mentioned elsewhere on this forum, MediaMetadataRetriever cannot be counted on to extract bitmaps from video at fine-grained time steps, even when timestamps are given in microseconds, and even when OPTION_CLOSEST is used (to get the closest frame, not just key/sync frames).
So something like the following returns a lot of duplicate frames, rather than the ~30 unique frames you would expect from one second of video (on my Samsung test device, anyway; from what I've read on other forums, Samsung builds seem to be worse than other manufacturers' at delivering all the frames).
for (long timeUs = 0; timeUs < 1000000; timeUs += 1000000 / 30) {
    Bitmap myBitmap = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);
    ...
}
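For reference, the timestamps being requested can be generated with plain long arithmetic (a minimal pure-Java sketch with no Android dependencies; the class and method names here are my own, not from any library). Using long also avoids int overflow on longer clips:

```java
public class FrameTimestamps {
    // Returns evenly spaced presentation timestamps (in microseconds)
    // for sampling a clip of the given duration at the given frame rate.
    public static long[] timestampsUs(long durationUs, int fps) {
        long stepUs = 1_000_000L / fps;          // e.g. 33333 us at 30 fps
        int count = (int) (durationUs / stepUs); // how many samples fit
        long[] out = new long[count];
        for (int i = 0; i < count; i++) {
            out[i] = i * stepUs;
        }
        return out;
    }
}
```

Each value would then be passed to retriever.getFrameAtTime(timeUs, OPTION_CLOSEST); whether distinct bitmaps actually come back is, as noted above, device-dependent.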
A typical answer is to use ffmpeg, but that's such a bloated solution for the very limited functionality I need (a few 30-60 frame bitmap sequences from MPEG-4 encoded video), and it opens me up to having to distribute LGPL code with my application, etc.
I've also read on this forum that there is no easy way to grab frames from a VideoView (which is a SurfaceView subclass). Some Google engineers have actually answered that question here.
But what about ThumbnailUtils.createVideoThumbnail? I could live with the MINI_KIND resolution it delivers, but it seems to return only the middle frame (in time) of the clip. Can it be overridden/extended in some way to produce frames at a specific time?
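For what it's worth, here is a sketch of the kind of wrapper I'm imagining, rather than an override of ThumbnailUtils itself (TimedThumbnail and thumbnailAt are my own hypothetical names): it uses MediaMetadataRetriever to grab the frame nearest a chosen time, then ThumbnailUtils.extractThumbnail to scale it to MINI_KIND-like dimensions (512x384, per the MediaStore docs). It would of course be subject to the same seeking imprecision described above.

```java
import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;
import android.media.ThumbnailUtils;

public class TimedThumbnail {
    // Hypothetical helper: decode the frame closest to timeUs (microseconds)
    // and scale it to roughly MINI_KIND size. Returns null on failure.
    public static Bitmap thumbnailAt(String videoPath, long timeUs) {
        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        try {
            retriever.setDataSource(videoPath);
            Bitmap frame = retriever.getFrameAtTime(
                    timeUs, MediaMetadataRetriever.OPTION_CLOSEST);
            if (frame == null) {
                return null;
            }
            // 512x384 matches MediaStore's MINI_KIND thumbnail dimensions.
            return ThumbnailUtils.extractThumbnail(frame, 512, 384);
        } finally {
            retriever.release();
        }
    }
}
```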