Can ffmpeg burn a timecode into video?

I need to burn a timecode into a video, and I'm wondering whether this is possible with ffmpeg?

+12
c# ffmpeg video-processing timecode
Jul 03 '10 at 1:13
7 answers

The short answer is no.

The long answer is yes, but not without using a separate library to create frames with the timecode rendered on them (with transparency filling the rest of the frame), and then using FFmpeg to overlay those frames on the existing video. Off the top of my head I don't know how to do this, but I'm sure that if you're creative you can figure it out.

Edit: I've been working on this because it's an interesting question / project for me. I've come a little further with a solution by writing a Perl script that generates an .srt file, with the timecode embedded in it, for any given video file that FFmpeg can read metadata from. It uses the Video::FFmpeg library to read the duration and saves the subtitle file as ${video}.srt . This makes it appear automatically in Mplayer if you put the following lines in ~/.mplayer/config :
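For the subtitle-generation part, here is a minimal plain-shell sketch of the same idea (not the Perl script above, just a hypothetical equivalent): given a clip duration in whole seconds, it emits one SRT cue per second showing the elapsed time.

```shell
#!/bin/sh
# Sketch: write an .srt that displays elapsed time, one cue per second.
# Assumes the duration in whole seconds is already known (e.g. via ffprobe).

srt_time() {    # seconds -> HH:MM:SS,000
    printf '%02d:%02d:%02d,000' $(($1 / 3600)) $(($1 % 3600 / 60)) $(($1 % 60))
}

make_srt() {    # usage: make_srt DURATION > video.srt
    duration=$1
    i=0
    while [ "$i" -lt "$duration" ]; do
        printf '%d\n%s --> %s\n%s\n\n' \
            $((i + 1)) "$(srt_time $i)" "$(srt_time $((i + 1)))" "$(srt_time $i)"
        i=$((i + 1))
    done
}

make_srt 3    # prints three one-second cues
```

Redirect the output to ${video}.srt and Mplayer's sub-fuzziness matching picks it up as described.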

 # select subtitle files automatically in the current directory,
 # all files matching the basename of the currently playing file
 sub-fuzziness=1

Work continues on positioning the rendered subtitles, overlaying them on the video, and re-encoding in the same format. I will update this post as I learn more.

+3
Jul 03 '10 at 1:17

FFmpeg's drawtext filter works for me; you specify the starting timecode and its format:

 -vf drawtext="fontsize=15:fontfile=/Library/Fonts/DroidSansMono.ttf:\
 timecode='00\:00\:00\:00':rate=25:text='TCR\:':fontsize=72:fontcolor='white':\
 boxcolor=0x000000AA:box=1:x=860-text_w/2:y=960"

You have to specify the timecode format in the form hh:mm:ss[:;,]ff. Note that you have to escape the colons in the timecode format string, and you have to specify the timecode rate (here 25 fps). You can also specify additional text - here it is "TCR:".
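To see why the rate matters: a non-drop timecode hh:mm:ss:ff corresponds to absolute frame number (h*3600 + m*60 + s) * rate + ff, so a wrong rate shifts every label from that point on. A quick shell illustration (a hypothetical helper, not part of ffmpeg):

```shell
#!/bin/sh
# frames H M S FF RATE -> absolute frame number for a non-drop timecode
frames() {
    echo $(( ( $1 * 3600 + $2 * 60 + $3 ) * $5 + $4 ))
}

frames 0 0 10 5 25    # 10 s at 25 fps plus 5 frames -> 255
frames 0 0 10 5 30    # same timecode read at 30 fps -> 305
```

The same "00:00:10:05" stamp names two different frames depending on the rate, which is exactly the mismatch you see when the rate option is wrong.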

You can get the frame rate with ffprobe and a bit of shell-fu:

 frame_rate=$(ffprobe -i "movie.mov" -show_streams 2>&1|grep fps|sed "s/.*, \([0-9.]*\) fps,.*/\1/") 
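The grep/sed approach above depends on the exact wording of ffmpeg's log output. If your ffprobe supports -show_entries, asking for r_frame_rate directly is less fragile; it prints a rational such as 30000/1001, which one line of awk turns into a decimal. (The ffprobe invocation is shown as a comment and assumes a reasonably recent build; the conversion helper below is the testable part.)

```shell
#!/bin/sh
# Ask ffprobe for the frame rate directly (prints e.g. "25/1" or "30000/1001"):
#   ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate \
#           -of default=noprint_wrappers=1:nokey=1 movie.mov
#
# Convert the rational to a decimal suitable for the drawtext rate option:
to_fps() {
    echo "$1" | awk -F/ '{ printf "%.3f\n", ($2 ? $1 / $2 : $1) }'
}

to_fps 30000/1001    # NTSC rate -> 29.970
to_fps 25/1          # PAL rate  -> 25.000
```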

So you can easily plumb it all together in a batch-processing script, for example:

 for i in *.mov
 do
     frame_rate=$(ffprobe -i "$i" -show_streams 2>&1|grep fps|sed "s/.*, \([0-9.]*\) fps,.*/\1/")
     clipname=$(basename "$i" .mov)
     ffmpeg -i "$i" -vcodec whatever -acodec whatever \
         -vf drawtext="fontsize=15:fontfile=/Library/Fonts/DroidSansMono.ttf:\
         timecode='00\:00\:00\:00':rate=$frame_rate:text='$clipname ':\
         fontsize=72:fontcolor='white':boxcolor=0x000000AA:\
         box=1:x=860-text_w/2:y=960" "${i/.mov/_tc.mov}"
 done

This will burn the clip name and a rolling timecode in a semi-transparent box at the bottom centre of a 1920x1080 frame.

Edit: since I have come over to the dark side, I now do this in a Windows PowerShell environment, and this is what I use:

 ls -R -File -filter *.M* | %{
     ffmpeg -n -i $_.fullname -vf drawtext="fontsize=72:x=12:y=12:`
     timecode='00\:00\:00\:00':rate=25:fontcolor='white':`
     boxcolor=0x000000AA:box=1" `
     ("c:\path\to\destination\{0}" -F ($_.name -replace 'M[OPT][V4S]', 'mp4'))
 }

This creates mp4s from a folder containing .MOV, .MP4 and .MTS files (the -filter argument makes it look for files with *.M* in the name, which you would need to change if you are processing e.g. .AVI files). It's a bit more minimal: it just uses libx264 with default settings as the output codec and doesn't specify a font, etc. The timecode in this case is burned into the top left of the frame.

+21
Sep 03 '12 at 8:27

The drawtext filter mentioned by @stib is the key to inserting the time. However, the timecode option does not match wall-clock time. If you get the r (timecode_rate) parameter wrong, your timecode will not match the playback time.

There are other options; for example, text='%{prt}' displays the elapsed time with microsecond precision. Command:

 ffmpeg -i video.mp4 -vf "drawtext=text='%{prt}'" output.mp4 

To get a wall clock instead, I had to use the legacy strftime option. This has an undocumented basetime option that can be used to set the starting time in microseconds. An example where I set the start time to 12:00 noon on 1 December 2013 (the $(...) part is command substitution performed by the shell) and display only the time (see the strftime manual for possible formats):

 ffmpeg -i video.mp4 -vf "drawtext=expansion=strftime: \
     basetime=$(date +%s -d'2013-12-01 12:00:00')000000: \
     text='%H\\:%M\\:%S'" output.mp4

\\: escapes the : , which would otherwise be interpreted as a parameter separator.

Another example: a command to insert the date + time in a black box a few pixels from the top left corner, with "some padding" (actually two spaces and newlines at the edges):

 newline=$'\r'
 ffmpeg -i video.mp4 -vf "drawtext=x=8:y=8:box=1:fontcolor=white:boxcolor=black: \
     expansion=strftime:basetime=$(date +%s -d'2013-12-01 12:00:00')000000: \
     text='$newline %Y-%m-%d %H\\:%M\\:%S $newline'" output.mp4

Another example to get microseconds below the clock:

 newline=$'\r'
 ffmpeg -i video.mp4 -vf "drawtext=expansion=strftime: \
     basetime=$(date +%s -d'2013-12-01 12:00:00')000000: \
     text='$newline %H\\:%M\\:%S $newline':fontcolor=white:box=1:boxcolor=black, \
     drawtext=text='$newline %{pts} $newline': \
     y=2*th/3:box=1:fontcolor=white:boxcolor=black" output.mp4

This exploits the fact that the text actually spans three lines and that both texts have a newline (carriage return, ^M ) prepended and appended. Without those newlines, the surrounding spaces would be trimmed.

Other tips:

  • -vf and -filter:v are equivalent.
  • You cannot specify a filter option several times; e.g. -vf drawtext=text=1 -vf drawtext=text=2 will only draw the second text. You can combine filters with a comma, as I showed above.
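Since the comma-joined filtergraph string is easy to get wrong by hand, here is a tiny hypothetical shell helper (not part of ffmpeg) that assembles one -vf argument from any number of filter snippets:

```shell
#!/bin/sh
# Join any number of filter snippets into a single filtergraph string,
# separated by commas, as ffmpeg's -vf option expects.
join_filters() {
    out=""
    for f in "$@"; do
        if [ -z "$out" ]; then out=$f; else out="$out,$f"; fi
    done
    printf '%s\n' "$out"
}

vf=$(join_filters \
    "drawtext=text=one:y=10" \
    "drawtext=text=two:y=40")
printf '%s\n' "$vf"    # drawtext=text=one:y=10,drawtext=text=two:y=40
# Then: ffmpeg -i in.mp4 -vf "$vf" out.mp4
```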
+12
Dec 01 '13 at 15:22

In later builds you can use the drawtext filter (the "t" in my examples is, I believe, the timestamp of the frame) to insert text. It also works with strftime expansion of the "current" system time.

+2
Aug 11 '12 at 5:17

The simplest solution I found shows the clock time at which the file was captured, not its duration, and it works. Based on this post - thanks!

 D:\Temp\2>ffmpeg.exe -i test.avi -vf "drawtext=fontfile=C\\:/Windows/Fonts/arial.ttf:timecode='00\:20\:10\:00':rate=25:text='TCR\:':fontsize=46:fontcolor=white:x=500:y=50:box=1:boxcolor=0x00000000@1" -f mp4 textts.mp4

It's that simple with timecode - set your start time, and the counter runs from there. (Windows example.)

+1
Aug 09 '15 at 6:13

Here is my solution, and I believe it is the right one, because it avoids having to set the rate manually and lets you format the output.

 ffmpeg -i test.mp4 -vf \
     "drawtext=fontfile=arialbd.ttf:text='%{pts\:gmtime\:0\:%H\\\:%M\\\:%S}'" test.avi

This produces a stamp in the format HH:MM:SS; you can change it to whatever you want using strftime formats.

It might look like this stamps the video with gmtime , but that is not what happens. It actually passes the current time of the video, in seconds, to gmtime, producing a date of 1/1/1970 and a time that is however many seconds past midnight. So you just discard the date part and use the time part.
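You can verify this trick with date: feeding an elapsed-seconds value as a Unix epoch timestamp and formatting it in UTC yields exactly the time-of-day part that %{pts:gmtime:0:...} renders. (This assumes GNU date; on BSD/macOS use date -u -r 5025 instead of -d.)

```shell
#!/bin/sh
# 5025 seconds into the video -> gmtime(5025) -> 1970-01-01 01:23:45 UTC;
# dropping the date part leaves the elapsed time as HH:MM:SS.
elapsed=5025
date -u -d "@$elapsed" +%H:%M:%S    # GNU date; prints 01:23:45
```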

Note the triple-escaped colons inside the pts function, which you need when typing them as I did above. You can also see that I copied my Arial Bold font file into the working directory for simplicity.

+1
Aug 12 '17 at 1:41

FFmpeg can do most of the work, but it won't do the whole job packaged up for you. Using the FFmpeg libraries you can decode all the frames in sequence, and each one comes with a presentation timestamp (some formats carry extra time metadata, but the PTS is what you want to look at first). Then you can draw the text onto each decoded frame yourself. I use Qt for similar things, painting with QPainter onto a QImage wrapping the frame data, but there may be another image-drawing API that you find more natural. Then you use the FFmpeg API to build a compressed video from your freshly drawn frames. It gets a little trickier if you also want audio. My own work doesn't really care about sound, so I never bothered to learn the audio side of the API. Basically, as you loop reading packets from the file, some of them will be audio. Instead of discarding them as I do, you would keep them and write them to the output file as you receive them.

I've used the C API, not C#, so I don't know whether there are any binding-specific quirks you would need to worry about.

-1
Jul 13 '10 at 2:10


