Stephen Dixon
Forum Replies Created
-
You’ll probably want to look at a shell script for that.
What environment are you in? In a bash shell (eg on a linux or OSX machine) you might want to use a “for” loop like:
for (( i=0; i<3600; i+=180 ))
do
  ffmpeg -ss $i -i input.mov -t 10 -acodec copy -vcodec copy output_$i.mov
done
(The codec options go after -i so they apply to the output.) That will make separate movies, however. You would have to use ffmpeg to join them all together if you wanted one movie, as per this page.
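For the joining step, newer ffmpeg builds include a concat demuxer that can splice the pieces together without re-encoding. A sketch (list.txt and joined.mov are my names; the loop mirrors the one above, and the ffmpeg line is guarded so the snippet is safe to paste on a box without ffmpeg installed):

```shell
# list the clips in encode order (a plain glob would sort them lexically,
# putting output_1080 before output_180), then join them losslessly
for (( i=0; i<3600; i+=180 )); do
  printf "file 'output_%d.mov'\n" "$i"
done > list.txt
command -v ffmpeg >/dev/null && ffmpeg -f concat -i list.txt -c copy joined.mov
```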
-
Hmm. Post the results you get on the command line.
-
Don’t forget ffmbc too – another fork, aimed at professional and broadcast users. Baptiste Coudurier maintains it.
https://code.google.com/p/ffmbc/
-
Weird. You could try using libavfilter to do it:
-vf scale="320:180"
Stephen Dixon
Editor, Animator, Motionographer
Museum Victoria -
Included in the source for ffmpeg is a tools folder which contains a program called qt-faststart. That will rearrange the file to put the moov (movie info) atom before the mdat (data) atom, which is, I believe, what you need to do. You can also get Windows and OSX binaries for it from the developer here, or download the source from ffmpeg’s git repository here.
-
Actually, I downloaded it and had a look; it does have x264 and vpx, but I’m not sure if the presets are all there. See how you go.
The link for ffmbc is https://code.google.com/p/ffmbc/
-
I’m not sure how that version was compiled (I don’t have a working Windows box at the moment), but for x264 and webm you need a version of ffmpeg that was compiled with the libx264 and libvpx libraries. ffmpeg.org is the best place to go for the source code and instructions on how to compile, if you can get past it being a bit gruff and nerdy.
To find out whether your version has x264 and vpx capabilities try
ffmpeg -version
That will tell you all the things it was compiled with.

Another thing worth checking out is ffmbc, a “professional” fork of the ffmpeg project, which is better suited to a production setting and supports things like DNxHD and Apple ProRes out of the box.
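If you want to test for a particular encoder from a script, you can grep the configuration line that `ffmpeg -version` prints, since it lists every --enable-* switch the build was configured with. A sketch (guarded so it is safe to run even on a box without ffmpeg):

```shell
# the "configuration:" line of `ffmpeg -version` lists every --enable-* flag;
# grep it for the encoder you care about
if command -v ffmpeg >/dev/null && ffmpeg -version 2>/dev/null | grep -q -- --enable-libx264; then
  echo "libx264 available"
else
  echo "libx264 not found in this build"
fi
```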
-
1. Depends on what you are using to play the video. The bad news is that you will probably have to export to at least two formats, possibly three. There’s some good info at https://camendesign.com/code/video_for_everybody and https://diveintohtml5.info/detect.html#video among others.
2. Don’t bother with flv; Flash can play mp4 video. There are several good Flash players that can be used as an HTML fallback, and they’ll all take an mp4 file. There are links on the Video For Everybody page I mentioned above. But the way things are, you’ll need at least two of mp4, ogv and webm. Thanks to Google, Apple and Microsoft for making things so frikkin difficult.

Here’s what I use to generate the above for web display:
h264/x264/mp4 whatever you want to call it
For h.264 I always use
-vcodec libx264 -preset slow -profile main -crf 20 -acodec libfaac -ab 128k
Change the crf and ab settings to suit your idea of quality vs size (crf is inversely related to quality: 0 is best; higher numbers mean worse quality / smaller file sizes). I use the main profile and slow preset because it seems to be most compatible with all the gadgets and gizmos that people play video on these days. It won’t give you as much bang for your bit as using higher profiles and slower encodes, but more people will actually be able to view it. And using the presets is way easier than hand rolling some arcane compression tweakage.

ogv
-vcodec libtheora -qscale 6 -acodec libvorbis -ab 128k
Again, adjust the quality till it suits you. Qscale is the quality setting in this case; low is bad / small, high is good / large.

webm
-vcodec libvpx -qscale 6 -acodec libvorbis -ab 128k
just like ogv. There aren’t anywhere near as many knobs and buttons to twiddle with webm and ogv as there are with libx264.
3. ffmpeg is able to automatically recognise the format of the input video, so don’t fret.
ffmpeg -i [your video] [encoding options as specified above] [your output video with appropriate extension - eg output.mp4, output.ogv or output.webm]
and that’s all. Magick!

You might also want to add filters to do things like scale the video to an appropriate size, eg:
-vf scale="480:360"
(scale to 480x360px)
or
-vf scale="480:-1"
(scale to 480px wide and whatever height it needs to be to maintain the correct proportion).
HTH
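To save retyping all of that, the three encodes above can be wrapped in a small shell function. A sketch: it just prints the commands so you can check them before running anything; `web_encode_cmds` and the file names are mine, not anything standard.

```shell
# print the three web-encode commands for a given input file
# (echoes rather than runs them, so you can inspect or pipe to sh)
web_encode_cmds() {
  in="$1"
  base="${in%.*}"   # strip the extension for the output names
  echo "ffmpeg -i $in -vcodec libx264 -preset slow -profile main -crf 20 -acodec libfaac -ab 128k $base.mp4"
  echo "ffmpeg -i $in -vcodec libtheora -qscale 6 -acodec libvorbis -ab 128k $base.ogv"
  echo "ffmpeg -i $in -vcodec libvpx -qscale 6 -acodec libvorbis -ab 128k $base.webm"
}
web_encode_cmds clip.mov
```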
-
Stephen Dixon
April 9, 2012 at 3:30 pm in reply to: how to make multiple screen captures from a video file to jpegs?
See this thread: https://forums.creativecow.net/thread/291/774#777
-
from the ffmpeg online manual:
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video and will output them in files named ‘foo-001.jpeg’, ‘foo-002.jpeg’, etc. Images will be rescaled to fit the new WxH values.

If you want to extract just a limited number of frames, you can use the above command in combination with the -vframes or -t option, or in combination with -ss to start extracting from a certain point in time.
I’m not sure if it will work with numbers other than integers, but you could try using -r 0.0001667 (I had a test with ffmbc and even -r 1 didn’t work, so YMMV)
Or you could use the shell with a loop and a variable to just extract 1 frame at a given time. I’m not sure what environment you’re in, but with bash on linux/os x you’d do something like:
for i in {0..900}
do
  f=$(( i * 240 ))  # (240 because it's 60 seconds * 4)
  ffmpeg -i foo.avi -vframes 1 -ss $f -f image2 foo-$f.jpeg
done
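ffmpeg's -ss also accepts HH:MM:SS timestamps, which can make the grabbed times easier to eyeball than raw seconds. A small helper (a sketch; `secs_to_ts` is my name, and it's just bash arithmetic):

```shell
# convert a whole number of seconds to HH:MM:SS (usable with -ss, or in filenames)
secs_to_ts() {
  printf '%02d:%02d:%02d' $(( $1 / 3600 )) $(( ($1 % 3600) / 60 )) $(( $1 % 60 ))
}
secs_to_ts 3725   # → 01:02:05
```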