Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

Activity › Forums › Adobe After Effects › Interpreting Footage, Movie Delayed

  • Kevin Camp

    June 24, 2009 at 2:46 pm

Temporal compression comes in two forms, p-frame and b-frame compression: a given frame borrows data from previous frames (p-frames) or from both previous and following frames (b-frames). This achieves much better compression ratios and is common in many codecs, including MPEG-2, MPEG-4, H.264 and HDV.

AE doesn’t work well with this type of compression. When you work with it in a comp, AE has to reconstruct each frame from other frames, which slows it down and can cause errors. Rendering to a codec that uses this compression can cause issues too: you’re asking AE to render a frame, then compare it to other frames and eliminate the data that doesn’t change before writing the final frame. This type of compression should be handled after the render, where the frames can be analyzed and optimized in a multipass compression step.
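To make the p-frame idea concrete, here is a toy Python sketch (not a real codec, and the function names are my own invention): store the first frame in full, then store only the pixels that changed relative to the previous frame. Notice that decoding any given frame means replaying the whole chain of deltas up to it, which is exactly why frame-accurate compositing on such footage is slow.

```python
# Toy sketch of p-frame-style temporal compression on 1-D "frames"
# (lists of pixel values). Not a real codec -- just the delta idea.

def encode(frames):
    """Return (keyframe, deltas), where each delta maps pixel index -> new value."""
    keyframe = list(frames[0])
    deltas = []
    prev = keyframe
    for frame in frames[1:]:
        # keep only the pixels that differ from the previous frame
        delta = {i: v for i, (p, v) in enumerate(zip(prev, frame)) if p != v}
        deltas.append(delta)
        prev = frame
    return keyframe, deltas

def decode_frame(keyframe, deltas, n):
    """Rebuild frame n by replaying every delta up to n -- the costly part."""
    frame = list(keyframe)
    for delta in deltas[:n]:
        for i, v in delta.items():
            frame[i] = v
    return frame

frames = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 9, 5, 0]]
key, deltas = encode(frames)
print(decode_frame(key, deltas, 2))  # -> [0, 9, 5, 0]
```

The deltas here are tiny compared to the full frames, which is the whole appeal; the price is that random access to frame n costs n delta replays.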

    [Dave Slipp] “About the “save ram” preview, does it show a lot of settings, a Make-Movie-like screen? Do I choose the codec, and all other stuff?”

Actually, no. I think it automatically chooses Apple’s lossless Animation codec, and I don’t think you can change that (I think older versions of AE had a preview-format setting in the preferences, but it’s been too many years).

    Kevin Camp
    Senior Designer
    KCPQ, KMYQ & KRCW

  • Ht Davis

    March 20, 2015 at 9:22 pm

    Hey guys… Mind if I cut in on this dance?

VFR is a relic of the old days of film, specifically slideshow-style films run on a manual projector (the human factor made the film run at a variable frame rate).
Today you find it popping up in HD video shot with optical image stabilization, which actually drops frames in order to retain stability. It is also found in some effects where the speed is increased or decreased improperly (after the main render), and in some cartoons/anime where the effect is meant to amplify the “feeling” of action or motion scenes.
In broadcast television it hasn’t been an issue for the broadcaster, because most cameras are on stabilizers and tripods, and image stabilization has been moving inside interchangeable lenses (thanks to lessons learned from the M4 rifle with a floating barrel and scope). However, OIS is still employed by most current cameras, and frame rates have often been variable, even with professional cameras. With most NLEs this hasn’t been an issue, since most play the video based on its actual length in milliseconds (and allow editing in that mode), then blend frames when they need to in order to conform the output (or, in some cases, just output variable frame rate as necessary). With Adobe, the emphasis is on the output matching a more professional standard, and the sequence is what is played: the video is “conformed”, or played frame by frame, and its length is measured in frames, not milliseconds. This allows effects to be applied directly to frames as if they were simple JPEGs, which is the “old”, frame-by-frame way of working with video.
For Mr. Broadcast TV: VFR and VBR are different. Variable bit rate is not a problem; it is a standard of data transfer. It means that COMPRESSED video (a lossy codec, which loses some quality to keep the data small) is encoded by dropping data from the frames (keeping changes from the last frame and some from the next, with JPEG-style compression) in order to fit within a certain transfer-rate range. This lets the encoder adjust the compression (the data dropped) to the scene, maintaining similar QUALITY while drastically decreasing the amount of data needed to store the file and the time to pass the file to its destination (as with YouTube: conform to 10 Mbps or less and it will play great; go higher and many people won’t be able to play it effectively). When the file has more areas that are less active than others, more data can be dropped and the frames can easily be rebuilt on the fly. Try not to confuse the two measures: one is about data transfer, the other about actual pictures. They are related, but not the same; think dogs and wolves.
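Since VFR is about frame timing, not bit rate, you can spot it by looking at per-frame presentation timestamps. A quick, assumption-laden sketch: the timestamps below are hard-coded illustrative values in seconds (in practice you would extract them with a tool such as ffprobe, e.g. `ffprobe -select_streams v:0 -show_entries frame=pts_time`), and the tolerance is an arbitrary choice.

```python
# Sketch: detect a variable frame rate from per-frame timestamps (seconds).
# Timestamps here are made-up sample data, not from a real file.

def frame_intervals(timestamps):
    """Gaps between consecutive frames, in seconds."""
    return [round(b - a, 6) for a, b in zip(timestamps, timestamps[1:])]

def is_vfr(timestamps, tolerance=1e-3):
    """True if frame-to-frame gaps vary by more than `tolerance` seconds."""
    gaps = frame_intervals(timestamps)
    return max(gaps) - min(gaps) > tolerance

cfr = [0.0, 1/30, 2/30, 3/30]   # steady 30 fps
vfr = [0.0, 1/30, 2/30, 4/30]   # one frame dropped, e.g. by stabilization
print(is_vfr(cfr), is_vfr(vfr))  # -> False True
```

A constant-bit-rate file can be VFR and a variable-bit-rate file can be perfectly CFR; only the timestamps tell you which timing case you have.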
If you’re having problems with the frame rate, the only way to really fix it is to interpret the footage (which will adjust the actual milliseconds) or REFRAME, replacing missing frames with blended ones. If you only have a frame or two to replace, interpreting the footage isn’t going to hurt much, but it can still mess with your audio. So: first, process the audio into a file all its own, so you have the original just in case. Then adjust the speed of your audio as necessary in the sequence. If that doesn’t work well, you can use something like Twixtor or Twixtor Pro to re-interpret the whole video back to its normal time as an effect and leave the audio alone (it will guess the dropped frames for you). The other way is to interpret with After Effects inside a comp and output from there, or simply place the clip in AME and use frame blending to decompress to an intermediate file (like ProRes LT or a similar AVC), which will guess the frames for you.
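The audio warning deserves one line of arithmetic. Reinterpreting footage keeps every frame but changes the playback duration, so real-time audio drifts by the difference; the numbers below (one minute of 24 fps reinterpreted as 23.976) are illustrative values of my own, not from the thread.

```python
# Sketch: why "interpret footage" can knock audio out of sync.
# Reassigning the frame rate keeps all frames but changes the duration.

def interpreted_duration(n_frames, fps):
    """Playback length in seconds when n_frames are played at fps."""
    return n_frames / fps

n_frames = 1440  # one minute of footage shot at 24 fps
original = interpreted_duration(n_frames, 24)        # 60.0 s
reinterpreted = interpreted_duration(n_frames, 23.976)
drift = round(reinterpreted - original, 3)
print(drift)  # -> 0.06  (seconds the audio drifts over one minute)
```

Sixty milliseconds a minute sounds small, but over a long ceremony it becomes visible lip-sync error, which is why the advice above is to pull the audio out first.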
If your footage is really bad, which can happen with some prosumer cameras being handheld while you get bumped around in a crowd, you can try an effect. I usually just blur that part of the video in a comp in AE and let Twixtor blend it, which results in some heavier motion blur (extreme blur, even) but leaves the scene discernible and acts more like an effect. If it happens at the start of a transition or at a cut, you can use transitions to make it even less destructive visually.
I’ve had this type of problem in the more horrid sense, i.e. the family member (hobbyist or complete novice) who tries to shoot the wedding and comes to me to edit and process the video. I’ve used AME in most cases, with frame blending, and I just cut out the most horrid areas at the beginning and end of any clips, or blur them, run them through Twixtor, then add a transition to blend into the next clip (edit mark). When it falls right in the middle of something important, I use edit marks and add a transition between them to crossfade/blur the view and keep it discernible, without letting it hurt the composition of the video. This is usually acceptable to those clients, and some even find it a pleasant addition. There’s heavy work in it, and many effect layers, but with the right plugins, or even the right combination of blur and transitions, you can get around some VFR artifacts.
    I hope this is helpful.

  • Ht Davis

    August 3, 2017 at 3:23 am

Many people have been coming to me from here and other sites about VFR, complaining that it’s so common, and so on.
I output to files that will play on old players, new players (disc/set-top), broadcast and digital, all at once. If you only have one output target, get Vegas or another cheap title. They rip up audio during editing, and when you finish, you can upload to YouTube just fine. YouTube will reprocess the video and fix the VFR their way. Its quality varies, and I’ve heard a rumor or two that they are planning to remove it from the free services.

Some people have actually mixed up VFR with another problem: mixing frame rates in sequences and comps. You can mix frame rates all you want, but your output will conform to your sequence/comp settings. Here are some things to note:

If you upconvert your frame rate from 24p, you’ll have to create new frame information for the missing frames, or you can speed up your video. When you create the frames, you want them to be transitional, or they might introduce jittery motion. Frame blending helps when you have more, or faster, motion. When you have light or little motion, frame sampling works fine, as there isn’t enough difference in the motion to cause jitter. You can apply different methods to different sections by using cuts in Premiere, or by clipping up your video in AE and setting different methods in different comps so the method matches the motion.
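Here is a minimal sketch of the two methods on 1-D “frames” (single brightness values), assuming a 24 → 30 fps upconversion. Each output frame n lands at source position t = n × 24/30; frame *sampling* takes the nearest source frame, frame *blending* mixes the two neighbors weighted by where t falls between them. The function names and data are mine, not Premiere’s or AE’s.

```python
# Sketch of frame sampling vs frame blending for a 24 -> 30 fps upconvert.
# "Frames" are single brightness values so the weighting is easy to see.

SRC_FPS, DST_FPS = 24, 30

def sample(frames, n):
    """Frame sampling: nearest source frame to output frame n."""
    t = n * SRC_FPS / DST_FPS
    return frames[min(round(t), len(frames) - 1)]

def blend(frames, n):
    """Frame blending: weighted mix of the two source frames around t."""
    t = n * SRC_FPS / DST_FPS
    lo = min(int(t), len(frames) - 1)
    hi = min(lo + 1, len(frames) - 1)
    frac = t - int(t)
    return frames[lo] * (1 - frac) + frames[hi] * frac

src = [0.0, 10.0, 20.0, 30.0]  # four source frames with steady motion
print([round(blend(src, n), 3) for n in range(5)])
# -> [0.0, 8.0, 16.0, 24.0, 30.0]  (smooth, transitional frames)
```

With steady motion, blending produces evenly spaced in-between values, while sampling repeats a frame somewhere in the run, which is the jitter the post describes; with near-static content the two are indistinguishable, so sampling is fine there.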

Downconverting is much easier. You can INTERPRET FOOTAGE in Premiere or After Effects so that your footage uses a frame-drop pattern to fit into the same duration. Another way is to slow everything down: if you are at 30 and want to come down to 24, you’ll need to drop to 80% speed. If you are at 60, you’ll come down to 40% to get to 24, which puts your video in slow motion. If you have both 60 and 24, you’re best off using 30 as a middle ground and setting both up with blends.
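Those slow-down percentages are just target rate over source rate, which is worth seeing once:

```python
# The slow-down percentages above are simply target_fps / source_fps x 100.

def slowdown_percent(source_fps, target_fps):
    return round(target_fps / source_fps * 100, 2)

print(slowdown_percent(30, 24))  # -> 80.0  (30 -> 24)
print(slowdown_percent(60, 24))  # -> 40.0  (60 -> 24, i.e. slow motion)
print(slowdown_percent(60, 30))  # -> 50.0  (60 -> 30)
```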

    The long way:
This is a long method that actually performs a blend of several frames, weighted for time, when converting down. First, put your footage, as is, into your comp/sequence at your chosen rate, then adjust the speed of the clip in the sequence. Going down you’ll slow down, so the speed is 80%, 50% or 40% depending on the drop (30→24, 60→30, 60→24 respectively). Once you’ve done that, nest the sequence/comp in another comp and speed it up: 125%, 200% or 250% respectively (the reciprocals, so the total duration comes back to normal). When rendering out, set interpolation to FRAME BLENDING in the export dialog. This will blend the frames that land in the same frame time, weighted toward the one occupying that slot the longest, and give you a transitional frame.
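The two speed changes have to multiply out to 100% or the clip length changes, so the second pass is always the reciprocal of the first. A quick check of the pairs for the three drops:

```python
# "Long way" round trip: slow by target/source, then speed the nested comp
# back up by the reciprocal, so total duration is unchanged and frames
# sharing a slot get time-weighted blends on export.

def long_way_speeds(source_fps, target_fps):
    slow = target_fps / source_fps      # first pass, e.g. 0.8 for 30 -> 24
    speed_up = 1 / slow                 # nested comp, e.g. 1.25
    return round(slow * 100, 2), round(speed_up * 100, 2)

for pair in [(30, 24), (60, 30), (60, 24)]:
    print(pair, long_way_speeds(*pair))
# (30, 24) -> (80.0, 125.0)
# (60, 30) -> (50.0, 200.0)
# (60, 24) -> (40.0, 250.0)
```

Note the products: 80% × 125%, 50% × 200% and 40% × 250% all come back to exactly 100%, which is what keeps the nested comp in sync with the original audio.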

    The main rule:
You are usually better off with the long way, and with converting your frame rate down rather than up.

