Creative Communities of the World Forums

The peer to peer support community for media production professionals.


  • let's discuss deinterlacing

    Posted by Tore Gresdal on October 13, 2007 at 8:30 am

    Hi Everyone!

    From searching the forum I am certainly not the only one to ask about deinterlacing, but there are a few things I would like to clear up or understand better. There are various tutorials here on the Cow on deinterlacing and they are all very good, but there are still some things that I wonder about.
    Let’s stick to SD PAL, 25fps, 720×576, lower fields first, to keep things simple. I am pretty sure NTSC is lower fields first too, right?

    Ok, here we go:
    1. Setting AE to interpret footage as lower fields first, whacking that into a composition and sending it straight to the render queue with output settings set to progressive (no fields), do I lose any resolution?

    I assume the answer is Yes, otherwise plugins like RE:Fieldskit and Magic bullet deinterlacer wouldn’t exist right?

    2. When it comes to Fieldskit and MB, why do they need to analyze the footage so extensively and do vector analysis and so on… Putting 50 half images into 25 full images should be pretty straightforward right?

    3. This one sort of belongs in the Premiere Pro forum, but since it’s on topic I’ll try anyway: deinterlacing inside Premiere Pro by right-clicking the footage and selecting deinterlace… I am getting the impression that it does more damage than good. Images flicker, stutter and lose resolution… Am I delusional, or should I use Dynamic Link and let AE handle it?

    Same goes for exporting still images from the Premiere Pro timeline, is it better to let AE handle that sort of thing? I usually tick the deinterlace box in the export settings and get decent results from that, but I thought I would check if someone knows better…

    Ah, btw… one final thing… Going from HDV (PAL), which is upper field first, to DV (PAL), which is lower field first: does anyone have a clever procedure for that? My images become slightly soft if I export to DV PAL from an HDV timeline, so I have now resorted to exporting in HDV to tape, setting the HDV deck to do a hardware downconvert to SD DV, and then capturing that. It gives me terrific resolution and no problems with fields… but it takes a bit of time.

    Thanks for your time fellas!

    Regards
    Tore

    Erik Pontius replied 18 years, 7 months ago 4 Members · 7 Replies
  • 7 Replies
  • Brendan Coots

    October 13, 2007 at 4:19 pm

    I don’t use Premiere so I can only answer some of your questions:

    Setting AE to interpret footage as lower fields first, whacking that into a composition and sending it straight to the render queue with output settings set to progressive (no fields), do I lose any resolution?

    By doing this you are allowing AE to do the deinterlacing. Not the end of the world, but there are better tools for that task – as you mentioned FieldsKit/Magic Bullet.

    When it comes to Fieldskit and MB, why do they need to analyze the footage so extensively and do vector analysis and so on… Putting 50 half images into 25 full images should be pretty straightforward right?

    As you probably know, fields are displayed alternately – upper field, then lower field, then upper field and so on. If you simply merged two consecutive fields to make one whole frame, you would be removing half of your footage, causing it to be half as long and play in fast motion. Therefore, the main function of deinterlacing tools is to create the missing data from nothing other than guesses and math, which isn’t exactly easy if you want very high quality results. Looking at it that way, you can see how varied the approaches and results can be, depending on the software used. After all, you are asking a piece of software to create footage for you.
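    The "create the missing data from guesses and math" idea can be shown with a toy example. This is a minimal numpy sketch of the simplest spatial approach (line interpolation, often called "bob" deinterlacing) on made-up data; it is not the algorithm any particular plugin uses:

```python
import numpy as np

# Toy 8x8 "frame". In interlaced footage each captured field holds only
# every other line, so a deinterlacer must fill in the missing lines.
original = np.arange(8 * 8, dtype=float).reshape(8, 8)
field = original[::2]                 # keep only the even lines (one field)

# Naive spatial deinterlace: rebuild the missing lines by averaging the
# neighbouring lines of the field we kept.
full = np.zeros((8, 8))
full[::2] = field                            # known lines go back in place
full[1:-1:2] = (field[:-1] + field[1:]) / 2  # missing lines = neighbour average
full[-1] = field[-1]                         # bottom edge: repeat last known line
```

    On this smooth ramp the guess happens to be exact for the interior lines; on real footage with detail and motion it is not, which is where the fancier analysis comes in.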

    As for your questions about Premiere and deinterlacing, generally speaking semi-pro apps are not the best place to be doing something that can vary so widely in quality depending on the tool used. If you are frequently battling interlacing issues, you are the ideal FieldsKit/Magic Bullet customer, and it would probably be worth every penny to you. Download a demo and see if it saves you time/money – if so, the decision is a no-brainer.

  • Tore Gresdal

    October 14, 2007 at 12:00 pm

    Thanks for your informative answer.

    I guess I should have elaborated my question a bit further. I understand the basic concepts of interlacing… but it feels like I am missing that last 10% to finally GET IT and be confident about it…

    Let me try again: (still SD, PAL, 50i)

    There are 50 half-res frames in one second. 25 of them are lower field, 25 of them are upper field. Why not display the two of them simultaneously?
    So grab one frame with the lower field and one with the upper field, put them on the screen at the same time and you have 1 full frame in full resolution.
    Do that with all the 50 half frames and you have 25 full-resolution frames with no speed problem. It’s neither sped up nor slowed down… Where is the need for vector analysis and all the other fancy stuff? Why can’t the deinterlacer grab the missing lines from the next half frame instead of making them up?

    Now, this must be wrong, or FieldsKit and Magic Bullet wouldn’t exist… but what is it that I am not getting? I don’t get the big picture… literally.

  • Erik Pontius

    October 14, 2007 at 3:12 pm

    You’ll have a big mess on your hands doing it this way.
    Imagine you have interlaced footage of a black ball on a white background. That ball moves from the left edge of the screen to the right edge. Deinterlacing in the method you describe would result in a frame of video that mixes two images that are 1/50th of a second apart, creating a jagged edge on the ball, since one half of the lines of the ball have moved forward.
    Another example would be where you have two scenes, a bright daylight scene and a dark scene, with a cut between them. Deinterlacing in the method you describe, the frame at the cut would contain half of the lines of the bright scene and half of the lines of the dark scene.
    These are the kinds of things that proper deinterlacing software tries to prevent, by analyzing the footage to determine how best to generate the missing field.
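    Erik’s moving-object example is easy to reproduce numerically. A minimal numpy sketch (toy data, not real video) that weaves two fields captured 1/50 s apart, the way the question proposes, and measures the resulting combing:

```python
import numpy as np

# Two fields of a moving black bar on a white background, captured
# 1/50 s apart: by the time the second field is scanned, the bar has
# moved 4 pixels to the right.
h, w = 8, 16
upper = np.ones((h // 2, w))
upper[:, 2:6] = 0    # bar at x = 2..5 at time t
lower = np.ones((h // 2, w))
lower[:, 6:10] = 0   # bar at x = 6..9 at time t + 1/50 s

# "Weave" the two fields into one frame.
frame = np.empty((h, w))
frame[0::2] = upper  # even lines from the earlier field
frame[1::2] = lower  # odd lines from the later field

# Adjacent lines now disagree wherever the bar moved: that mismatch is
# the jagged "combing" edge described above.
combing = np.abs(frame[0::2] - frame[1::2]).sum()
```

    For a static scene `combing` would be zero and weaving would be a perfect deinterlace; it is motion that forces the smarter tools to interpolate.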

    Erik

  • Tore Gresdal

    October 14, 2007 at 3:59 pm

    And there we have it! Thank you.

    What I didn’t think of was the little time difference between the lower and upper fields. A crucial little piece of information that I completely forgot about.

    Hehe, I feel kind of stupid now… but hey, it’s when we let go of our pride and admit our shortcomings that we really learn something.

    Hm… now it makes more sense what the salesperson said about the Panasonic AG-DVX100A… “it’s true progressive because it records a full frame and then splits it into two interlaced frames”

    BTW: Regarding interlacing… why didn’t we get rid of it when we had the chance in the transition to HD? It was originally invented because of the slow phosphors in cathode ray tubes, and that is certainly not a problem anymore.
    Is it simply a question of bandwidth, or is it something else? I’ve noticed that there is hardly a difference on my progressive and interlaced file sizes so I feel tempted to question the bandwidth theory…

    And while on the HD transition thing… why was the NTSC frame rate retained? Why not transition to the PAL frame rate of 25fps (50i), which seems much easier to deal with, at least to my inexperienced mind… PS! My apologies if I opened a can of worms here. I am asking out of pure curiosity and nothing else.

  • Darby Edelen

    October 14, 2007 at 4:07 pm

    [Tore Gresdal] “There are 50 half-res frames in one second. 25 of them are lower field, 25 of them are upper field. Why not display the two of them simultaneously?”

    An interlaced display device does not display all the fields simultaneously; that’s just the nature of an interlaced display.

    Your question is kind of like asking, “why not fly to the moon in a shopping cart?” That’s just not the way shopping carts work.

    The other important thing to realize is that if the footage was shot at 50i then there aren’t 25 frames per second, there are 50 unique fields per second. That is to say that the upper field and the lower field of a frame are not from the same moment in time, they are separated by 1/50th of a second.

    If your source was shot progressively (film, for example) and then telecined to tape, it should be PsF (progressive segmented frame), meaning that both fields in a frame are from the same moment in time (the same frame of film), and it would possibly have a pulldown (which is a whole new can of worms that I invite you to look into). However, an interlaced display is still not displaying the 2 fields at the same time; that’s just not the way they work.
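    The timing distinction Darby draws (true 50i fields are 1/50 s apart, PsF fields share an instant) can be written down as a tiny sketch; the helper function is made up here for illustration:

```python
from fractions import Fraction

FRAME_DUR = Fraction(1, 25)  # PAL: 25 frames per second
FIELD_DUR = Fraction(1, 50)  # ...so fields arrive every 1/50 s

def field_times(frame_index, interlaced=True):
    """Capture times of the (first, second) field of a frame.

    For true 50i the two fields were captured 1/50 s apart; for PsF
    (progressive segmented frame) both come from the same instant.
    """
    t = frame_index * FRAME_DUR
    return (t, t + FIELD_DUR) if interlaced else (t, t)
```

    Weaving is lossless exactly when `field_times` returns two equal values, i.e. the PsF case.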

    Darby Edelen
    DVD Menu Artist
    Left Coast Digital
    Aptos, CA

  • Tore Gresdal

    October 14, 2007 at 4:44 pm

    [Darby Edelen] “Your question is kind of like asking, ‘why not fly to the moon in a shopping cart?’ That’s just not the way shopping carts work.”

    Hey… you would be amazed at what can be turned into a space shuttle…

    Top Gear Rocket shuttle Car
    https://www.youtube.com/watch?v=Xk5M6J2zMWQ

  • Erik Pontius

    October 15, 2007 at 3:02 pm

    Bandwidth is the biggest hurdle: the current infrastructure for transmission (satellite, off-air, cable) can’t handle it. 1080p is a huge amount of data. This will of course change with time as technology improves.
    The fractional frame rates (59.94, 29.97, 23.98, etc.) are kept in ATSC HD mainly as a transitional aid for upconversions and the like. There are also integer frame rates in ATSC (60fps, 30, 24).
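    For what it’s worth, the fractional rates Erik lists are just the integer rates scaled by 1000/1001, a leftover of the NTSC colour transition, which is easy to check:

```python
from fractions import Fraction

# ATSC fractional rates = integer rates * 1000/1001
ntsc_rates = {n: Fraction(n * 1000, 1001) for n in (24, 30, 60)}

# e.g. 60 * 1000/1001 = 60000/1001, the rate usually quoted as "59.94"
approx = {n: float(r) for n, r in ntsc_rates.items()}
```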

    Erik
