Creative Communities of the World Forums

The peer to peer support community for media production professionals.

Forums › Panasonic Cameras › DV100 vs DNxHD vs CFHD

  • Toke

    April 10, 2005 at 3:33 pm

Try editing, say, a one-hour TV program in AE with about 500 cuts, fades, and dissolves.
There is a reason compositing and editing applications are separate.
Many times with TV programs you get the titles after the primary edit has been done,
e.g. the titles are made while the program is in color correction.

  • Graeme Nattress

    April 10, 2005 at 3:47 pm

Output out of AE as uncompressed then, or if that takes up too much space, use PhotoJPEG at 75%. No need to pay money for an intermediate codec. Doesn’t Automatic Duck translate dissolves across from FCP to AE? I think the bigger issue is Jason’s point about Apple’s implementation of the DVCProHD codec. I don’t have enough DVCProHD footage, or any transferred over SDI, to see exactly what he’s getting at, but it’s bound to become important at some point. I think I’d better get to work on some magic de-artifacting and de-banding algorithms… and also on the code I’m working on to improve the conversion of 8-bit video to 10 bits or more. As people in Yorkshire say, “where there’s muck, there’s brass”.
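Graeme doesn’t describe his 8-bit-to-10-bit approach here, so as background only, here is a minimal Python sketch of the two naive conversions any smarter de-banding code would have to improve on (function names are illustrative, not from any real codec):

```python
# Hedged sketch: two textbook ways to expand 8-bit video levels to 10-bit.
def shift_expand(v8):
    # naive: left-shift by 2; full white (255) only reaches 1020, not 1023
    return v8 << 2

def replicate_expand(v8):
    # bit replication: copy the top 2 bits into the low bits,
    # mapping 0 -> 0 and 255 -> 1023 exactly
    return (v8 << 2) | (v8 >> 6)

print(shift_expand(255))      # 1020
print(replicate_expand(255))  # 1023
```

Either way, no new tonal information is created; real de-banding has to interpolate the missing levels from neighboring pixels, which is presumably where the clever algorithms come in.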

    Graeme

    http://www.nattress.com – Film Effects for FCP

  • Toke

    April 10, 2005 at 4:00 pm

PhotoJPEG is also a good alternative to a DI codec.
Some are just better.

It’s a bit funny/sad (reader’s choice) that we are discussing decoding quality here,
when the problem is really on the encoding side (in the camera).

Just 2 (two) more bits of color info and there would be no worries with the DV codec.
But no, still no 10-bit DV codec, even after a decade…

Maybe quality isn’t in fashion.
Hell, they wouldn’t go to the moon today if it weren’t economical or profitable enough…

  • Luis Caffesse

    April 10, 2005 at 4:08 pm

[Graeme Nattress] “Doesn’t Automatic Duck translate dissolves across from FCP to AE?”

It sure does.
So, in the end, if you plan your workflow carefully, there is no reason you should have to render
out your footage more than once.

    Edit in FCP, use Automatic Duck to import the timeline to AE, finish in AE (color correction, titles),
    and only then render out to whatever you need.

    Luis Caffesse
    Studio 3 Productions, Inc.
    Austin, Texas

  • Graeme Nattress

    April 10, 2005 at 4:08 pm

With perceptual encoding, I’d go straight to 16-bit RGB with no chroma resolution reduction. With perceptual coding, you’d only use more than 8 bits when it’s needed, up to the max of 16, as a lot of the >8-bit data would just be noise. Fully raw is nice, but you need to correct it before playback or it just looks out of whack. I see something in between – lightly compressed 16-bit R’G’B’ video.
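The “only use more than 8 bits when it’s needed” idea can be sketched simply: pick the fewest bits whose quantization step is finer than the noise floor, since anything below the noise is wasted precision. A toy Python illustration (not any real codec’s logic; the function and thresholds are invented for the example):

```python
def bits_needed(signal_range, noise_sigma, base_bits=8, max_bits=16):
    """Pick the fewest bits such that the quantization step size
    falls below the noise floor (toy 'dynamic bit depth' rule)."""
    for bits in range(base_bits, max_bits + 1):
        step = signal_range / (2 ** bits)  # quantizer step at this depth
        if step <= noise_sigma:            # finer than the noise: good enough
            return bits
    return max_bits

# noisy footage: 8 bits already bury the extra detail in noise
print(bits_needed(1.0, 1 / 200))    # 8
# very clean footage: needs much finer quantization
print(bits_needed(1.0, 1 / 20000))  # 15
```

A real encoder would estimate the noise per frame or per block rather than take it as a parameter, but the selection rule would be the same in spirit.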

    Graeme

    http://www.nattress.com – Film Effects for FCP

  • Jason J rodriguez

    April 10, 2005 at 6:53 pm

[Graeme Nattress] Of course, this means that if you do effects in FCP, then no scaling occurs until playback, and all is OK. I think this is also why people hate AIC so much, as when you convert AIC to uncompressed you see all these scaling artifacts.

Now, if I were compositing in Shake, I’d set it up so that all the compositing happens on the unscaled footage, with the viewer set so that I can see it all at its proper aspect ratio. Can you do this in AE, or is it square pixels only for everything?

    BINGO!

Actually, once you convert the footage to work in RGB, you can’t undo the scaling; you are stuck with the poor decoding of Apple’s QuickTime DVCProHD decoder, and you have hit precisely on my argument. This happens in Shake, After Effects, QuickTime Player, Combustion, etc. Anything outside of FCP that is not YUV-native, or that can’t process DVCProHD natively, is going to resort to the RGB scaler of the QuickTime codec with all of its faults. And then when you render to an uncompressed codec out of those applications, you have done irreversible damage to the footage, in that the scaling and color artifacts are now permanently “flattened” into the image; you can’t get rid of them, and they end up on whatever tape format you go back to for delivery or mastering.

Again, go look at Marco’s page on DVCProHD at onerivermedia.com. It doesn’t look that pretty, and that’s what you get in Shake, After Effects, Combustion, or any other non-FCP application that uses the QuickTime DVCProHD-to-RGB decoder and can’t natively process DVCProHD.

BTW, since Shake is not DVCProHD-native, even if you import the footage and scale it down to 960×720, it’s too late, because you’re now going through the QuickTime RGB decoder engine: you’re taking the raw 960×720, scaling it out to 1280×720 (on import into Shake), squishing the poorly scaled material back down to 960×720, and then adding a viewer control to “preview” it back at 1280×720. Basically you’re not helping anything. Once you’re out of FCP-land and a native DVCProHD YUV timeline, you’re screwed, because these other programs only know how to deal with RGB or floating-point RGB data (in the case of Shake), so any compressed footage that is imported is decompressed to full-raster 4:4:4 RGB. They do not do any of their operations in the native DVCProHD pre-filtered YUV color space. I know this for a fact (Shake included).
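The double-scaling point can be demonstrated numerically: once a sharp 960-pixel row has been stretched to 1280 and squeezed back, the original samples are gone. A minimal Python sketch, using linear interpolation as a crude stand-in for the QuickTime RGB scaler (the real scaler’s kernel is not public, but any interpolating resampler shows the same irreversibility):

```python
def resample_linear(row, new_len):
    """Linear-interpolation resampler (toy stand-in for a codec's scaler)."""
    old_len = len(row)
    out = []
    for i in range(new_len):
        x = i * (old_len - 1) / (new_len - 1)  # source position for output pixel i
        lo = int(x)
        hi = min(lo + 1, old_len - 1)
        frac = x - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

# a hard edge in a 960-sample DVCProHD-width row
row = [0.0] * 480 + [1.0] * 480
# decode out to 1280 (full raster), then try to squeeze back to 960
round_trip = resample_linear(resample_linear(row, 1280), 960)
err = max(abs(a - b) for a, b in zip(row, round_trip))
print(err > 0)  # the edge is softened: the original row is not recovered
```

The round trip never restores the original values at the edge, which is exactly why rendering uncompressed out of an RGB-only application bakes the scaling artifacts in.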

    Jason Rodriguez
    Virginia Beach, VA

  • Jason J rodriguez

    April 10, 2005 at 7:00 pm

All these cameras have a light low-pass filter in front of the CCDs, even the 100K CineAlta, in order to prevent aliasing. They then use detail circuitry to overcome the “blurring” from the filter, although the filter is very light (in comparison to a Bayer camera), only enough to prevent horrible aliasing beyond Nyquist. Depending on the manufacturer, some will allow more aliasing than others to get a “sharper” image or higher perceived MTF, while others will filter the image further in DSP to prevent aliasing (I believe Sony allows some aliasing to occur, while Panasonic tries to filter as much as possible).
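The Nyquist argument can be shown with a few lines of code: a frequency above half the sample rate is, after sampling, indistinguishable from a lower-frequency alias, which is why the optical low-pass filter has to remove it before the sensor sees it. A small Python illustration (a 1-D tone standing in for spatial detail; the frequencies are arbitrary):

```python
import math

fs = 10.0               # sample rate; Nyquist limit is fs/2 = 5
f_real = 7.0            # detail frequency above Nyquist
f_alias = fs - f_real   # 3: where the energy folds down to after sampling

samples_real = [math.sin(2 * math.pi * f_real * n / fs) for n in range(20)]
samples_alias = [-math.sin(2 * math.pi * f_alias * n / fs) for n in range(20)]

# the 7-unit tone is indistinguishable from a (phase-flipped) 3-unit tone
print(all(abs(a - b) < 1e-9 for a, b in zip(samples_real, samples_alias)))
```

Once sampled, no DSP can tell the two apart; the only choices are to filter optically beforehand (Panasonic’s reported approach) or accept some aliasing in exchange for perceived sharpness.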

    Jason Rodriguez
    Virginia Beach, VA

  • Toke

    April 10, 2005 at 9:48 pm

Well, that sounds groundbreakingly innovative!
Dynamic color depth!
So every frame is checked and only the real depth above the noise is saved.
Are there any codecs already using this?

So in the future there might be an “auto color depth” button 🙂
This would also lead to more efficient compression: when you are shooting
in very dark places there could also be less than 8 bits of color depth…

  • Graeme Nattress

    April 11, 2005 at 1:53 am

I have an idea that the Digital Betacam codec might work something like this for bit depth, but I don’t really have any evidence for that; I may have read about such an idea in conjunction with that format at some point.

    Graeme

    http://www.nattress.com – Film Effects for FCP

  • Graeme Nattress

    April 11, 2005 at 2:40 pm

    Thanks. I’m going to do some more investigation and see what I come up with….

    Graeme

    http://www.nattress.com – Film Effects for FCP

