Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

Activity › Forums › Panasonic Cameras › DV100 vs DNxHD vs CFHD

  • Jason J Rodriguez

    April 9, 2005 at 6:16 pm

    Suffice it to say, when trying to push the Varicam to its maximum dynamic range, or when trying to shoot a very “flat” image for a post-correction DI that has no clipped highlights (in effect shooting a digital negative, or the equivalent of a film interpositive made from the original color negative; to protect the highlights you have to underexpose), the software codec, the way Apple chooses to decode it, exhibits artifacting that I DO NOT SEE in the HD-SDI ingest of the same footage. It’s typically a nasty, large pixel-blocking artifact. I’ve seen it, my DP has seen it, and when it’s pointed out to others they see it too. It’s plainly there, and it’s impossible to get rid of UNLESS you digitize through HD-SDI.

    I’m not saying that the hardware codec is better than the software codec; they’re the same codec. But a different decoder is at work in the Apple software than in a Panasonic deck.

    Now here’s one interesting thing. Like DV, IF you apply no effects to your footage, you can ingest via FireWire and then write back to tape via FireWire, and you basically have a clean digital copy of the footage. You can then take that same tape and digitize it via HD-SDI (but it’s now an edited program), and have a “cleaner” copy to do effects with. Not as convenient, IMHO, but it does work.

    But the point is, if you choose the DVCProHD software decoder from the start, that’s the problem. IMHO, the HD-SDI hardware decoder in the Panasonic decks makes a “cleaner” (not necessarily more accurate to the actual information on the tape) image than what I get off FireWire.

    BTW, one other thing you missed.

    If you export DVCProHD to After Effects, render to a new “uncompressed” codec, and import back into Final Cut, and then want to go back to tape, you’re either going to have to render back to the DVCProHD codec (another generation loss), or you’re going to have to render out your whole timeline to some other codec (which can take some time).

    I think in the end my argument has been pretty self-explanatory: HD-SDI looks better than what’s coming off FireWire, plain and simple. I don’t know EXACTLY why, but there’s a reason the top post houses in America and around the world aren’t ditching all their gear for FireWire ingest. If it were so much better they would, but it’s not, and it has some definite trade-offs compared to the higher-quality input you get over HD-SDI. The footage looks cleaner, there’s less artifacting, the noise looks more “natural” rather than blocky, there’s less banding in the gradients, etc. I’m sure some form of dithering is happening to create this phenomenon, dithering that is not occurring in the Apple software decode, so that Apple’s codec remains more faithful to the “original” data on the tape. But that dithering, or whatever is happening inside that deck over HD-SDI, is making the footage look much “better,” “cleaner,” less “digital” and more “natural” (in the sense of less digital artifacting) than what I see from the same footage off FireWire.
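    The dithering hunch above can be illustrated with a toy model (a hypothetical NumPy sketch, not Panasonic’s actual SDI pipeline): quantize a smooth gradient to a coarse set of levels with and without dither, then compare the error after a mild blur that stands in for the eye averaging neighbouring pixels.

```python
import numpy as np

# Toy model of why a dithered decode can show smoother gradients
# than a straight truncation. All numbers here are illustrative.
rng = np.random.default_rng(0)
gradient = np.linspace(0.0, 1.0, 10000)

levels = 16  # deliberately coarse so the banding is obvious

# Straight quantization: the gradient collapses into visible steps.
banded = np.round(gradient * (levels - 1)) / (levels - 1)

# Dithered quantization: add +/- half a step of noise before
# rounding, trading the steps for fine noise.
noise = rng.uniform(-0.5, 0.5, gradient.shape) / (levels - 1)
dithered = np.round((gradient + noise) * (levels - 1)) / (levels - 1)

# A mild blur stands in for the eye averaging neighbouring pixels:
# the dithered version then tracks the true gradient far more closely.
kernel = np.ones(101) / 101
blur = lambda x: np.convolve(x, kernel, mode="same")
band_err = np.abs(blur(banded) - gradient)[200:-200].mean()
dith_err = np.abs(blur(dithered) - gradient)[200:-200].mean()
print(f"error after blur, plain quantize: {band_err:.5f}")
print(f"error after blur, dithered:       {dith_err:.5f}")
```

    The dithered ramp trades visible steps for fine noise, which is exactly the “less banding, more natural-looking noise” trade described above.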

    Jason Rodriguez
    Virginia Beach, VA

  • Luis Caffesse

    April 9, 2005 at 7:20 pm

    [Jason J Rodriguez] “But the point is, if you choose the DVCProHD software decoder from the start, that’s the problem. IMHO, the HD-SDI hardware decoder in the Panasonic decks makes a “cleaner” (not necessarily more accurate to the actual information on the tape) image than what I get off firewire.”

    However, when it comes to P2, we are not capturing at all; we are bringing the clips in directly from our P2 cards.
    By doing that, are we using the same software decoder that you are saying is the problem, or are we sidestepping the entire issue?

    [Jason J Rodriguez] “If you export DVCProHD to After Effects, render to a new “uncompressed” codec, and import back into Final Cut, and then want to go back to tape, you’re either going to have to render back to the DVCProHD codec (another generation loss), or you’re going to have to render out your whole timeline to some other codec (which can take some time).”

    If we are bringing clips in directly from P2 cards, and we edit in FCP, pull the timeline into AE using Automatic Duck, and render out from there, then that is only one generation of loss. Why not render out of AE with the DVCProHD codec to lay back to tape? Or, for that matter, render out of AE with whatever codec you want to lay back to tape?

    Rendering out of AE is the only time in this workflow that I can see that the footage is being recompressed.
    If the point of using the CFHD codec as an intermediate codec is that it holds up better to multiple compressions, then I don’t think it’s really necessary if you have a proper workflow using direct-to-disk or solid-state recording. Ideally your footage should only need to be recompressed once.

    This may be a good idea for tape-originated material, but I still fail to see the advantage when shooting direct-to-disk or solid state.

    Luis Caffesse
    Studio 3 Productions, Inc.
    Austin, Texas

  • Graeme Nattress

    April 9, 2005 at 8:57 pm

    Luis, you’re spot on right there. Only one recompression at worst in that workflow, and seriously, even multiply compressed DVCProHD looks so much better than broadcast HDTV that nobody would ever know anyway.

    The DV revolution was based around a couple of things: cheap, high-quality cameras that were a big leap in quality over the previous analogue models; a tape format/codec that at least equalled the old analogue broadcast standards; and native editing. Of these, it was native editing that brought the computer costs for NLE down to an affordable level.

    Graeme

    http://www.nattress.com – Film Effects for FCP

  • Graeme Nattress

    April 9, 2005 at 9:05 pm

    “I’m not saying that the hardware codec is better than the software codec, they’re the same codec. But there is a different decoder happening in the Apple software than in a Panasonic deck.”

    But you say that the hardware decode looks cleaner etc., and hence it must be better, right? Yet above you’re saying that the software codec is as good as the hardware codec.

    I don’t have a lot of DVCProHD footage on my system – only what clients have sent me so that I can develop plugins that work with it – but I’ve not seen the problems you mention other than inside FCP. I don’t have a DVCProHD deck, and the whole idea of P2 is that I don’t need one. I view my footage out of the Decklink HD card on an SD monitor, a DLP projector and a 23″ Cinema Display over SDI and HDLink, and none of these display methods shows the artifacts that FCP does internally.

    As I say, I’ve had the very same discussions over the DV codec (hardware vs. software), and I managed to prove to myself quite clearly that any differences are utterly negligible, as I mentioned in an above post. If you can send me similar DVCProHD-over-FireWire vs. DVCProHD-over-HD-SDI footage, I’d be very happy to take a look and see if I can see what you are seeing, and therefore be able to put my mind to work on solving any problems with the FireWire/Apple codec route, if there are any. I’ve got a feeling that anything you’re seeing is probably due to scaling issues from 960 to 1280, or 1280 to 1920, that could be solved, in that FCP does pretty poor scaling while the deck’s SDI output does nice scaling.

    Graeme

    http://www.nattress.com – Film Effects for FCP

  • Graeme Nattress

    April 9, 2005 at 9:40 pm

    I fully understand that electronic filtering is used to remove interlace twitter by summing line pairs. However, I doubt that alias removal using a DSP is trivial, and do cameras usually have enough resolution that you could generate an aliased image on the CCD, remove the aliasing with electronic filters, and still have enough resolution left over not to look blurred? Or do they? Surely the best way to remove aliasing is with the lens or a filter block ahead of the CCD, rather than mucking around with a DSP? I understand the mathematics of sampling theory, aliasing etc., but I’m lacking the detailed knowledge of what goes on inside the camera to take this line of argument further. Does anyone know exactly what’s going on, or can anyone have some luck with Google and turn up some papers?
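    The sampling-theory point can be made concrete with a tiny numerical sketch (purely illustrative, nothing camera-specific): detail above the Nyquist frequency folds down to a false low frequency at the moment of sampling, and afterwards no DSP can tell the alias apart from genuine detail at that frequency, which is why the filtering has to happen optically, before the sensor samples.

```python
import numpy as np

# Detail above Nyquist folds down to a false low frequency when
# sampled; the samples are then indistinguishable from genuine
# low-frequency detail, so post-sampling filtering cannot fix it.
fs = 100.0            # toy "sensor" sampling rate
nyquist = fs / 2.0    # 50.0
f_detail = 70.0       # fine detail above Nyquist

t = np.arange(200) / fs
sampled = np.sin(2 * np.pi * f_detail * t)

# The alias lands at |f_detail - fs| = 30, below Nyquist.
f_alias = abs(f_detail - fs)
alias = np.sin(2 * np.pi * f_alias * t)

# sin(2*pi*70*n/100) equals -sin(2*pi*30*n/100) at every sample.
print(np.allclose(sampled, -alias, atol=1e-9))  # True
```

    An optical low-pass filter ahead of the CCD removes the detail at f_detail before sampling, so nothing folds down in the first place.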

    Graeme

    http://www.nattress.com – Film Effects for FCP

  • Jason J Rodriguez

    April 10, 2005 at 6:19 am

    The different decoders (software and hardware) have access to the same data. That’s what I’m saying: not that the two are the same, because obviously they’re not.

    For instance, I prefer the look of Avid’s DV codec to Apple’s. It’s the same RAW data being decoded, but the actual software algorithms used to render a visible image on the screen are not the same, and one produces a more pleasing image to the eye. THAT’s what I’m saying.

    The hardware decoder in the Panasonic deck looks better (and keys better, takes extreme color correction better, etc.) than the software decoder in QuickTime. The only way to get access to the hardware decoder is through HD-SDI. If you’re going to work with the HD-SDI signal, then good, visually lossless intermediate codecs are a very nice thing.

    And Luis, with P2 you are not sidestepping the issue, since you are still going to be using Apple QuickTime’s software decoder to decode the RAW information on the P2 card.

    The DVCProHD encoder dumps data into a format, whether that’s a data stream on tape or an MXF file on a P2 card. In order to see that footage on your computer screen, out your monitor, etc., it must be decoded. From the artifacting I’ve seen, I think that Apple’s implementation, whether it’s more “accurate” or not, doesn’t look that good and is prone to banding, etc.

    For instance, if you have the demo DVCProHD footage, try to alter the color in those sunset shots and watch all the banding and noise appear in the sky. Or try to raise the blacks on the “Presidio” shot to make it less contrasty. Or try saturating some of those shots, bumping the contrast, etc. You’re going to discover banding and other digital artifacts that I personally don’t think look good.
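    The banding has a simple arithmetic cause, which a short sketch can show (hypothetical 8-bit code values, not the actual demo footage): lifting the blacks stretches a handful of shadow codes across a wider output range, and the gaps between the surviving codes read as bands.

```python
import numpy as np

# Toy model of banding from a "raise the blacks" correction on
# 8-bit material. The shadow values and the 4x lift are invented
# for illustration.
shadow = np.arange(0, 33, dtype=np.uint8)   # dark 8-bit ramp, codes 0..32
shadow = np.repeat(shadow, 8)               # a small gradient strip

# A strong lift: map codes 0..32 up to roughly 0..128.
lifted = np.clip(shadow.astype(np.float64) * 4.0, 0, 255).astype(np.uint8)

# The lifted ramp still has only 33 distinct values, but now spaced
# 4 codes apart -- those gaps are the bands you see in the sky.
print("distinct input levels: ", np.unique(shadow).size)   # 33
print("distinct output levels:", np.unique(lifted).size)   # 33
print("largest gap between output codes:",
      np.diff(np.unique(lifted)).max())                    # 4
```

    A higher-precision decode path, or dithering applied during the correction, would fill those gaps, which is one plausible reason the same grade can hold up better in other pipelines.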

    Again, it’s NOT the information that panasonic has encoded onto the P2 card that is at fault. It’s the way the file is being decoded.

    Decoders CAN get better. Just look at the DV codec’s quality jump from QT 4.0 to 5.0. And if the DV codec’s decoder (not encoder) were so great right now, why was an “improved” DV codec one of the top feature requests on the LAFCPUG’s Final Cut requests list?

    So again, I’m not pointing fault at DVCProHD itself, as in the RAW information held in the binary format after encoding. I’m pointing fault at the artifacts from Apple’s decoder, which to me just don’t look good. For top-quality work out of the Varicam, I’d rather use the decoder present in Panasonic’s hardware decks and digitize over HD-SDI than use the software QuickTime DVCProHD decoder. The only downside of the HD-SDI route is that it requires big files and lots of bandwidth to move around, so a visually lossless intermediate codec would be a nice thing to use with it.

    Maybe Apple will make a better decoder, or should I say a “better looking” decoder, and then this argument will be moot.

  • Toke

    April 10, 2005 at 11:47 am

    With progressive pictures there are of course far fewer problems with aliasing than with interlaced. But if the CCDs don’t have the same resolution as what is being recorded, there will be scaling that leads to aliasing if it’s not low-pass filtered. The same goes for pixel shift or single-CCD de-Bayering.

    I’m not a camera DSP engineer either; I only use these cameras and look at what I get with an analytical eye. But I also like to keep my technical understanding at a logical level to better use these tools. So I don’t know exactly how things happen, but I know why they happen and I see the result.

    I still don’t think it is wise to leave low-pass filtering to the lens on interchangeable-lens cameras, because then you might get into trouble if you happen to use a “too good” lens.

  • Graeme Nattress

    April 10, 2005 at 1:43 pm

    Ah, but the Avid DV codec doesn’t stand up as well to generational loss as the Apple one! The difference is that it smooths the chroma so that it looks better on screen, but this will “bleed” over generations and the colour will leak out. The luma is also more heavily filtered (producing fewer mosquito artifacts by not letting as high a resolution through), so with the Apple codec you must pre-filter any graphics or high-resolution elements that you add yourself.

    When you decode DVCProHD over SDI, you’re not just decompressing the video; you’re also scaling it up from 960×720 to 1280×720, and the chroma, which is 4:2:2 at the compressed resolution, gets scaled up too, to be 4:2:2 at the uncompressed resolution. As we know, the scaling in FCP/QuickTime is rather poor, so if you’re viewing the footage stretched out to a proper 16:9 resolution, you’re seeing Apple scaling artifacts. But when you render in FCP, it uncompresses to 960×720 without scaling, the effects are done, and it’s recompressed, all without scaling/unscaling. Of course, this means that if you do effects in FCP, no scaling occurs until playback, and all is OK. I think this is also why people hate AIC so much: when you convert AIC to uncompressed you see all these scaling artifacts.
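    The scaling point lends itself to a quick 1-D sketch (a toy comparison, not FCP’s or the deck’s actual filters): resampling some horizontal detail from 960 to 1280 samples with a crude nearest-neighbour pick lands measurably further from the ideal result than even simple linear interpolation.

```python
import numpy as np

# Toy 960 -> 1280 horizontal upscale (the DVCProHD 720p raster
# stretch), comparing a crude and a filtered resampler against a
# signal whose true values we can compute at any position.
src_w, dst_w = 960, 1280
x_src = np.linspace(0.0, 1.0, src_w)
x_dst = np.linspace(0.0, 1.0, dst_w)

signal = np.sin(2 * np.pi * 40 * x_src)   # some horizontal detail
truth = np.sin(2 * np.pi * 40 * x_dst)    # what an ideal scaler gives

# Nearest-neighbour: pick the closest source sample (blocky, crude).
idx = np.clip(np.round(x_dst * (src_w - 1)).astype(int), 0, src_w - 1)
nearest = signal[idx]

# Linear interpolation: a simple filtered resample.
linear = np.interp(x_dst, x_src, signal)

err_nearest = np.abs(nearest - truth).mean()
err_linear = np.abs(linear - truth).mean()
print(f"mean error, nearest: {err_nearest:.5f}")
print(f"mean error, linear:  {err_linear:.5f}")
```

    Real scalers use longer filters (e.g. Lanczos) and do better still, so interpolator quality alone could plausibly account for a visible software-versus-deck difference.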

    Now, if I were compositing in Shake, I’d set it up so that all the compositing goes on with the unscaled footage, but with the viewer showing it all at its proper aspect ratio. Can you do this in AE, or is it square pixels only for everything?

    Graeme

    http://www.nattress.com – Film Effects for FCP

  • Toke

    April 10, 2005 at 2:22 pm

    The difference is that a consumer watching HDTV does not (hopefully) do any color correction. And when I do something important that can have a long lifespan, it’s good to know that it will also look good in the next generation of distribution formats.

    If you want to see the image, it has to be decoded, no matter whether it comes from tape or P2.

    A simple example with a DVCProHD workflow:
    1) get the material (from tape or P2, doesn’t matter) onto disk.
    2) edit with FCP (let’s say with (cross)fades somewhere)
    3) color correction or compositing or other FX in AE
    4) adding titles to the final edit in FCP
    5) master to DVCProHD

    Alt 1:
    DVCProHD codec all the way: decoding in steps 2, 3 and 4, and encoding in steps 2, 3 and 4.
    Master quality is decreased.

    Alt 2:
    uncompressed DI: decoding in step 2 and encoding in step 4.
    Uses lots of disk space. Master quality is highest.

    Alt 3:
    visually lossless DI: decoding in step 2 and encoding in step 4.
    Uses less disk space than Alt 2, but master quality seems the same as Alt 2.
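    The alternatives above can be mimicked with a toy simulation. The “codec” below is just quantization plus a slight smoothing pass standing in for per-generation processing loss; real DVCProHD is DCT-based, so only the trend carries over: error grows with each encode generation, which is what the DI workflows avoid.

```python
import numpy as np

# Toy generation-loss comparison: one lossy encode versus one per
# workflow step. Everything here is a stand-in for a real codec.
rng = np.random.default_rng(1)
frame = rng.random(10000)  # stand-in for image data in 0..1

def lossy_encode_decode(x, levels=64):
    """Quantize, plus a tiny smoothing pass standing in for the
    filtering/processing that happens between generations."""
    q = np.round(x * (levels - 1)) / (levels - 1)
    kernel = np.array([0.05, 0.9, 0.05])
    return np.convolve(q, kernel, mode="same")

# One generation: encode once at the end (DI-style workflow).
one_gen = lossy_encode_decode(frame)

# Three generations: re-encode at the edit, FX and titling steps.
multi_gen = frame
for _ in range(3):
    multi_gen = lossy_encode_decode(multi_gen)

err_one = np.abs(one_gen - frame).mean()
err_multi = np.abs(multi_gen - frame).mean()
print(f"error after 1 generation:  {err_one:.5f}")
print(f"error after 3 generations: {err_multi:.5f}")
```

    Pure requantization at the same levels would be nearly lossless on a repeat pass; it is the processing between encodes that makes each extra generation cost something, which is why cutting the count from three to one matters.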

  • Graeme Nattress

    April 10, 2005 at 3:28 pm

    What about: edit in FCP, send over to AE to do the dissolves, CC and titles, and then output back through FCP? Only one DVCProHD generation.

    Graeme

    http://www.nattress.com – Film Effects for FCP

