Creative Communities of the World Forums

The peer to peer support community for media production professionals.


  • Native H.264 workflow in Resolve

    Posted by Paul Campbell on November 24, 2016 at 4:39 pm

    Happy Thanksgiving! I was just reading an old article from 2010 about editing native H.264 clips (in FCPX and Premiere, not Resolve obviously). The author suggests “decompressing the H.264 footage into the higher quality ProRes 422 or 444 before editing”, as it makes your workflow much easier.

    My question is the “decompressing” of H.264 footage. Sure, you can transcode H.264 into just about anything, but are you really “decompressing” it?? If the DSLR is recording footage as H.264, aren’t you squished into a corner already? Isn’t this like trying to “decompress” an mp3 into a wav?

    So, ultimately I’m wondering if Resolve would be happier with ProRes stuff that’s been transcoded from H.264. The work I’ve been doing with H.264 in Resolve so far seems to be moving along nicely, so is ProRes really necessary? Is transcoding to ProRes going to make my product look better?

    Thanks.

  • 28 Replies
  • Joseph Owens

    November 24, 2016 at 11:17 pm

    [Paul Campbell] “Is transcoding to ProRes going to make my product look better?”

    Not really. Unless you do some chroma-sampling tricks and macro-block smoothing. The whole notion behind recoding H.264 to a codec like ProRes is to make the CPU usage more efficient. H.264 is a Long-GOP format, which means that the system needs to build whole "intra" frames on-the-fly out of the I-B-P sequence. Intermediate-density codecs like ProRes and DNx are natively "intra," which means every frame is a whole frame and not a bi-directional or predictive "difference" entity. The faster you skip around in a Resolve timeline, the harder the CPU has to work in order to keep the video smooth.

    Eventually you start getting gaps and skips, black holes and decoding errors when the processing overloads the system's ability to keep up with the bitrate and, above all, do the integer-to-float conversion so that the application can do its work.
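    Joseph's Long-GOP point can be sketched with a toy model (the GOP length and frame numbers below are illustrative assumptions, not figures from any real decoder): to show an arbitrary frame, a Long-GOP decoder has to decode forward from the nearest preceding I-frame, while an all-intra codec like ProRes decodes exactly one frame.

```python
# Toy model of random-access decode cost: Long-GOP vs all-intra.
# Assumption: one I-frame every `gop` frames; every frame in between
# depends on the frames before it, back to that I-frame.

def frames_to_decode(target, gop):
    """Frames a decoder must process to display frame `target`."""
    last_keyframe = (target // gop) * gop
    return target - last_keyframe + 1

# Scrubbing to frame 100 in a 30-frame-GOP H.264 stream:
print(frames_to_decode(100, gop=30))  # 11 frames of work to show one frame

# All-intra (ProRes/DNx): every frame is a keyframe, so gop=1:
print(frames_to_decode(100, gop=1))   # 1
```

    This is why scrubbing a Long-GOP timeline feels heavier than the bitrate alone would suggest.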

    jPo

    “I always pass on free advice — it's never of any use to me.” (Oscar Wilde)

  • Toby Tomkins

    November 25, 2016 at 12:11 am

    The best conversion is 5D2RGB. It does some nice chroma filtering.

  • Paul Campbell

    November 25, 2016 at 2:49 pm

    Quite a lot to take in here! Thanks very much.

  • Paul Campbell

    November 25, 2016 at 2:49 pm

    Gracias, Toby!

  • Hector Berrebi

    November 29, 2016 at 4:44 pm

    Adding to what Joseph said, and in response to your question about the use of the word "decompressing":

    Decompressing is what the H.264 codec does. A video codec is a math function that eliminates redundant data, compressing it (the "co" part of codec); when played back, the codec's math decompresses that data frame by frame (the "dec" part of codec). This is partly why highly compressed codecs are so hardware intensive (lots of calculations).

    So, to answer your question: no. By transcoding to ProRes or DNx you are re-compressing rather than decompressing.

    However, since you are taking a highly compressed Long-GOP chain of "bi-directional or predictive 'difference' entities" and turning them into actual whole frames, one could see this as a sort of figurative decompression, even more so if you transcode the Long-GOP H.264 to some uncompressed format. Figurative, not literal ☺

    And there are quite a few H.264 variants that are not Long-GOP but I-frame-only, where re-compressing would serve a different purpose than "filling in" whole frames.

    The recorded video quality would stay the same in any case…
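    Hector's "figurative, not literal" distinction can be illustrated with a toy quantizer (the step sizes below are invented for illustration and merely stand in for a lossy encode):

```python
# Toy illustration: data discarded by a lossy encode is gone for good;
# transcoding to a "better" codec only re-wraps what survived.
# The quantization steps are made-up stand-ins for H.264/ProRes loss.

def quantize(samples, step):
    """Lossy encode: snap each sample to the nearest multiple of `step`."""
    return [round(s / step) * step for s in samples]

original  = [0, 3, 7, 12, 18, 25]
h264ish   = quantize(original, step=5)   # coarse, lossy acquisition
proresish = quantize(h264ish, step=1)    # finer "transcode" afterwards

print(h264ish)                 # [0, 5, 5, 10, 20, 25]: detail already gone
print(proresish == h264ish)    # True: the finer codec cannot restore it
```

    Transcoding to ProRes or an uncompressed format re-wraps the surviving values in a friendlier container; it cannot reinvent the detail the first encode threw away, which is exactly the mp3-to-wav analogy from the original post.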

    It has become so simple and affordable these days to shoot with a good video codec that the real issue, in my opinion, is why shooters and directors of small/medium projects keep using bad codecs, how little they know about these things, and the (lost-forever) data they sacrifice from their oh-so-important footage.

    hector

  • Paul Campbell

    November 29, 2016 at 7:51 pm

    Hector, I’ve read so many articles about codecs, and am embarrassed to admit that the subject is still so sketchy to me. When you say “shooters and directors of small/medium projects keep using bad codecs”, I don’t think I understand what my options would be. I shoot video with a Canon T5i, which I know compresses the footage onto the SD card using the H264 codec. When it comes time to edit, I simply dump the card’s footage straight to my Resolve timeline and start cutting away. Are you suggesting that I’d be better off with a different workflow? Or perhaps what I’m doing is about as good as it’s going to get?

    Thanks for bearing with me guys, I’m trying to keep up with you.

  • Hector Berrebi

    November 30, 2016 at 10:53 am

    Hi Paul

    The full answer would be too long for me to write here..

    Even the short answer is kind of long, so
    you'll excuse me for terribly simplifying and omitting details. ☺

    Yes, it's probably the best you'll ever get out of that camera and its codec.
    However, I have seen all sorts of flat, log-ish picture settings used on Canon cameras (https://vimeo.com/7256322, the video that sort of started that wave) that increased dynamic range and detail levels when well exposed. That, in turn, gives you more to work with later when grading and improves visual quality.
    Others use firmware hacks like Magic Lantern, forcing the camera to write RAW image sequences far greater in quality than what it was engineered for. I'm not sure it works on your model, but even if it does, I wouldn't trust this workflow on any paid production or on shots longer than a minute or so. I also believe it wears your camera out faster by pushing its limited hardware beyond its abilities..

    My point was about the choice to shoot on a specific format.

    Video formats can be roughly divided into three groups: 4:4:4, 4:2:2 and 4:2:0 (based on something called chroma subsampling).
    *Of course there are other factors, like color depth (8, 10 or 12 bits), type of compression, and RAW capability, but these roughly track the chroma subsampling groups: a 4:2:0 camera will usually also be 8-bit, more heavily compressed and without RAW recording, while a 4:4:4-capable camera will generally be 10-12 bit, with better compression and in many cases a RAW option.

    Traditionally they are used by different industries, for many different purposes:
    4:2:0 dominates consumer/prosumer cameras and even seeps into some pro models. It is also how virtually all web video is delivered, from smartphones to YouTube and everything in between (along with quite a few other delivery formats).
    4:2:2 is the TV and broadcast choice, as well as that of many post workflows tied to that industry (ProRes and DNx, for example).
    4:4:4 is the highest group, meant for high-end workflows: digital cinema and the acquisition of commercials, series and such.
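    For a rough sense of how much data separates the three groups, here is a quick back-of-the-envelope calculation of uncompressed frame sizes (assuming 8-bit 1920x1080; real cameras compress on top of this):

```python
# Per-frame sizes for uncompressed 8-bit 1920x1080 video under the three
# chroma subsampling schemes. Chroma sample counts: 4:4:4 keeps
# full-resolution Cb and Cr; 4:2:2 halves them horizontally; 4:2:0
# halves them both horizontally and vertically.

def frame_bytes(width, height, scheme, bytes_per_sample=1):
    luma = width * height
    chroma_fraction = {"4:4:4": 1.0, "4:2:2": 0.5, "4:2:0": 0.25}[scheme]
    chroma = 2 * luma * chroma_fraction   # Cb plane + Cr plane
    return int((luma + chroma) * bytes_per_sample)

for s in ("4:4:4", "4:2:2", "4:2:0"):
    print(s, frame_bytes(1920, 1080, s) / 1e6, "MB")
# 4:4:4 6.2208 MB | 4:2:2 4.1472 MB | 4:2:0 3.1104 MB
```

    So before any compression even starts, a 4:2:0 frame carries half the data of a 4:4:4 one, with all of the loss taken out of the color channels.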

    The past 5-7 years have mixed everything up a little and made it much easier to use cameras capable of 4:4:4 acquisition.
    From external recorders to the Blackmagic cameras to the abundance of RED/ALEXA cameras for (cheap-ish) rent, it would seem completely normal to me for productions, even small/medium ones, to choose this option, ensuring maximum data quality at acquisition and allowing better, more robust post workflows (including color work).
    Yet 4:2:0 cameras, with their inferior codecs and limited color depths, seem to thrive, and so does their use by many talented shooters and directors who are very often unaware of video chemistry and its tolls.

    Just 10 years ago it was so expensive and difficult to work in a full 10-bit 4:4:4 workflow that only big, well-resourced productions could afford it. Today it seems no production is too small (excluding family-type private events) to at least consider it.

    And since, as color people, we meet these productions at the end of the pipeline, we're also the ones hearing them whine about the final result's look compared to their expectations, or to the last GoT episode they saw.

    hector


  • Paul Campbell

    November 30, 2016 at 1:27 pm

    Ok, that might've been the best short-long, terribly simple and detail-omitting reply ever.

    Clearly, I’m a 4:2:0 guy with aspirations to continue growing in a field that has me so obsessed lately. This has been a very productive thread for me, and I really thank all of you guys for planting just enough seeds to keep pushing me forward. Now it’s time to wiki and then hit you up again later. Stay cool,

    Paul

  • Marc Wielage

    December 3, 2016 at 5:17 am

    I think the best answer is not to shoot on little cameras like DSLRs for long projects. I don’t have a problem with people who have limited resources and want to do an internet short or a student project on a camera like this, but for entire features, you really need to go to a camera that can handle 444 10-bit. Whether HD or 4K or beyond is another issue. But the devices shooting H.264 8-bit give you far more limited options in post.

    I’ve started to call 8-bit “Hate-Bit” lately, because it creates a lot of problems in color. And there’s a ton of that material that people routinely use for stock footage.
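    The arithmetic behind "Hate-Bit" is simple; the 20% figure below is just an illustrative assumption for a shot whose useful tonal range occupies a fifth of the available code values:

```python
# Why 8-bit bands under a grade: code values per channel, and how few
# survive when a narrow exposure range gets stretched in post.

def levels(bit_depth):
    """Distinct code values per channel at a given bit depth."""
    return 2 ** bit_depth

for depth in (8, 10, 12):
    print(f"{depth}-bit: {levels(depth)} steps per channel")

# A shot whose useful tonal range spans ~20% of the code values
# (illustrative assumption) keeps only this many distinct steps:
print(int(levels(8) * 0.20))    # 51 steps in 8-bit
print(int(levels(10) * 0.20))   # 204 steps in 10-bit
```

    Stretch those ~51 steps across the full output range in a grade and the gaps between them show up as visible banding; 10-bit gives you four times as many steps to hide it.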

  • Paul Campbell

    December 3, 2016 at 3:05 pm

    Got it, thanks Marc. Fortunately for me, my projects certainly fall within the scope of “internet short” and “student project”.

Page 1 of 3
