Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

Forums › Adobe Premiere Pro › DSLR: edit native or ProRes, with intent to export ProRes as final digital master on export?

  • Matt Campbell

    April 16, 2013 at 2:19 pm

    Thanks Tim. But those were my old specs. I’m currently running:

    OS X 10.7.5 with a 3.39 GHz Intel Core i7 on a built-up Hackintosh
    16 GB of RAM with OS X on SSD, two internal HDDs in RAID 1 for project files, and an external RAID 5 for all project assets (media, GFX, stills, etc.)

    Thanks for your comments. With my small, quick interview-type projects, I plan to go native throughout and export my archival master to ProRes.

    For more critical work, I’ll build load time into jobs and compare native to my tried-and-true FCP method of transcoding up front. ProRes has been flawless for me, and other than the increased file sizes, it’s great.

    Looking forward to putting PPro through the wringer. So far so good, and I’m glad I went this route vs. Avid. Which is great too, don’t get me wrong, but for the speedy workflow and its resemblance to FCP, I’ll take Premiere.

    OS X 10.7.5, Hackintosh, 1x 3.66 GHz 8-core Intel Xeon, 16 GB RAM

  • Ivan Myles

    April 16, 2013 at 6:16 pm

    The concern is not with sampling so much as color space. YCC allows luma and chroma values outside the RGB boundaries. Upon transcoding to DNxHD, any values beyond 0-100 IRE will be remapped to the RGB limits. The result is a loss of detail that cannot be recovered during color correction. It is worth checking whether ProRes exhibits the same behavior.

    Here is a sample H.264 source file with a very bright sky. The photo shows the mid-right portion of the shot. Note how the Y-curve extends beyond 100 IRE.

    When transcoding to DNxHD, PNG, JPEG2k, “None” or other non-YUV codecs, luma and chroma values are truncated at the RGB limits.

    The details are lost and cannot be recovered. Here is the transcoded DNxHD clip with output levels set to 16-235 using the Fast Color Corrector.

    By comparison, here is a 4:2:2 YUV intermediate file with output levels set to 16-235. Note the Y and RGB values between 90-100.
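
    The clipping described above can be sketched in a few lines of Python. This is purely illustrative (my own example, not Premiere’s actual pipeline), assuming the standard BT.709 video-range conversion matrix: a superwhite pixel and a legal white pixel collapse to the same RGB value, so the highlight detail is unrecoverable.

```python
# Illustrative sketch of why out-of-range Y'CbCr values are lost when
# converting to an RGB-bound codec. Assumes the BT.709 video-range matrix;
# not a description of Premiere's internal processing.
def ycbcr_to_rgb(y, cb, cr):
    """Video-range (16-235) Y'CbCr -> full-range 8-bit RGB, clipped."""
    r = 1.164 * (y - 16) + 1.793 * (cr - 128)
    g = 1.164 * (y - 16) - 0.213 * (cb - 128) - 0.533 * (cr - 128)
    b = 1.164 * (y - 16) + 2.112 * (cb - 128)
    clip = lambda v: max(0, min(255, round(v)))  # RGB cannot hold out-of-range values
    return clip(r), clip(g), clip(b)

def rgb_to_luma(r, g, b):
    """Full-range RGB back to video-range luma (BT.709 weights)."""
    return round(16 + 219 * (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255)

# A legal white (Y=235) and a superwhite (Y=245) land on the same RGB triple:
legal = ycbcr_to_rgb(235, 128, 128)       # (255, 255, 255)
superwhite = ycbcr_to_rgb(245, 128, 128)  # (255, 255, 255)
print(legal == superwhite)                # True: the two pixels are now identical
print(rgb_to_luma(*superwhite))           # 235, not the original 245
```

    Once both pixels are (255, 255, 255), no color correction applied after the transcode can tell them apart, which is exactly the unrecoverable loss described above.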

  • Kevin Duffey

    May 2, 2013 at 4:18 pm

    Interesting about the loss of detail when transcoding. Seems wrong that transcoding a lesser (in my opinion) format like h264 would lose detail when going to an intermediate editing format like DNxHD.

    So two questions. First, how do you preserve this if you still want to transcode (and adding to that.. if you did edit the source, but then export to DNxHD at the end, I assume you’ll still lose the detail so how do you preserve it there)?

    Second, cameras like BM and my Shuttle 2 record in DNxHD directly. Do they lose the detail from the start, or is it only the transcoding process that does this? Again I can’t imagine why the format would continue to be used and popular if it’s causing the loss of detail.

  • Ivan Myles

    May 2, 2013 at 7:36 pm

    [Kevin Duffey] “First, how do you preserve this if you still want to transcode (and adding to that.. if you did edit the source, but then export to DNxHD at the end, I assume you’ll still lose the detail so how do you preserve it there)?”

    Source footage should be checked using the YC and RGB Parade scopes. Note the vertical bars on the right side of each chart in the images above. These bars show the full range of values in each frame. If the range extends beyond 0-100 IRE, color correction effects such as Fast Color Corrector or Luma Corrector can be used to bring the clips within 0-100 IRE. Even if the Y-channel is within range, an individual RGB channel might be out of range. Further reduce luma until every RGB channel lies within 0-100 IRE.
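
    The remap performed by an output-levels control can be sketched as a simple linear scale. This is a rough illustration of the idea (the function and approach are my own assumptions, not Premiere internals): instead of clipping out-of-range values, the observed range is compressed into 16-235 so relative detail survives the transcode.

```python
# Rough sketch of a linear "output levels" remap, similar in spirit to setting
# output levels to 16-235 in a color corrector before transcoding. Illustrative
# only; the function name and approach are assumptions, not Premiere internals.
def legalize_luma(samples, lo=16, hi=235):
    """Linearly remap the observed luma range into the legal 16-235 range,
    compressing out-of-range values instead of clipping them off."""
    y_min, y_max = min(samples), max(samples)
    scale = (hi - lo) / (y_max - y_min)
    return [round(lo + (y - y_min) * scale) for y in samples]

# Full-range samples (0-255) squeezed into the legal range; ordering of
# detail is preserved rather than truncated:
print(legalize_luma([0, 128, 235, 255]))  # [16, 126, 218, 235]
```

    After a remap like this, a subsequent conversion to an RGB-bound codec no longer has anything to clip, which is why checking the scopes before transcoding matters.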

    [Kevin Duffey] “Second, cameras like BM and my Shuttle 2 record in DNxHD directly. Do they lose the detail from the start, or is it only the transcoding process that does this? Again I can’t imagine why the format would continue to be used and popular if it’s causing the loss of detail.”

    The problem relates to how the software interprets the color space information in the source footage. In general, 8-bit Y’CbCr color using 220 increments (i.e. 16-235) should be displayed as 0-100 IRE. Sometimes YCC footage appears in Premiere Pro as -7.5 to 109 IRE. I’m not sure if this footage was actually recorded using 256 increments and then incorrectly overblown by Premiere Pro, or if there is a different root cause. Either way, it’s important to use the scopes and check for issues prior to encoding or transcoding.
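
    The mapping described here (16 → 0 IRE, 235 → 100 IRE) can be written as a one-line formula; under it, full-range 0-255 footage lands at roughly -7.3 to 109 IRE, close to the -7.5 to 109 readings mentioned above. A small sketch of my own, not a quote from any spec:

```python
# Mapping 8-bit luma code values to IRE under the video-range convention
# described in the post: code 16 displays as 0 IRE, code 235 as 100 IRE.
def code_to_ire(y):
    return (y - 16) * 100.0 / 219.0

print(code_to_ire(16))             # 0.0
print(code_to_ire(235))            # 100.0
print(round(code_to_ire(0), 1))    # -7.3  (full-range black)
print(round(code_to_ire(255), 1))  # 109.1 (full-range white)
```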

    [Kevin Duffey] “Seems wrong that transcoding a lesser (in my opinion) format like h264 would lose detail when going to an intermediate editing format like DNxHD.”

    The problem is based on color range (16-235 vs 0-255); the primary difference between DNxHD and typical H.264 implementations is compression. An all I-frame H.264 with minimal compression is essentially a 4:2:0 version of an 8-bit DNxHD. H.264 specs include 4:2:2, 4:4:4 and 10-bit profiles, but most implementations do not support these variants (hence the popularity of ProRes and DNxHD as intermediate codecs).
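
    The 4:2:0 vs 4:2:2 vs 4:4:4 distinction can be made concrete by counting stored chroma samples per frame. A quick illustrative sketch (my own, using the standard decimation factors for each scheme):

```python
# Chroma samples stored per plane for each subsampling scheme -- a quick way
# to see what a "4:2:0 version" of an intermediate codec means in practice.
def chroma_samples(width, height, scheme):
    # (horizontal, vertical) chroma decimation factors
    factors = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}
    h, v = factors[scheme]
    return (width // h) * (height // v)

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, chroma_samples(1920, 1080, scheme))
# 4:4:4 2073600  (full color resolution)
# 4:2:2 1036800  (half: typical intermediate codecs)
# 4:2:0 518400   (quarter: typical DSLR H.264)
```

    Note that none of this affects the 16-235 vs 0-255 range issue: even a 4:4:4 transcode will truncate out-of-range values if the levels are not handled first.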

  • Kevin Duffey

    May 2, 2013 at 8:19 pm

    Thank you again for the info. More info than I can shake a stick at! I am still not quite sure about the whole YUV, Rec.709, and RGB differences in terms of actual visual quality on, say, a 55″ HD screen or the big cinema screen.

    Question though, can DNxHD save in 4:4:4 quality? Or is it limited to 4:2:0 and that is why it’s losing anything over 100? Or am I totally misunderstanding this?

    Any links that you know of that may explain a bit more about this?

    Thank you.

  • Matt Campbell

    May 2, 2013 at 8:43 pm

    Interesting info. But I’m still struggling with this too. With DSLR and others shooting H.264 @ 420 color space, how is that better than transcoding to ProRes @ 422? I know you can’t add in quality if it’s only 420 to start, but don’t you at least gain more head room for correction and grading?

    Also, when importing footage into Premiere, can you control how it interprets the footage? Are there any preferences or settings similar to what Avid MC offers? There you can select RGB, 601 SD/709 HD, or computer RGB. I know with FCP you could not do this; everything was 601 or 709.

    And with that, why would you want Premiere to upRez everything to 444 RGB? Why is this a good thing if you’re going to broadcast with 709 422 (YCbCr, or whatever it is)?

    Just trying to wrap my head around this since I’m in the process of transitioning to Premiere.


  • Ryan Holmes

    May 2, 2013 at 8:53 pm

    [Kevin Duffey] “Any links that you know of that may explain a bit more about this? “

    Alex Lindsay from the Pixel Corps did some podcasts on this topic about 7 years ago called The Road to 1080p. It’s six parts, I think, but in parts 1 and 2 he covers resolution, frame rates, and color space. It might help you get your mind around this stuff better.

    Part 1:
    https://www.channels.com/episodes/14309670

    Part 2:
    https://www.channels.com/episodes/show/12839422/The-Road-to-1080p-part-2-

    And yes, DNxHD has a 4:2:2 option and a 4:4:4 option (ProRes does as well).

    Ryan Holmes
    http://www.ryanholmes.me
    @CutColorPost

  • Ivan Myles

    May 2, 2013 at 9:19 pm

    DNxHD supports 4:4:4 (fully sampled) and 4:2:2 (chroma sub-sampled). The issue with truncating data above 100 IRE relates to color range and not chroma sampling.

    This white paper provides more information on DNxHD. I found this post discussing 4:2:2 vs 4:2:0 encoding. In summary, the difference is more important for interlaced video.

    Ultimate video quality depends on many factors beyond color space (hardware capability, calibration, file compression, etc).

  • Kevin Duffey

    May 2, 2013 at 9:42 pm

    Nice, thanks. I’ll take a look at the white paper. So transcoding to DNxHD from H.264 or AVCHD.. what you’re basically saying is that the color of the original source may get truncated during the transcode, losing information. I’m then wondering: why, and is this true of all transcoders, or is it different for each one? The reason I ask is, if DNxHD (and I suspect ProRes) are such industry standards, why would any software developer write the DNxHD encoder in such a way that it loses information during the transcode?

    So I should be safe if/when I can record with a BM Pocket Camera or Cinema Camera that records in DNxHD right?

  • Ivan Myles

    May 2, 2013 at 11:19 pm

    [Matt Campbell] “With DSLR and others shooting H.264 @ 420 color space, how is that better than transcoding to ProRes @ 422? I know you can’t add in quality if it’s only 420 to start, but don’t you at least gain more head room for correction and grading?”

    Provided there are no issues interpreting the color data of the source file, as described above, the main disadvantage of transcoding from H.264 to ProRes or DNxHD is the time and disk space required to create the files. There is potential for generational loss from the extra encode cycle, but the impact should not be significant.

    I’m not sure what you mean by headroom in this context. Effects will be processed the same regardless of whether the footage is H.264 or a ProRes/DNxHD copy of the original 4:2:0 data. If you plan to export an edited file for external color correction, then 4:2:2 and 4:4:4 exports are preferred over 4:2:0.

    [Matt Campbell] “Also, when importing footage into Premiere, can you control how it interprets the footage? Are there any preferences or settings similar to what Avid MC offers? There you can select RGB, 601 SD/709 HD, or computer RGB. I know with FCP you could not do this; everything was 601 or 709.”

    601, half dozen of the other. 🙂

    I am not aware of this functionality; hopefully other posters will respond. FWIW, Interpret Footage does not include it.

    [Matt Campbell] “And with that, why would you want Premiere to upRez everything to 444 RGB? Why is this a good thing if you’re going to broadcast with 709 422 (YCbCr, or whatever it is)?”

    Premiere works in both YCC and RGB, but the differences are negligible at 32 bpc. 4:4:4 is better for internal processing because interpolation issues are minimized when effects are applied to the clips.

