Creative Communities of the World Forums

The peer to peer support community for media production professionals.

Creative Community Conversations: BMCC alternate workflow

  • Walter Soyka

    September 11, 2012 at 4:21 am

    [Jeremy Garchow] “The “proper color management” would have been to have shot it in 709 to begin with. After that, anything you do will be diverging from the original file.”

    I do agree that the 5D MkII should have used Rec. 709, but we don’t have much control over that.

    We need to define color for the purpose of this conversation. A color is not a specific RGB value; it’s what’s represented by a specific RGB value in a specific color space.

    Color management is all about accuracy and consistency. It works by identifying the relationships between actual color and specific RGB values in various color spaces, and then doing math behind the scenes to preserve that actual color by changing RGB values as necessary when transforming from one space to another.

    Let’s use a real example — the Creative COW orange (as observed on Bessie’s snout above): RGB [255,156,0] in both sRGB and Rec. 709. That same orange is RGB [255,155,4] in Adobe’s SDTV NTSC profile, [232,155,36] in Adobe RGB, [240,157,68] in DCI P3 Neutral at D55, and [255,152,41] in my MacBook Pro’s custom display profile.

    To accurately see the color that sRGB represents as [255,156,0], my graphics card has to send [255,152,41] to my monitor. Same orange, but different RGB values, depending on the color space.
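    In code, that transform looks something like this: a bare-bones sketch using the published sRGB (D65) and Adobe RGB (1998) matrices. A real CMS (like the one in Ae) also handles white-point adaptation and rendering intents, so treat the numbers as approximate.

```python
# Sketch of what a color-management system does under the hood:
# decode the source transfer curve, pass through a device-independent
# space (CIE XYZ), then encode for the destination space.

SRGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]]

XYZ_TO_ADOBE = [[ 2.0414, -0.5649, -0.3447],
                [-0.9693,  1.8760,  0.0416],
                [ 0.0134, -0.1184,  1.0154]]

def srgb_to_linear(c):
    # Undo the sRGB transfer curve (c in 0..1)
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def mat_mul(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# The Creative COW orange, sRGB [255, 156, 0]
linear = [srgb_to_linear(c / 255.0) for c in (255, 156, 0)]
xyz = mat_mul(SRGB_TO_XYZ, linear)
adobe_linear = [min(max(c, 0.0), 1.0) for c in mat_mul(XYZ_TO_ADOBE, xyz)]
# Adobe RGB (1998) uses a pure gamma of 563/256 (roughly 2.2)
adobe = [round((c ** (256 / 563)) * 255) for c in adobe_linear]
print(adobe)  # close to the [232, 155, 36] quoted above
```

Same perceived orange, new RGB code values; that re-encoding is all the "transform" is.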

    When we talk about interpretation, we mean specifying the profile with which the colors in the asset are encoded so we know what actual colors the RGB numbers are supposed to represent.

    When we talk about transforming a color from one space to another, we actually (and perhaps confusingly) mean keeping the perceived color the same. The transformation is mathematical, not perceptual, changing the RGB values as necessary to keep the displayed color the same (see above for how one orange has different RGB values in different color spaces).

    So if we interpret files with the wrong color profile, or if we don’t manage at all going from one profile to another, then by preserving the original file’s RGB data, we are in fact diverging from the original file’s intended actual colors.

    In the real world, the 601/709 difference is very small — probably imperceptibly small in all but the most saturated colors. Reading a little more on the Canon DSLR stuff, the huge difference when using 5DtoRGB is more likely due to the fact that Canon’s wacky H.264 files are encoded full range, not video range.
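    A toy sketch of that range problem (not Canon's or QuickTime's actual pipeline): if the file stores full-range luma (0–255) but the decoder assumes video range (black at 16, white at 235), the decoder stretches 16–235 out to 0–255, crushing everything below 16 and clipping everything above 235.

```python
# Expand assumed video-range luma (16-235) to full range (0-255).
# Applied to data that was really full range, this crushes shadows
# to black and clips highlight detail to white.
def video_range_decode(y):
    return max(0, min(255, round((y - 16) * 255 / 219)))

print(video_range_decode(16))   # code 16 becomes pure black: 0
print(video_range_decode(235))  # code 235 becomes pure white: 255
print(video_range_decode(250))  # detail above 235 clips: 255
```

That wholesale contrast change dwarfs the subtle hue shift you'd get from a 601-vs-709 matrix swap.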

    [Jeremy Garchow] “Besides, I thought the older Canon stuff was RGB anyway, not YCrCb.”

    You can’t do chroma subsampling with RGB-stored data, which is one of the best ways to reduce the data rate for visuals. Chroma sub-sampled H.264 such as Canon’s is actually YCbCr, though most (all?) H.264 encoders expect RGB and transform to YCbCr as the first step of encoding and most (all?) decoders transform the YCbCr back to RGB.
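    For the curious, here's roughly what that first encoding step looks like, using the standard BT.601 luma weights in their full-range (JPEG-style) form; real encoders work on whole frames and may use the video-range variant.

```python
# Convert one RGB pixel to YCbCr so the chroma planes (Cb, Cr) can be
# subsampled. Coefficients are the BT.601 luma weights, full-range form.
def rgb_to_ycbcr(r, g, b):
    y  =         0.299    * r + 0.587    * g + 0.114    * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128.0 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

# The Creative COW orange from the example above:
print(rgb_to_ycbcr(255, 156, 0))  # (168, 33, 190)
```

The Y plane carries most of the perceived detail, which is why the Cb/Cr planes can be stored at quarter resolution (4:2:0) with little visible loss.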

    Walter Soyka
    Principal & Designer at Keen Live
    Motion Graphics, Widescreen Events, Presentation Design, and Consulting
    RenderBreak Blog – What I’m thinking when my workstation’s thinking
    Creative Cow Forum Host: Live & Stage Events

  • Jeremy Garchow

    September 11, 2012 at 4:53 am

    [Walter Soyka] “Let’s use a real example — the Creative COW orange (as observed on Bessie’s snout above): RGB [255,156,0] in both sRGB and Rec. 709. That same orange is RGB [255,155,4] in Adobe’s SDTV NTSC profile, [232,155,36] in Adobe RGB, [240,157,68] in DCI P3 Neutral at D55, and [255,152,41] in my MacBook Pro’s custom display profile.”

    OK, but where did Bessie’s orange start? Was it recorded as 601? 😉

    Again, I’m not smart enough.

    [Walter Soyka] “To accurately see the color that sRGB represents as [255,156,0], my graphics card has to send [255,152,41] to my monitor. Same orange, but different RGB values, depending on the color space.”

    This is my point. If you NEED to transform to a differing display/output, then color management makes sense.

    In the case of Rafael’s specific questions, there is no need to transform, unless of course you want to. If you want to, you will break the color management, though, as it will no longer be consistent. Isn’t that the point, to keep orange the same color no matter where you go?

    If you do transform, it will look different in Ae than it does in FCP or even Pr. Why would you ever want to do this in this specific example? I know when it is necessary; I just don’t see it as necessary to Rafael’s original questions.

    And finally, do you profile SD material in an HD timeline? Why or why not?

  • Walter Soyka

    September 11, 2012 at 5:42 am

    [Jeremy Garchow] “OK, but where did Bessie’s orange start? Was it recorded as 601? ;)”

    That’s a great question. In my example, I assumed sRGB (since PNGs can’t be tagged).

    [Jeremy Garchow] “In the case of Rafael’s specific questions, there is no need to transform, unless of course you want to.”

    Right. And if Rafael wants the colors as intended by the camera, he must transform. However, having read more about the camera, I don’t think that just using a 601 profile will be sufficient (see above and below on range).

    [Jeremy Garchow] “If you want to, you will break the color management, though, as it will no longer be consistent. Isn’t that the point, to keep orange the same color no matter where you go?”

    There’s consistency, and there’s accuracy. You can be consistent without being accurate if you are consistently wrong.

    I agree with you that having the same wrong orange everywhere is way better than having a right orange in one place and wrong oranges elsewhere. One bad orange does spoil the whole bunch.

    Being both consistent and accurate is not a bad goal, and transforming the funky-color files properly to Rec. 709 may allow you to be both consistent and accurate.

    [Jeremy Garchow] “If you do transform, it will look differently in Ae, than it will in FCP or even Pr. Why would you ever want to do this in this specific example? I know when it is necessary, I just don’t see it necessary to Rafael’s original questions.”

    That’s because Ae does it right and FCP7 and Pr do it wrong. If you “burn in” the transform to 709 when you transcode, and everything else you have is 709, then you can just use an unmanaged workflow thereafter.

    Check out some 5DtoRGB demos. The difference between using 5DtoRGB to decode the H.264 and QuickTime to decode it is staggering. This is hugely larger than the difference I would expect to see between straight up Rec. 601 and Rec. 709. As I mentioned before, I think the full range vs video range issue in decode is so much more significant than the 601/709 issue that it’s not even funny. That said, I am not an expert on the 5D MkII color science by any means, and I don’t know what’s going on under the hood aside from what I’m reading tonight on the internet.
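    For what it's worth, the matrix difference itself is easy to sketch (a toy example, not what any of these decoders literally does): decode the same YCbCr pixel with the 601 and the 709 luma coefficients and compare.

```python
# Decode one full-range YCbCr pixel to RGB with a given pair of luma
# coefficients (kr, kb). BT.601 uses kr=0.299, kb=0.114; BT.709 uses
# kr=0.2126, kb=0.0722. Decoding with the wrong matrix shifts colors,
# but usually only slightly.
def ycbcr_to_rgb(y, cb, cr, kr, kb):
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * (cr - 128)
    b = y + 2.0 * (1.0 - kb) * (cb - 128)
    g = (y - kr * r - kb * b) / kg
    return tuple(round(max(0.0, min(255.0, v))) for v in (r, g, b))

# A saturated orange, encoded with the 601 matrix:
pixel = (168, 33, 190)
print(ycbcr_to_rgb(*pixel, kr=0.299, kb=0.114))    # 601 decode: (255, 156, 0)
print(ycbcr_to_rgb(*pixel, kr=0.2126, kb=0.0722))  # 709 decode: (255, 157, 0)
```

Even for a heavily saturated pixel the mismatch is a channel or two, which is consistent with the range issue, not the matrix, being the dramatic part of those demos.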

    [Jeremy Garchow] “And finally, do you profile SD material in an HD timeline? Why or why not?”

    No, because I am lucky enough to not have to deal with SD in HD timelines.

    Going back to the 2011 SuperMeet, I was really stoked about getting color management plus 32-bit FP linear processing in FCP Awesome, because color management and compositing in NLEs were still so painful. Of course, if your NLE makes color management easy — why not do it right?

    For compositing, 3D, and mograph, I do usually work in a color-managed pipeline (often linear). I want color to be as consistent as possible across a project, no matter what platform, machine, or software I’m working with.

    Sadly, all bets are off for final display outside our little bubbles.


  • Rafael Amador

    September 11, 2012 at 9:38 am

    [Walter Soyka] “[Jeremy Garchow] “Besides, I thought the older Canon stuff was RGB anyway, not YCrCb.”

    You can’t do chroma subsampling with RGB-stored data, which is one of the best ways to reduce the data rate for visuals. Chroma sub-sampled H.264 such as Canon’s is actually YCbCr, though most (all?) H.264 encoders expect RGB and transform to YCbCr as the first step of encoding and most (all?) decoders transform the YCbCr back to RGB.”
    Right, that is why when you try to edit H.264 (YCbCr) in FCP and conform the sequence to the footage, the sequence gets set to “Render in 8-bit RGB”.

    And if FCP treats H.264 as RGB, we can have issues with whites/superwhites: video range vs. full range, as Walter points out here:

    [Walter Soyka] “Check out some 5DtoRGB demos. The difference between using 5DtoRGB to decode the H.264 and QuickTime to decode it is staggering. This is hugely larger than the difference I would expect to see between straight up Rec. 601 and Rec. 709. As I mentioned before, I think the full range vs video range issue in decode is so much more significant than the 601/709 issue “
    rafael

    http://www.nagavideo.com

  • Jeremy Garchow

    September 11, 2012 at 2:40 pm

    [Rafael Amador] “And if FCP treat H264 as RGB we can be having issues of Whites/SuperWhites. Video range vs Full Range as Walter points here:”

    I was wrong.

    I think that the color space was sRGB, which makes it close to 709 in chromaticity…?

    Overall, it is still closer to 601 due to pixel count or something? I have to find that article.
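    For reference, sRGB and Rec. 709 do share the same primaries and D65 white point, so their chromaticities match exactly; what differs is the transfer function. A quick sketch of the two encoding curves:

```python
# The two published opto-electronic transfer functions, for linear
# light l in 0..1. Same primaries, different curves.
def srgb_oetf(l):
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def bt709_oetf(l):
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

# Mid-grey (18% linear reflectance) encodes differently under each:
print(round(srgb_oetf(0.18), 3))   # about 0.461
print(round(bt709_oetf(0.18), 3))  # about 0.409
```

So "sRGB vs. 709" is a gamma question, while "601 vs. 709" is a matrix (chromaticity-weighting) question; they're separate axes of the interpretation problem.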

    I’ve done tests with 5DtoRGB with every single combination.

    When compared to logged and transferred footage, it either looks very much the same (using a 709 matrix causes a very slight shift in the red range) or it ends up much darker, as if the gamma is off.

    I have an older version of 5dtoRGB and the results are even worse, and the controls are very different.

    I want to believe 5dtoRGB but I just can’t find how it’s doing anything different than the log and transfer plugin does already, or what Pr does natively, smoothing actions aside.

    Adobe actually changed the code for mkIII clips to reflect the new 709 space now used by Canon.

    I believe my eyes, guys. If you can physically show me how transforming these clips makes a difference, I’ll hear you out.

    For now, I can place that movie in any software that I use without touching it, and it looks the same. Color managing it in Ae would only serve to change it for no reason, just as a different setting in 5dtoRGB will bake in a look I don’t want. I know what you guys are saying, ideally you’d want everything to be “right” from the start, so do I.

    DSLR footage is technically not right from the start, but in my practice and experience transforming it does nothing but sour the pipeline as you toss consistency out. The damage has been done.

    I never use QuickTime Player to transcode anything, so there’s that caveat.

  • Rafael Amador

    September 11, 2012 at 3:20 pm

    [Jeremy Garchow] “I’ve done tests with 5DtoRGB with every single combination.

    When compared to logged and transferred footage, it either looks very much the same (using a 709 matrix causes a very slight shift in the red range) or it ends up much darker, as if the gamma is off.”
    If the picture gets darker when you set “Video Range”, that’s normal.

    [Jeremy Garchow] “I want to believe 5dtoRGB but I just can’t find how it’s doing anything different than the log and transfer plugin does already, or what Pr does natively, smoothing actions aside. “
    For myself, the most interesting function of 5DtoRGB is the chroma re-sampling.
    As you say, “the damage is done,” but a wise chroma interpolation can help to mitigate the damage.
    Video “plastic surgery” is part of our job.

  • Walter Soyka

    September 11, 2012 at 3:29 pm

    [Jeremy Garchow] “I believe my eyes guys. If you can physically show me how transforming these clips makes a difference, I’ll hear you out.”

    I believe the math, and the math says it makes a difference.

    However (and this is a very big however), that difference may be small enough to not be worth the trouble in all but the most color-critical applications.

    Anyone got some good footage (any format) they can share with a single frame or small series of frames that shows a fairly broad gamut? I could do a test later to illustrate the difference and I don’t want to use a synthetic image here.


  • Jeremy Garchow

    September 11, 2012 at 3:32 pm

    I was going to offer some. I can get it to you later.

  • Walter Soyka

    September 11, 2012 at 3:41 pm

    [Jeremy Garchow] “I was going to offer some. I can get it to you later.”

    Cool, I’ll keep an eye out for it. Thanks.


  • Jeremy Garchow

    September 12, 2012 at 1:26 pm

    Here is that article I mentioned.

    https://colorbyjorg.wordpress.com/2011/01/14/canon-dslr-video-uses-bt-601-sd-matrix-instead-of-bt-709-hd/

    Rafa, I saw your new post above.

    I appreciate you taking the time to put that together, but I understand the fundamental differences in the specs.

    What is confusing to me is what you and I might think is “correct” or “wrong”.

    To me, correct is consistent. As long as the footage remains consistent throughout, I can manage it. Having to constantly change the “interpretation” on every single application doesn’t make sense to me in regards to broadcast video. Even 5DtoRGB is consistent with every other application unless of course you like to change it for the worse! 🙂

    All the applications I have used so far know how to handle older Canon footage to maintain consistency. My argument stands that you don’t have to change anything.

    Jeremy
