Forum Replies Created

Page 1 of 9
  • Dennis Couzin

    August 3, 2010 at 1:22 am in reply to: What to use ProRes or ProRes HQ

    [gary adcock] “Sure is- Then explain to me how you can subsample RGB”

    Gary, what’s getting silly is that two men pictured side-by-side on the Creative Cow FCP forum gave opposite answers to the simple question: “Is the RGB really subsampled 4:2:0?” and no one notices.

    It’s also silly that you can write “Red contributes to the contrast but not the luma in video” on one day and the next day write “In the RGB color space all channels hold both Chroma and Luma.”

    You also have a wacky idea of subsampling which only allows chroma to be subsampled. Subsampling as commonly done amounts to the pixels being larger for some channels than for others (with the large pixels composed of the small pixels). The channels with the larger pixels are said to be subsampled. We agree that it is visually smart to subsample the chroma channels, keeping the luminance channel’s pixels small. But this doesn’t make other subsampling impossible. If a camera made the red pixels double the size of the green pixels and the blue pixels double the size of the red pixels, this wouldn’t be terribly stupid from a visual standpoint, and it would reduce bandwidth (or file size) by 42%. We don’t have a notation for such R’G’B’ subsampling. Notations such as 4:2:0 were developed specifically for Y’CbCr chroma subsampling. Hence my original questions to Garchow. But more is allowed in this world than you wish to imagine.
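    A quick back-of-envelope check of that 42% figure (my own arithmetic for this hypothetical scheme, not any real format):

```python
# Hypothetical R'G'B' subsampling from the post: green keeps one
# sample per pixel, red pixels are double the size (half the samples),
# blue pixels double the red (a quarter of the samples).
samples_per_pixel = {"G": 1.0, "R": 1.0 / 2, "B": 1.0 / 4}

total = sum(samples_per_pixel.values())   # 1.75 samples per pixel
full_rgb = 3.0                            # unsubsampled R'G'B'
savings = 1 - total / full_rgb

print(f"bandwidth saved: {savings:.0%}")  # → bandwidth saved: 42%
```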

  • Dennis Couzin

    August 1, 2010 at 10:16 pm in reply to: What to use ProRes or ProRes HQ

    This is getting silly. Jeremy Garchow mentioned “4:2:0 RGB source material” and I replied with the question “Is the RGB really subsampled 4:2:0?” Jeremy responded: “How else would it be?” Gary Adcock responded to my same question: “There is no such thing as subsampled RGB”. I hope Jeremy and Gary will reach an agreement.

    To say there is no such thing as subsampled RGB is an overstatement. Such a thing could exist in principle, as I described, although it wouldn’t be visually smart. Also, every Bayer-filtered RGGB sensor involves a kind of RGB subsampling. My original question really contains two: What does 4:2:0 RGB mean? And what image formats, if any, use it?

    [Dennis Couzin] “Red is a significant contributor to luminance and therefore to visual sharpness.”

    Gary disagrees with both parts of that statement.

    [Gary Adcock] “Red contributes to the contrast but not the luma in video. Much like the human eye, the contrasty nature of a red light and its rather short wavelength have the effect allowing you to “see” more, but in reality you are seeing is the edge contrast rather and actually luminance.”

    This is plain false, both for video and for vision. Just look at the definition of video luma:

    Y’ = 0.299 R’ + 0.587 G’ + 0.114 B’

    R’, which is gamma-corrected red, has a significant weight in the calculation. Likewise red contributes significantly to visual luminance. If red images on your RGB monitor look dark to you, I’m sorry, you’re prejudiced. Consider why yellow looks so light on your RGB monitor. This yellow consists of the monitor’s red plus its green. This yellow’s luminance equals the sum of the red’s luminance and the green’s luminance. So if the yellow looks significantly lighter than the green, the red must have significant luminance.
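    The yellow argument can be checked with the BT.601 weights quoted above (luminances add on a monitor, so this simple sum is the whole argument):

```python
# Relative luma from the BT.601 formula Y' = 0.299R' + 0.587G' + 0.114B'.
KR, KG, KB = 0.299, 0.587, 0.114

def luma(r, g, b):
    """Luma of a gamma-corrected R'G'B' triple, components in 0..1."""
    return KR * r + KG * g + KB * b

red    = luma(1, 0, 0)   # 0.299 -- hardly negligible
green  = luma(0, 1, 0)   # 0.587
yellow = luma(1, 1, 0)   # 0.886 = red + green: yellow looks much
                         # lighter than green because red carries luma
```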

    Does the false belief that red doesn’t contribute to video luma come from use of the poor notation Y’CbCr?

  • Dennis Couzin

    July 28, 2010 at 10:54 pm in reply to: What to use ProRes or ProRes HQ

    Jeremy, sorry for insinuating that you dislike the Y’CbCr notation as I do. I find the symbols Cr and Cb inapt. Cr is no more about red than about green. Cb is no more about blue than about yellow. So even in a knowledgeable discussion at xlinx.com there comes this terribly sloppy statement: “Engineers found that 60 to 70 percent of luminance or brightness is found in the ‘green color’. In the chrominance part Cb and Cr, the brightness information can be removed from the blue and red colors.” Doug Kerr explains how the Cb,Cr notation came about: “The calculation of the analog quantities U and V underlying Cb and Cr involve B and R, respectively, thus the notation Cb and Cr.” This doesn’t mean, for example, that the quantity V involves the quantity R to the exclusion of the quantity G. It only means that the simplest writing of the formula for V uses the letter “R” without using the letter “G”. But the quantity G is contained in the quantity Y’ whose letter is used in that formula. Bone-headed engineers. Bad notation sows nonsense, including nonsense justifying the notation.

    Sorry I forgot that the original poster was talking about RGB source material, not YUV. Is the RGB really subsampled 4:2:0? Does this mean that there is Green data for every pixel but Blue and Red data for alternating 2×2 blocks of pixels? That’s a horrible subsampling from the visual standpoint and another example of the identification of Green with luminance even though Red is a significant contributor to luminance and therefore to visual sharpness.

  • Dennis Couzin

    July 27, 2010 at 7:21 pm in reply to: What to use ProRes or ProRes HQ

    [Jeremy Garchow] “Why wouldn’t it?”
    The question was whether ProRes might do RGB-to-YUV conversion differently from ProResHQ. I assumed simplicity. A YUV codec crunching some RGB stuff first converts it to YUV and then compresses it. The color space conversion is multiplication by a 3×3 matrix. Why should a codec aiming at a greater compression use a different matrix than a codec aiming at lesser compression uses? What’s gained by changing the numbers in the matrix?
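    The “3×3 matrix” point can be made concrete. A minimal sketch, using the BT.601 full-range coefficients (real codecs add offsets, scaling, and clamping on top of this):

```python
# RGB to Y'CbCr is one matrix multiply per pixel; there is nothing in
# the matrix itself for a codec to vary with its compression target.
M = [
    [ 0.299,     0.587,     0.114   ],   # Y'
    [-0.168736, -0.331264,  0.5     ],   # Cb
    [ 0.5,      -0.418688, -0.081312],   # Cr
]

def rgb_to_ycbcr(r, g, b):
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in M)

# White maps to full luma and zero chroma, whichever codec applies it.
y, cb, cr = rgb_to_ycbcr(1.0, 1.0, 1.0)
```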

    I can imagine a more aggressive codec chiselling more on the color subsampling over large parts of the image, and even on the bit-depth over large parts of the image, but what could be accomplished by deviating from the RGB-to-YUV conversion parameters in the unbusy parts of the image where color and tonality are most visible? Could a hyper-aggressive codec clip the range of Y, U, or V? That would require messy coding for decoding and would be a visual disaster for very little gain.

  • Dennis Couzin

    July 27, 2010 at 6:44 pm in reply to: What to use ProRes or ProRes HQ

    Gary,
    Item 1. I was wrong here. The White Paper does say ProRes4444 supports 12-bit and does not say this is an option. If ProRes4444 is a 12-bit codec, then it is doing an extreme chiselling of the image in order to stream (without its alpha channel) at just 150% the rate of ProRes422HQ. Going from 422 to 444 makes 150%. Going from 10-bit to 12-bit makes 120%. Together that’s 180%. This is puzzling. This should be discussed in a separate strand about compression.
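    The puzzle in numbers (my arithmetic, from the factors stated above):

```python
# 4:2:2 -> 4:4:4 adds 50% more samples; 10-bit -> 12-bit adds 20%
# more bits per sample. Compounded, the raw data should grow to 180%,
# yet the ProRes4444 stream is only 150% of ProRes422HQ.
sample_factor = 3.0 / 2.0     # samples per pixel, 4:4:4 vs 4:2:2
depth_factor = 12.0 / 10.0    # bits per sample, 12 vs 10

print(f"{sample_factor * depth_factor:.0%}")   # → 180%
```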

    Item 2. I’ve witnessed smart people referring to Cr as a red component and to Cb as a blue component and have come to believe that SMPTE made a poor choice of terminology. Notice that Jeremy Garchow also uses the YUV terminology in this strand. When the purpose is just to contrast with RGB, it’s preferable.

    Item 3. The statement of yours I challenged was:
    [gary adcock] “Using the HQ codec only offers advantages if you are planning on doing heavy effects or corrections within the ProRes codec or the FCS3 suite, and if you are you should be working in the lossless 4444 version, You gain nothing going to HQ if the compositing is being done outside of the FCP/ Color workflow, nothing at all.”

    Item 4. I’ve never used one — too damned heavy — but I’ve hung around a RED camera shooting session and also edited/manipulated some raw RED output. By “output” I mean the files stored on the drive, not the irrelevant video output. For clarity I should have said “capture”. The camera captures a kind of quasi 4:2:2 YUV. You’re right that the RED’s capture being more than 10-bit commands the 12-bit codec even if the codec is RGB rather than YUV.

    Item 5. The statement of yours I challenged was the same one quoted above. So long as we agree that ProRes4444 is far from lossless these verbal disputes don’t matter.

  • Dennis Couzin

    July 26, 2010 at 8:49 pm in reply to: What to use ProRes or ProRes HQ

    ProRes4444 is 10-bit RGB+alpha. If you look at the data rate for “ProRes444” (that is, ProRes4444 without the alpha channel) it’s exactly 150% of the ProRes(422)HQ data rate. In other words, ProRes444 does the same degree of compression as ProResHQ except it does it on full RGB instead of on 4:2:2 subsampled YUV — this explains the 150%. (Allow me to write YUV instead of Y’CbCr, since the terms Cr and Cb tend to be misunderstood.)

    If the original material is 4:2:2 YUV, as it is in Anthony DeRose’s original example, why does Gary Adcock suggest that transcoding this to 4:4:4 RGB (ProRes4444) is better than transcoding it to 4:2:2 YUV (ProResHQ)?

    REDCode material, derived from the RGGB Bayer filtered sensor, is not straight RGB. If RED patent application 12/422,507 is to be believed, the camera outputs a kind of quasi 4:2:2 YUV. So here too, why is transcoding it to 4:4:4 RGB (ProRes4444) better than transcoding it to 4:2:2 YUV (ProResHQ)?

    Why does Gary Adcock call ProRes4444 lossless? It is 5.7:1 compressed versus the corresponding uncompressed original. Again, the Apple White Paper is honest about this, describing its quality headroom as “very high, excellent for multi-gen. finishing.” It uses the same words for ProResHQ and explains why.

  • Dennis Couzin

    July 26, 2010 at 4:56 pm in reply to: What to use ProRes or ProRes HQ

    Jeremy Garchow: “Dennis, can you explain more as I’m not following you? ProRes is not CBR. I follow your math logic, but that’s not all of the picture. You skipped the 8bit RGB to 10bit YUV conversion and whether or not ProRes or HQ is equal in that regard.”

    Concerning ProRes being VBR, according to the Apple White Paper of July 2009 “the variability is usually small”. Concerning 8 bit to 10 bit conversion, it adds 25% of data to the ProRes stream, strengthening my numerical argument. 10 bits lets us ignore the rounding errors in the RGB to YUV conversion. Why would ProRes do this differently from ProRes HQ?

    For reference, compare ProRes data rates with the uncompressed 10-bit 4:2:2 data rate. Uncompressed requires 1244 Mb/s for 1080 30p. (Or 1327 Mb/s if it uses 32 bits to convey 30 bits as it does in the .mov file.) So ProRes (147 Mb/s) does 8.5:1 compression and ProRes HQ (220 Mb/s) does 5.7:1. These are significant rates of compression. Recall that DV25 did 6.6:1 compression (relative to uncompressed 8-bit 4:2:2 for the corresponding number of pixels). Obviously data rates don’t completely describe different image compressions but they are indicators. The Apple White Paper discusses ProRes vs. ProRes HQ quality very honestly.
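    Those figures are easy to verify (my arithmetic, using the White Paper rates quoted above):

```python
# 10-bit 4:2:2 carries 20 bits/pixel: one Y' sample per pixel plus
# one Cb and one Cr sample per pixel pair.
width, height, fps = 1920, 1080, 30
bits_per_pixel = 10 * 2

uncompressed = width * height * fps * bits_per_pixel / 1e6   # Mb/s
print(round(uncompressed))              # → 1244

print(round(uncompressed / 147, 1))     # ProRes:    → 8.5 (:1)
print(round(uncompressed / 220, 1))     # ProRes HQ: → 5.7 (:1)
```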

    Transcodings involve the interactions of two different codecs. It is a safe conclusion that it is desirable to transcode to a codec that uses less intraframe compression than the original.

  • Josh Litle: “They are (in theory) mathematical operations based on the actual pixel values present in the files.”

    They are mathematical operations, but unfortunately they are not both based on the actual pixel values present in the files. The uncompressed 10-bit 4:2:2 file contains actual pixel values. (I’ve viewed and played with the values for uncompressed 8-bit in a binary editor and it’s fun.) But the ProRes file does not contain actual pixel values. That file needs to be interpreted (decoded) to produce pixel values. The decoder can play games with little details like clipping, tone compression, even gamma.

    In line with similar experiments I’ve done, I suggest you operate on a grey scale and read the RGB values (in Photoshop) after each step. At least you’ll determine the nature of the difference. Then you must determine if it’s real or an artifact of your tools.

  • Dennis Couzin

    July 26, 2010 at 4:56 am in reply to: What to use ProRes or ProRes HQ

    It’s not so simple. Not all H.264 has been smushed to the point where ProRes and ProRes HQ transcodes are indistinguishable. It depends on the bit rate of the H.264. If it is 1080 30p and exceeds 30 Mb/s then definitely go with HQ.

    Don’t be tricked by direct comparison with the ProRes data rate 147 Mb/s (for 1080 30p). ProRes uses no interframe compression. A codec like H.264 uses interframe compression and gains a compression factor of between 5 and 10 for this. That is, intraframe, ProRes at 147 Mb/s is comparable to H.264 at somewhere between 15 and 30 Mb/s. Intraframe, ProRes HQ at 220 Mb/s (for 1080 30p) is comparable to H.264 at somewhere between 22 and 44 Mb/s. It is desirable to transcode to a codec that uses less intraframe compression than the original.
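    The equivalent-rate arithmetic, spelled out (the 5× to 10× interframe gain is the assumption stated above):

```python
# Dividing a ProRes intraframe rate by an assumed interframe gain
# gives the H.264 rate it is roughly comparable to.
def h264_equivalent(prores_mbps, interframe_gain):
    return prores_mbps / interframe_gain

print(h264_equivalent(147, 10), h264_equivalent(147, 5))   # 14.7 29.4
print(h264_equivalent(220, 10), h264_equivalent(220, 5))   # 22.0 44.0
```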

  • Mind your B’s and b’s:

    200 MB/1069 sec = 0.1871 MB/s

    0.1871 MB/s = 1569 kb/s

    1569 kb/s – 128 kb/s = 1441 kb/s

    You used an incorrect MB/s to kb/s conversion which multiplied by 8192. One MB is 1024^2 Bytes. One kb is not 1024 bits, but exactly 1000 bits. (Conventions switch as you switch from the computer context to the data transmission context.) Therefore you must multiply MB/s by 8388.6 to find kb/s.
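    The corrected conversion, step by step:

```python
# MB is binary (1024^2 bytes); kb is decimal (exactly 1000 bits).
MB_TO_KB = 1024 * 1024 * 8 / 1000     # 8388.608, not 8192

rate_MBps = 200 / 1069                # 200 MB over 1069 seconds
rate_kbps = rate_MBps * MB_TO_KB

print(round(rate_kbps))               # → 1569
print(round(rate_kbps - 128))         # minus the 128 kb/s → 1441
```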

