Dennis Couzin
Forum Replies Created
-
Gary Adcock, what you wrote was: “the limit for a “video” signal stands at what it is and I do not foresee it changing in my lifetime since humans do not have the visual acuity to see EVEN 1024 levels of gray at one time.” The logic of this statement presumes that since humans can’t distinguish 1024 grey levels they can’t distinguish ANY of the steps in the linear 1024 grey levels. They can. Linear vs. nonlinear scales is fundamental, not a minutia, for digital video.
-
[gary adcock]: “humans do not have the visual acuity to see EVEN 1024 levels of gray at one time.”
This remark overlooks the fundamental weakness of linearly coded intensity levels. Yes, it is true that humans can’t discriminate 1024 levels of gray at one time. But the 1024 levels 0, 1, 2, … 1023 used in linearly coded video do allow some discriminations. For example, while it is absolutely impossible to see the difference between the 1022 level and the 1023 level, or even between the 500 level and the 501 level, it is easy to see the differences among the first 10 or 20 steps. (See how CIE Publication 15 defines L* to understand this.) This is why linearly coded intensity is wasteful of bandwidth. The linear steps are much too close at the high end and not close enough at the low end.
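The point can be checked numerically. Here is a minimal sketch, assuming CIE 1976 lightness L* as the perceptual scale and treating a step of roughly delta-L* = 1 as about the threshold of visibility (both are my modeling assumptions, not part of the thread):

```python
# Sketch: perceptual size of each step in a 10-bit *linear* intensity scale,
# measured in CIE 1976 lightness L* (defined in CIE Publication 15).
# Assumption: a step of about delta-L* = 1 is roughly the visibility threshold.

def lightness(Y):
    """CIE L* for relative luminance Y in [0, 1]."""
    return 116 * Y ** (1 / 3) - 16 if Y > (6 / 29) ** 3 else (29 / 3) ** 3 * Y

def step_size(code, levels=1024):
    """Delta-L* between adjacent linear codes `code` and `code + 1`."""
    return lightness((code + 1) / (levels - 1)) - lightness(code / (levels - 1))

for code in (0, 1, 10, 100, 500, 1022):
    print(f"step {code:4d} -> {code + 1:4d}: dL* = {step_size(code):.3f}")
```

The bottom steps come out near delta-L* of 0.9, close to visible, while steps from mid-scale up fall below 0.1, far below threshold: exactly the wasteful spacing described above.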
Fine tonal discrimination is even easier when one tone is moving over the other. It is because of motion that digital video requires closer tonal spacing than digital still images require.
-
Rafael, I beg to differ. Compressor converts (transforms) clips from one codec to another. I drop in DV coded material and Compressor spits out None coded material. The DV codec didn’t exist when the None codec was written, so Apple Compressor must be responsible for these results. And I’m sure there’s a flaw in the way Compressor makes None when “16 Colors” (4-bit depth) and quality less than 50 are selected. A grey scale in the original becomes a grey scale EXCEPT FOR one light brown stuck in the scale where you’d expect a dark grey. This is either a boner or a joke, and it makes me wonder about Apple’s competence or seriousness.
-
[Gary Adcock]: “YUV video is more analogous to the ICE-LAB colorspace than the RGB space, and if you did actually know these color spaces you would understand that RGB info is indicating how many colors as Chroma that can be recorded while YUV bit depth is determined by levels of Luma that can be captured.”
Sorry Gary, but I actually do know some color science, and RGB makes a hilariously distorted color space in the sense that distances in this space have hardly any relation to perceptual distances between colors. Thus the number of colors determined by the number of bits of R,G,B is no indication of the richness of the color gamut. On the other hand LAB and LUV spaces are attempts (by CIE in 1976) to model the perceptual color space. The number of bits of L,U,V does indicate the richness of the color gamut.
Do not confuse ‘chroma’ with color. Color space is 3-dimensional, whether the dimensions are R,G,B or L,U,V. If you doubt that luminance is a dimension of color, then explain why a brown and a yellow can be different colors despite having identical chroma.
-
Oh, so it’s more than display. Avid has a better DV-to-uncompressed 4:2:2 converter than FCP. DV compression being lossy, it makes sense that there are various ways to decompress it, some making less visual loss than others. Converting the 4:2:0 of DV-PAL to 4:2:2 only partially undoes the color subsampling, however. Something else, maybe the display, has to produce 4:4:4 ultimately.
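For concreteness, here is a minimal sketch of what “partially undoing” the subsampling involves: a 4:2:0 plane carries one chroma line per two luma lines, so a 4:2:0-to-4:2:2 converter must invent the missing chroma lines. The two strategies below (replication vs. averaging) are illustrative guesses at the simplest possible filters, not what FCP or Avid actually does, and they ignore chroma siting:

```python
# Sketch: two ways to rebuild 4:2:2 chroma from 4:2:0 chroma.
# A 4:2:0 plane has one chroma line per two luma lines; 4:2:2 needs one per line.
# (Illustrative only: real decoders use longer filters and must respect
# the codec's chroma siting, which this ignores.)

def upsample_replicate(chroma_lines):
    """Nearest-neighbor: repeat each chroma line. Cheap, blockier result."""
    out = []
    for line in chroma_lines:
        out.extend([line, line])
    return out

def upsample_average(chroma_lines):
    """Insert the average of adjacent chroma lines. Smoother result."""
    out = []
    for i, line in enumerate(chroma_lines):
        out.append(line)
        nxt = chroma_lines[min(i + 1, len(chroma_lines) - 1)]
        out.append([(a + b) // 2 for a, b in zip(line, nxt)])
    return out

cb = [[100, 100], [120, 120], [140, 140]]   # three 4:2:0 chroma lines
print(upsample_replicate(cb))
print(upsample_average(cb))
```

That different decoders can legitimately choose different filters here is one reason Avid’s and FCP’s DV-to-4:2:2 conversions can look different.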
A moral of my little experiment might be that digital display is where the funniest tricks get played. For example identical signals going into two displays can produce very different images, with different motion qualities. What displays (and projectors) do to video isn’t normally specified or even named.
It’s a pity if Apple offers inferior conversion out of DV. My experiment showed small (tonal) differences between the Quicktime made by FCP straight from the rendering and that using Quicktime conversion. I don’t know if there are other differences, and I didn’t try Compressor’s conversion which could be different again. (Is software amateurism Apple’s charm?) Are we sure that all three Apple routes from DV-PAL to 8- or 10-bit uncompressed 4:2:2 need Nattress help?
Do you recommend the Nattress Chroma Smooth/Sharpen filters just for FCS Color or also for FCP? (I’m still using FCP 5.1.4 and don’t have Color.)
I can’t grasp how the Nattress filter can be applied before the FCP/Quicktime/Compressor conversion or after it. If the filter is used first, then the DV isn’t DV anymore for the FCP/Quicktime/Compressor conversion. If the filter is used second, then the 4:2:2 is already done and the Nattress filter has nothing to do. Do Nattress filters work in conjunction with FCP/Quicktime/Compressor? That would be nifty. Mr. Nattress would have to know FCP inside out.
-
Rafael, the next stage of the experiment gave happy results.
Compressor was used to make an .m2v from each of the 6 Quicktimes described in the first post.
MPEG Streamclip was used to play the .m2v’s.
Grab and Corel Paint were used as before. Below is a table of results.
First column is the original bitmap.
Second column is the m2v made from the DV Quicktime.
Third column is the m2v made from the None Quicktime.
Fourth column is the m2v made from either the 8-bit or 10-bit uncompressed 4:2:2 Quicktimes (made directly with rendering).
Fifth column is the m2v made from either the 8-bit or 10-bit uncompressed 4:2:2 Quicktimes (made with Quicktime conversion).

BMP   DV    None  8/10  8/10 conv
0     0     0     0     0
10    9     9     9     11
30    30    29    30    30
55    55    54    55    55
90    90    89    90    90
125   125   124   125   125
160   160   158   160   160
195   193   192   193   193
225   226   224   226   224
245   243   242   243   245
255   255   255   255   255

The DV gamma boost does not occur. The m2v made from the DV is exactly the same as the m2v made from the 8- or 10-bit uncompressed Quicktimes (made directly with rendering). All six m2v’s are very like the original bitmap on the greyscale. Likewise for color tinted scales.
Thus FCP conversion of DV to uncompressed formats does not appear to introduce color or tonal compromise. The relatively washed-out look of some conversions is an artifact of the players. It’s funny that Avid diddles the DV for display even more than Quicktime does. What’s the point? To see how good the DV can look in ideal display?
-
Thanks Rafael. DV codec: what a ball of surprises. There’s the coded image and there’s a decoder to make it a pixel-by-pixel displayable form. Apparently, when DV is decoded for display it’s with a 1.1 gamma boost. When the same DV is converted to an uncompressed format this also involves decoding the DV, but without the gamma boost. So the decoder includes a “purpose” switch. What happens when you decode the DV in the process of making an mpeg2? Tomorrow’s experiment.
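If the 1.1 boost is a simple power-law lift applied at decode (my assumption about what “gamma boost” means here; the actual transfer function isn’t documented in this thread), its effect on the 8-bit test levels can be sketched:

```python
# Sketch: what a 1.1 gamma boost at decode would do to 8-bit levels,
# ASSUMING the boost means v' = 255 * (v/255)**(1/1.1).
# That transfer function is a guess at what "1.1 gamma boost" means here.

def boost(v, gamma=1.1):
    """Apply a power-law gamma lift to an 8-bit level."""
    return round(255 * (v / 255) ** (1 / gamma))

for v in (0, 10, 30, 55, 90, 125, 160, 195, 225, 245, 255):
    print(f"{v:3d} -> {boost(v):3d}")
```

Black and white stay pinned at 0 and 255 while the mid-tones lift, which would produce exactly the lighter, slightly washed-out look seen in the display comparisons.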
-
Dennis Couzin
February 23, 2009 at 2:07 pm in reply to: an arithmetic problem about uncompressed 4:2:2
DRW, as I wrote, the answer turned out not to be interesting. Please try to appreciate how an outsider — experienced in image technology but not in video — sees the swarm of competing codecs and their messy incomplete descriptions.
-
Dennis Couzin
February 22, 2009 at 4:16 am in reply to: an arithmetic problem about uncompressed 4:2:2
I’ve learned the answer to the arithmetic problem. Not interesting; it has to do with computers, not video. The uncompressed 10-bit 4:2:2 video is being stored using 32-bit data units to hold 30 bits of information. Thus 1/16 of the file size is waste. The pixels with just Y data must be grouped in threes.
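The packing arithmetic checks out. Assuming a v210-style layout (my guess at which packing the container uses), three 10-bit samples fill 30 bits of each 32-bit word, leaving 2 unused bits per word:

```python
# Sketch: storage overhead of packing 10-bit samples into 32-bit words,
# as in a v210-style layout (an assumption about the container's packing).
bits_per_component = 10
components_per_word = 3          # three 10-bit samples per 32-bit word
word_bits = 32

used = bits_per_component * components_per_word   # 30 payload bits per word
wasted_fraction = (word_bits - used) / word_bits  # 2 of every 32 bits

print(f"payload bits per word: {used}")
print(f"wasted fraction: {wasted_fraction}")      # 2/32 = 1/16 of the file
```

This also explains the grouping in threes: luma-only samples must be bundled three to a word to fill the 30 payload bits.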
-
Thanks Rafael Amador for the link to onerivermedia. So codec “None” has a respectable history. It was used with large color depths. I wonder if the writers of Compressor added the several small color depth realizations of “None” as novelties. Someone certainly screwed up the 4-bit color realization, as detailed in my original post.