funny codec named “None”
Drop any movie into Compressor and pick one of the Advanced Format Conversions.
Open the Inspector for this job and go to Settings / Compression Type.
There is one choice called “None”. Choose that.
Now you can set “Depth” to any of 9 descriptors, from “Black and White” to “Millions of Colors +”.
And you can set “Quality” on a continuous scale from “Least” to “Best”.
The Inspector shows what these settings do, more or less. It reports the following pixel depth values for the “Depth” choices:
Black and White: 1
4 Grays: 34
4 Colors: 2
16 Grays: 36
16 Colors: 4
256 Grays: 40
256 Colors: 8
Millions of Colors: 24
Millions of Colors +: 32

Obviously 4 Grays is 2 bits. It seems Apple encodes the pixel depth for the grayscale choices by adding 32 to the true value. OK, so they have abused the term “pixel depth”; for consistency they should call Black and White 33 bits deep.
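The apparent rule can be written down in a few lines of Python. This is a sketch of my reading of the Inspector values; the +32 offset is an inference, though it matches an old QuickDraw convention for grayscale depths:

```python
GRAY_OFFSET = 32  # inferred offset that flags grayscale modes

def decode_depth(reported: int) -> tuple[int, bool]:
    """Return (true bits per pixel, is_grayscale) for an Inspector value."""
    if reported > GRAY_OFFSET:
        return reported - GRAY_OFFSET, True
    return reported, False

# The Inspector's values from the list above:
for name, reported in [("4 Grays", 34), ("16 Grays", 36),
                       ("256 Grays", 40), ("Millions of Colors", 24)]:
    print(name, decode_depth(reported))

# Note the inconsistency: "Black and White" reports 1, not 33,
# so this rule misclassifies it as a color mode.
print("Black and White", decode_depth(1))
```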
You might be curious which 4 colors you get with that choice. It turns out that Compressor makes a 4-graytone image when you choose “4 Colors”, exactly the same as “4 Grays”.
“16 Colors” is, however, very interesting. If you set Quality on the low side, you will see about 10 colors, plus about 4 gray tones, plus black and white. Those are the 16 colors. Quite weirdly, as a white title fades out it goes through the 4 grays, then a light brown, then black. That light brown taking the place of the darkest gray must be a coding error. (Actually, several of the 10 colors are wildly far from what they replace. This is not a serious attempt at 4-bit color.)
Using the same “16 Colors” setting with Quality set on the high side gives a completely different result. There appear to be hundreds of realistic colors and grays. If you look closer, there are still just the same 16, but arranged in small clumps of identical pixels. So with codec “None”, the Quality control, which according to the Inspector is spatial quality on a scale from 0 to 100, is actually put into the service of reproducing color. If you think about it, almost all colors could be reproduced using just red, green, blue, and black clumps of variable size: millions of colors achieved with just 2-bit pixel depth, but lots of pixels. If this is cheating, then what is codec “None” doing?
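The trick in the high-Quality rendering can be sketched with ordinary error-diffusion dithering. This is my guess at the mechanism, not anything Apple documents: each pixel is snapped to the nearest palette entry and the leftover error is pushed to a neighbor, so clumps of palette pixels average out to the target tone.

```python
def nearest(palette, v):
    """Pick the palette entry closest to value v."""
    return min(palette, key=lambda p: abs(p - v))

def dither_row(row, palette):
    """1-D error-diffusion dither of one grayscale row."""
    row = list(row)
    out = []
    for i, v in enumerate(row):
        q = nearest(palette, v)
        out.append(q)
        if i + 1 < len(row):
            row[i + 1] += v - q  # push the quantization error rightward
    return out

# Four gray levels standing in for a tiny palette:
palette = [0, 85, 170, 255]
row = [120] * 8  # a flat tone between two palette entries
print(dither_row(row, palette))  # a mix of 85s and 170s averaging near 120
```

The output uses only the palette values, yet its local average approximates the original 120: exactly the “small clumps of identical pixels” effect.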
Another oddity with the “16 Colors” setting is that the (spatial) Quality slider yields exactly the same result (10 recognizable colors) from 0 through 49. At 50 it jumps to the pointillist color (looking like hundreds of colors), and this stays exactly the same up to 100. Thus the continuous 0-100 scale is really a single binary step. Whose idea was that?
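For the record, the observed mapping from slider to behavior fits in two lines. This is a sketch of the observed behavior, not Apple’s actual code:

```python
def color_rendering(quality: int) -> str:
    """Observed behavior of the Quality slider at the "16 Colors" depth:
    one threshold at 50, despite the continuous 0-100 scale."""
    return "flat 10-color" if quality < 50 else "pointillist"
```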
The files for the two pictures have exactly the same size, because they have the same number of pixels and 4 bits for each. The number checks exactly: it’s a 30-second PAL video, so 30 * 25 * 720 * 576 * 4 = 1244160000 bits = 155520000 bytes = 148.3 MB. Codec “None” is 4:4:4.
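The same arithmetic, spelled out:

```python
# Size check for a 30-second PAL clip at the "16 Colors" depth:
SECONDS, FPS = 30, 25        # PAL frame rate
WIDTH, HEIGHT = 720, 576     # PAL frame dimensions
BITS_PER_PIXEL = 4           # "16 Colors", no chroma subsampling

bits = SECONDS * FPS * WIDTH * HEIGHT * BITS_PER_PIXEL
nbytes = bits // 8
print(bits, nbytes, round(nbytes / 2**20, 1))  # 1244160000 155520000 148.3
```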
Codec “None” seems to behave OK for 256 Grays, 256 Colors, and Millions of Colors. The latter is presumably 8 bits each for Y, U, and V.
The great unknown is the 32-bit thingie at the end of the list. What does “Millions of Colors +”, more marketing than scientific language, mean? Does it mean more colors than “Millions of Colors” gives, or the same colors plus some non-color bonus?
There’s internet speculation that 32-bit color is simply 24-bit color with 8 bits of other stuff. I don’t buy this. Who knows how codec “None” understands 32-bit color?