OK, if by "32 bit color" Apple means 24-bit R,G,B color plus an 8-bit alpha channel, I have some questions.
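To pin down what I'm assuming, here's a minimal sketch of 24-bit R,G,B plus 8-bit alpha packed into one 32-bit pixel (the ARGB ordering is just my guess for illustration, not something I've confirmed against Apple's docs):

    def pack_argb(a, r, g, b):
        # Pack four 8-bit channels into one 32-bit pixel (assumed ARGB order).
        return (a << 24) | (r << 16) | (g << 8) | b

    def unpack_argb(pixel):
        # Split a 32-bit pixel back into its 8-bit A, R, G, B channels.
        return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
                (pixel >> 8) & 0xFF, pixel & 0xFF)

    opaque_red = pack_argb(255, 255, 0, 0)
    print(hex(opaque_red))          # 0xffff0000
    print(unpack_argb(opaque_red))  # (255, 255, 0, 0)

Only 24 of those 32 bits describe color; the other 8 are transparency.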
Doesn’t 10-bit uncompressed 4:2:2 video, which carries 10 bits for each Y, U, and V sample (with the U and V samples shared by pairs of pixels), produce better color (or more colors) than 32-bit color?
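Rough arithmetic, assuming 8 bits per R,G,B channel on one side and 10-bit 4:2:2 Y,U,V on the other (Python just to make the counting explicit):

    # 8-bit RGB: 8 bits per channel, 3 color channels per pixel
    rgb_color_bits = 3 * 8                 # 24 bits of color per pixel
    rgb_colors = 2 ** rgb_color_bits       # 16,777,216 distinct colors

    # 10-bit 4:2:2: every pixel gets its own Y sample, but each U and V
    # sample is shared by two horizontally adjacent pixels
    yuv_bits_per_pixel = 10 + 10 / 2 + 10 / 2   # 20 bits stored per pixel
    yuv_codes = 2 ** (3 * 10)                   # 1,073,741,824 codes per Y,U,V triple

    print(rgb_colors, yuv_bits_per_pixel, yuv_codes)

So 4:2:2 actually stores fewer bits per pixel than 32-bit RGB-plus-alpha, but each channel has four times the tonal precision, which is why I'd expect it to look better.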
Alpha masks are only used in effects, so why does codec “None” offer 8 bits per pixel for alpha masking when almost no video uses alpha masking? And why doesn’t codec “None” offer standard 1-bit alpha masking?
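For comparison, here's what the extra alpha bits buy in the usual "over" composite (a quick sketch with straight, non-premultiplied alpha; whether QuickTime premultiplies is a separate question I'm not addressing):

    def over(fg, bg, alpha):
        # Composite one channel of foreground over background; alpha in 0..255.
        a = alpha / 255.0
        return round(fg * a + bg * (1.0 - a))

    # 1-bit alpha only allows the two extremes:
    print(over(200, 50, 255))   # 200 -> pixel is entirely foreground
    print(over(200, 50, 0))     # 50  -> pixel is entirely background

    # 8-bit alpha allows 254 in-between blends, e.g. for soft/feathered edges:
    print(over(200, 50, 128))   # 125 -> roughly half foreground, half background

Whether that soft-edge capability belongs in a plain uncompressed codec is exactly what I'm questioning.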
Video is normally coded as Y,U,V, which is what makes chroma subsampling so effective. Perhaps codec “None” video can’t be subsampled. Are all the other codec “None” depth settings coding R,G,B? This matters, since 24-bit depth split 8 bits each across R,G,B yields a different set of colors (though the same number of them) than 24-bit depth split 8 bits each across Y,U,V.
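To make that last point concrete, here's the forward conversion under one common matrix (full-range Rec. 601, which I'm only assuming for illustration; Apple may use something else):

    def clamp8(x):
        return max(0, min(255, round(x)))

    def rgb_to_yuv_601(r, g, b):
        # Full-range Rec. 601 RGB -> YUV, all channels nominally 0..255.
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
        v = 0.5 * r - 0.418688 * g - 0.081312 * b + 128
        return clamp8(y), clamp8(u), clamp8(v)

    print(rgb_to_yuv_601(255, 0, 0))   # pure red   -> (76, 85, 255)
    print(rgb_to_yuv_601(0, 255, 0))   # pure green -> (150, 44, 21)

Both layouts have 2^24 code combinations, but the axes differ: plenty of 8-bit Y,U,V triples convert to R,G,B values outside 0-255 and so represent nothing an RGB frame can hold, which is why the two 24-bit schemes cover different sets of colors.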
I expected Apple’s codec “None” to be something fundamental. From the details in the original post, it seems like something amateurish and screwy.