VEGAS Pro Forums

What is the difference between 32-bit (video levels) and 32-bit (full range)?
Alex Ka replied 9 years, 1 month ago 4 Members · 16 Replies
-
Levan Katsadze
September 26, 2014 at 7:47 pm

[Norman Black] “If you watch your final rendered video on TV or computer or basically anywhere, it will be in an 8-bit video format. Your PC video card and monitor are probably incapable of 10+ bit display.”
Yes, maybe the final render will be in an 8-bit format, but this video shows that even if the final is in an 8-bit codec, 32-bit editing gives us much more precise and much better results.
https://www.lynda.com/home/Player.aspx?lpk4=30903
-
Norman Black
September 26, 2014 at 7:59 pm

Yes, I watched that video. Have fun with your 32-bit.
It has its applications, but they are very few without high-bit-depth or HDR input. Unless you do something exactly like the second example, what are you getting out of 32-bit? You can actually get some of that in Vegas without 32-bit: in 8-bit mode, Vegas allows video values outside the “legal” video levels range (aka full range), and you can bring them back into the video levels range.
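To illustrate the idea of pulling out-of-range values back into the legal range, here is a minimal numpy sketch (the sample values and the scaling are hypothetical, not anything Vegas does internally):

```python
import numpy as np

# Hypothetical 8-bit luma samples with "superwhites" above the legal 235.
y = np.array([200.0, 235.0, 240.0, 250.0])

# If the out-of-range values are clipped first, the highlight detail is gone:
clipped = np.clip(y, 16, 235)

# But while they still exist on the timeline, a levels-style gain can
# pull them back inside the 16-235 video levels range before rendering:
recovered = (y - 16) * (235 - 16) / (250 - 16) + 16

print(clipped)    # the three brightest samples collapse to 235
print(recovered)  # all four samples stay distinct and legal
```

The point of the sketch: clipping is destructive, while rescaling before render keeps the highlight detail.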
Sophisticated compositing, of certain things, is an application for 32-bit. Is that what you are going to do in Vegas?
-
Levan Katsadze
September 26, 2014 at 9:12 pm

[Norman Black] “Sophisticated compositing, of certain things, is an application for 32-bit. Is that what you are going to do in Vegas?”
Well, why not? Maybe not now, but I actually want to learn things deeply, because I love video editing and the digital industry in general. Maybe it would be better to go to a special school of video production, but in my country there is no such thing. My country, Georgia, has only just started making steps into the digital industry, so the internet is my best friend when I sit at the computer.
-
Levan Katsadze
November 20, 2014 at 10:52 am

And I have just one more question. It is actually technical, and maybe not important, but please tell me: why don’t most TVs and many display monitors support 0-15 and 235-255 video playback? Why do they support only 16-235? What is the point of it?
What is the point of any technical limitation? I understand that if technology is not advanced enough to make something, then it is not made. But I know that the technology is already advanced and manufacturers can make displays that support full video levels, so I don’t understand why they don’t. Why do they limit it? What’s the point of it?
-
John Rofrano
November 20, 2014 at 11:55 am

[Levan Katsadze] “I understand, that if technology is not advanced enough to make something, then it is not made, but I know that technology is already advanced and manufacturers can make displays, that support full video levels, so I don’t understand, why they don’t make it? Why they limit it? What’s the point of it?”
The point is backward compatibility. The TV specification hasn’t changed much since the 1920s when it was invented. Color TV was constrained by what black & white TVs could display, so that people with B&W TVs could watch color broadcasts too. That’s the same reason all broadcast is still interlaced even though most TVs don’t require it. Backward compatibility has held the entire broadcast industry back for almost 100 years now.
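For concreteness, here is the arithmetic the 16-235 convention implies for 8-bit luma (a rough sketch of the BT.601/709-style scaling, ignoring chroma and rounding subtleties):

```python
def studio_to_full(y):
    """Expand studio-range luma (16-235) to full range (0-255)."""
    return min(255, max(0, round((y - 16) * 255 / 219)))

def full_to_studio(y):
    """Compress full-range luma (0-255) into studio range (16-235)."""
    return round(y * 219 / 255 + 16)

print(studio_to_full(16), studio_to_full(235))  # 0 255
print(full_to_studio(0), full_to_studio(255))   # 16 235
```

The reserved codes below 16 and above 235 are the headroom and footroom the broadcast spec keeps for sync and overshoot, which is exactly the legacy being discussed here.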
~jr
http://www.johnrofrano.com
http://www.vasst.com
-
Alex Ka
April 4, 2017 at 7:53 pm

Hey Levan,
I felt more confused reading the ensuing discussion than by your straightforward question.
The short answer, good for 95% of cases, is: 32-bit floating point (video levels).
Why not 8-bit:
- Quality-wise: much less risk of banding as a result of color manipulation.
- Speed-wise: it’s 2017 (and this has been true for years); rendering is done on the GPU, and even CPUs are fast.
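The banding point can be demonstrated with a toy gradient (a sketch, not the actual Vegas pipeline): quantizing an intermediate result to 8 bits destroys levels that a float pipeline keeps.

```python
import numpy as np

grad = np.arange(256, dtype=np.float64)   # toy 0-255 gradient

# 8-bit-style pipeline: darken heavily, store the intermediate as
# uint8 (quantizing it), then brighten back up.
dark8 = (grad / 16).astype(np.uint8)
back8 = dark8.astype(np.float64) * 16

# Float pipeline: same two operations, no intermediate quantization.
back_f = (grad / 16) * 16

print(len(np.unique(back8)))   # 16 distinct levels left -> visible banding
print(len(np.unique(back_f)))  # all 256 levels preserved
```

This is why stacking several color-correction FX in 8-bit can band even though the final delivery codec is 8-bit anyway: the damage happens in the intermediates.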
Why not 32-bit full range:
Lots of legacy reasons. Many DSLRs and recorders will even tag the video as full range in the container while the data is actually video levels.

How can you check: in the Vegas top menu, pick View -> Window -> Video Scopes, then select Histogram. It charts the output brightness levels in the current frame from darkest (left) to brightest (right). For a normal-contrast scene (not dominated by highlights or shadows), switch between video levels and full range and observe the histogram. If it’s completely pegged at both ends, you’ve got the wrong 32-bit mode.

What does the 95% cover:
DSLR- or digital-camcorder-originated video, edited and played on a computer, mobile device, flat-panel TV, or digital projector (directly or via YouTube/Vimeo).
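That histogram check can be approximated in code (a heuristic sketch; the 10% threshold and the simulated frames are made up for illustration):

```python
import numpy as np

def looks_pegged(luma, frac=0.10):
    """Heuristic: if a large share of pixels sits hard at 0 or 255,
    the footage was probably decoded with the wrong range."""
    luma = np.asarray(luma, dtype=np.float64)
    at_ends = np.mean(luma <= 0) + np.mean(luma >= 255)
    return at_ends > frac

rng = np.random.default_rng(0)
full_frame = rng.integers(0, 256, 10000)          # already full-range luma
# Misinterpreted as video levels and expanded a second time:
wrong = np.clip((full_frame - 16) * 255 / 219, 0, 255)

print(looks_pegged(full_frame))  # False: nothing unusual at the extremes
print(looks_pegged(wrong))       # True: mass piled up at both ends
```

The double-expanded frame slams everything below 16 to black and everything above 235 to white, which is the “pegged at both ends” shape described above.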
What the remaining 5% might cover:
HDR video (likely full range, with ACES RRT sRGB, but again, check the histogram); raw video (Red, CineAlta and the like); analog tape, CRTs and film.

Al
P.S. If you hear you should not go somewhere, or concern yourself with something, that’s your hint exactly where to try to go and what to want to investigate further. 🙂