Hey Levan,
The ensuing discussion left me more confused than your straightforward question did.
The short answer, good for 95% of cases, is: 32-bit floating point (video levels).
Why not 8-bit:
- Quality-wise, there's much less risk of banding from color manipulation (see the sketch after this list).
- Speed-wise, it's 2017 (and this has been true for years): rendering is done on the GPU, and even CPU rendering is fast these days.
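To make the banding point concrete, here's a minimal NumPy sketch. The darken-then-brighten pair is just a stand-in for any chain of color corrections; the function names are mine, not anything from Vegas:

```python
import numpy as np

# A smooth 8-bit gradient, like a sky: 256 distinct levels.
src = np.arange(256, dtype=np.float64) / 255.0

def darken(x):   return x * 0.5                       # e.g. an exposure pull...
def brighten(x): return np.clip(x * 2.0, 0.0, 1.0)    # ...then a push back up

def quant8(x):
    """Snap to the nearest of 256 levels, as an 8-bit pipeline must."""
    return np.round(x * 255.0) / 255.0

# 8-bit pipeline: every intermediate result is quantized to 8 bits.
out8 = quant8(brighten(quant8(darken(quant8(src)))))

# 32-bit float pipeline: same operations, quantized only once at final render.
outf = quant8(brighten(darken(src)))

print("distinct levels, 8-bit pipeline:", np.unique(out8).size)  # 129 -> banding
print("distinct levels, float pipeline:", np.unique(outf).size)  # 256 -> smooth
```

Half the levels are gone after one round trip in 8-bit; in float, nothing is lost until the final render.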
Why not 32-bit full range:
Lots of legacy reasons. Many DSLRs and recorders will even tag the video as full range in the container, but the data is actually video levels. How to check: in the Vegas top menu, pick View -> Window -> Video Scopes and select Histogram. It charts the output brightness levels of the current frame from darkest (left) to brightest (right). For a normal-contrast scene (not dominated by highlights or shadows), switch between video levels and full range and watch the histogram. If it's completely pegged at both ends, you've got the wrong 32-bit mode.
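For a sense of what's going on underneath, here's a rough NumPy sketch of the 8-bit level math (roughly speaking, the float modes follow the same convention, just scaled to 0..1; the function names are mine):

```python
import numpy as np

# 8-bit luma: "video levels" puts black at 16 and white at 235;
# "full range" uses the whole 0..255.
def video_to_full(y):
    """Expand studio-range luma (16..235) to full range (0..255)."""
    return np.clip((y.astype(np.float64) - 16.0) * 255.0 / 219.0, 0, 255)

def full_to_video(y):
    """Compress full-range luma (0..255) into studio range (16..235)."""
    return y.astype(np.float64) * 219.0 / 255.0 + 16.0

# Full-range footage misread as video levels gets expanded anyway,
# so everything below 16 and above 235 clips -- and the histogram
# pegs at both ends, exactly the symptom described above.
full = np.arange(256, dtype=np.uint8)            # a full-range gradient
wrong = video_to_full(full)                      # decoder assumes video levels
print("clipped at black:", np.sum(wrong == 0))   # 17 input levels crushed to 0
print("clipped at white:", np.sum(wrong == 255)) # 21 input levels blown to 255
```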
What does the 95% cover:
DSLR- or digital-camcorder-originated video, edited and played back on a computer, a mobile device, a flat-panel TV, or a digital projector (directly or via YouTube/Vimeo).
What the remaining 5% might cover:
HDR video (likely full range, with an ACES RRT to sRGB, but again, check the histogram); raw video (RED, CineAlta, and the like); analog tape, CRTs, and film.
Al
P.S. If you hear you should not go somewhere, or not concern yourself with something, that's your hint about exactly where to go and what to investigate further. 🙂