rendering settings in Vegas for 30p video for BD authoring
Dave Haynie
January 25, 2011 at 10:39 pm
Well, there are two things at work here. You have an uncompressed 600GB AVI from something, but I don't think that was your original video. What you capture in camera is the best it'll ever be, in some sense. Yes, you can use post-processing magic to fix problems, but you pay a price for anything beyond small tweaks... the price could be resolution (de-noising algorithms all damage resolution), or color resolution or accuracy. That's not necessarily a problem... the goal is the best final product, not the most accurate or natural one, in nearly every case.
But you can't add information. Taking a 25Mb/s HDV video up to a gigantic uncompressed file does absolutely nothing for you... it looks exactly the same. You can stave off the effects of repeated editing, however. If you go directly from your camcorder video to a 4:4:4 or 4:2:2 format, particularly something that's lossless or well proven for repeated encodings, like Cineform, you can improve the final product. But what you're doing there is eliminating loss, not adding something you didn't originally capture.
The MPEG algorithms use a bunch of cool magic tricks to toss out stuff we don't care about and to exploit redundancy in video. Most do color subsampling. A full RGB capture has 24 bits for every pixel. But each human eye has about six million color-sensing receptors (cones) and 120 million luma-sensing receptors (rods). We care about color, but nowhere near as much as we care about luma. So most camcorders record with 4:2:0 or 4:1:1 subsampling... in short, they keep every luma sample, but toss out three of every four color samples.
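To put numbers on those subsampling schemes, here's a quick back-of-the-envelope calculation (my own illustration, not from any camcorder spec) of the average bits per pixel each scheme needs at 8 bits per sample:

```python
def bits_per_pixel(j, a, b, depth=8):
    """Average bits per pixel for a J:a:b chroma subsampling scheme.

    In J:a:b notation, a J-pixel-wide, two-row region carries J luma
    samples per row, `a` chroma (Cb+Cr) sample pairs in the first row
    and `b` in the second. Luma is always full resolution; only the
    chroma is thinned out.
    """
    luma = depth                       # one luma sample per pixel
    chroma_pairs = (a + b) / (2 * j)   # Cb+Cr pairs per pixel
    return luma + chroma_pairs * 2 * depth

print(bits_per_pixel(4, 4, 4))  # 4:4:4 -> 24.0 bits/pixel, same data as full RGB
print(bits_per_pixel(4, 2, 2))  # 4:2:2 -> 16.0, two-thirds the data
print(bits_per_pixel(4, 2, 0))  # 4:2:0 -> 12.0, half the data
print(bits_per_pixel(4, 1, 1))  # 4:1:1 -> 12.0, half the data
```

Note that 4:2:0 and 4:1:1 both keep one chroma pair for every four pixels (they just arrange the surviving samples differently), which is the "toss out three of every four color samples" above.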
Then there are the MPEG algorithms themselves. If you take a photo and blur it just a little, you may not notice the change, and yet you have reduced the information content. There's a mathematical operation, fully reversible and lossless (in pure math, anyway), called a Fourier transform: any finite sample of pixel data, audio data, etc. can be represented precisely as a map of the frequencies it contains. MPEG uses a related function, the discrete cosine transform, to represent every point in a two-dimensional matrix (e.g., a photo or video frame) in terms of frequency.
Once you have a frequency matrix, you can very intelligently filter out just the high-frequency stuff. That's the lossy part of MPEG. It eliminates the parts the eye will not notice as readily... obviously, with too much filtering (too much compression), it starts to fall apart. This is the same thing JPEG and DV do. Regular DV25 is about 5:1 compression.
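The transform-then-filter idea can be sketched in a few lines of Python. This is a toy 1-D version: the real codecs work on 8x8 blocks in two dimensions and quantize the coefficients rather than simply zeroing them, but the principle is the same:

```python
import math

def dct(x):
    """Type-II discrete cosine transform of a 1-D block of samples."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) * (2 / n)
            for k in range(n)]

def idct(c):
    """Inverse (Type-III) DCT: rebuild the samples from the coefficients."""
    n = len(c)
    return [c[0] / 2 + sum(c[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                           for k in range(1, n))
            for i in range(n)]

samples = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel values
coeffs = dct(samples)

# Kept in full, the transform is reversible: idct(coeffs) gives the
# samples back. The lossy step throws away the high frequencies:
filtered = coeffs[:3] + [0.0] * 5            # keep only the 3 lowest frequencies
approx = idct(filtered)                      # close to the original, but not identical
```

The reconstructed row still follows the broad ramp of the original values, but the fine wiggles are gone... and that discarded detail is exactly the information the encoder no longer has to store.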
The other thing that happens is interframe compression. The algorithm breaks video up into "groups of pictures", or GOPs. In most (but not all) video, each frame is pretty similar to the one before or after it. So in each group, the first frame, called an I-frame, gets just that JPEG-type compression. The remaining frames (an MPEG-2 GOP is often 15 frames; in AVC it can be hundreds) are not stored as full frames. Various algorithms compute just the differences between each frame and the next (and sometimes the previous as well). This is why MPEG-compressed video is so much smaller, yet still looks very much the same.
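Here's a toy sketch of the GOP idea. Real MPEG does motion-compensated prediction on macroblocks, not per-pixel deltas, so treat this only as an illustration of "store the first frame whole, then store only what changed":

```python
def encode_gop(frames):
    """Encode a list of frames (flat pixel lists) as one I-frame plus deltas."""
    i_frame = frames[0]
    deltas = []
    for prev, cur in zip(frames, frames[1:]):
        # Record (index, new_value) only where this frame differs from the last.
        deltas.append([(i, c) for i, (p, c) in enumerate(zip(prev, cur)) if p != c])
    return i_frame, deltas

def decode_gop(i_frame, deltas):
    """Rebuild every frame by applying each delta to the previous frame."""
    frames = [list(i_frame)]
    for delta in deltas:
        frame = list(frames[-1])
        for i, v in delta:
            frame[i] = v
        frames.append(frame)
    return frames

# Three 16-pixel "frames" where only a pixel or two changes each time:
frames = [
    [10] * 16,
    [10] * 15 + [99],
    [10] * 14 + [50, 77],
]
i_frame, deltas = encode_gop(frames)
assert decode_gop(i_frame, deltas) == frames

stored = len(i_frame) + sum(len(d) for d in deltas)  # 16 + 1 + 2 = 19 entries
raw = sum(len(f) for f in frames)                    # 48 pixel values
```

Even this crude scheme stores 19 entries instead of 48 pixels, and the savings grow with every extra frame in the group... which is exactly why long-GOP formats like HDV and AVCHD fit so much video into so few megabits.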
-Dave