Sony Vegas Pro 12 and going between T3i and 5D Mark III
Posted by Michael Gibrall on September 20, 2013 at 12:30 am

Hello all.
When rendering a film that has footage from both a Canon 5D Mark III and a Canon T3i, I find it funny that it takes longer to render the T3i footage.
Anyone have any idea why this would be the case?
Thanks in advance.
1 Reply
Dave Haynie
September 20, 2013 at 2:57 pm

Depends on the format used for recording.
The Canon 5D Mk III, like my 6D, has the option of either IPB or “All-I” recording. All-I is essentially AVC-Intra and records at about 100 Mb/s. It decodes much faster, since each frame is entirely independent of the next. If you use IPB mode instead, you’ll get the usual ~40 Mb/s video you expect from a Canon, but it’ll take longer to decode than video from your T3i.
The T3i uses IP (sometimes written IPP) AVC encoding at about 40 Mb/s. That will take longer to decode than AVC-Intra.
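To make the decode-cost difference concrete, here’s a toy model of the three GOP styles mentioned above. The per-frame costs are my own illustrative assumptions, not measured numbers: an I-frame decodes on its own, a P-frame adds motion compensation on top of one reference, and a B-frame needs two references.

```python
# Toy model: relative decode work per frame for All-I, IPP, and IPB GOPs.
# Costs are assumed for illustration, not measured from any real decoder.
COST = {"I": 1, "P": 2, "B": 3}

def gop_cost(pattern: str) -> float:
    """Average per-frame decode cost for one GOP pattern."""
    return sum(COST[f] for f in pattern) / len(pattern)

all_i = "I" * 15                 # All-I / AVC-Intra: every frame independent
ipp   = "I" + "P" * 14           # T3i-style IP/IPP GOP
ipb   = "I" + "BBP" * 4 + "BB"   # IPB GOP with B-frames, like the 5D Mk III

for name, pat in [("All-I", all_i), ("IPP", ipp), ("IPB", ipb)]:
    print(f"{name:5s} avg cost/frame: {gop_cost(pat):.2f}")
```

Whatever the exact numbers, the ordering comes out the way Dave describes: All-I is the cheapest to decode, IPP sits in the middle, and IPB with B-frames costs the most per frame.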
So here’s the story; this is simplified, and it covers pretty much any MPEG-family video CODEC. Back in the dark old days of DV, you just had the DV CODEC. That was, itself, a formalization of “Motion JPEG”: basically, each frame was encoded as a JPEG image, with some additional special sauce for dealing with interlacing in an effective way. You can imagine shooting 24 JPEGs per second with either Canon, and you’ll get the idea.
HDV moved to MPEG-2, which uses both intraframe and interframe compression. It specifies an I-Frame (“intra” frame, meaning it’s independent of its neighbors), which is pretty similar to that original JPEG idea. That’s compression within the frame; regular DV25 ran about 5:1, and MPEG-2 in HDV goes a bit stronger. But consider: DV was 25 Mb/s, while DVD, which looks just as good, typically runs around 6-8 Mb/s. That’s what the interframe compression buys you.
So after that I-Frame, let’s do something different. Since it’s pretty common for most of one frame to be in the next frame, let’s figure out the difference between the next frame and the one you just encoded as an I-Frame. Using a motion search algorithm, I can produce a set of vectors, which tell me pretty accurately where each chunk of one frame went in the next frame. So then, apply those vectors to the first image, subtract the second image, and you get a very strange “difference” image, which is basically just the error between the vector-predicted image and the actual one. We store those vectors and that difference image, which is highly compressible, in a new frame type, dubbed a “P-Frame”, for “predicted”. The final kind of frame is a B-Frame, “B” standing for “bidirectional”: a frame that can use details from either the previous or the next reference frame as the basis for compression. The earlier Canons didn’t use B-Frames; DVD and Blu-ray do, as do the newer Canon models.
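The P-Frame round trip described above can be sketched in a few lines. This is a one-dimensional toy to keep it readable: an exhaustive search picks a motion vector per block, we store vector plus residual, and the decoder reconstructs the frame exactly. Real codecs work on 2D macroblocks with sub-pixel search and then compress the residual; the names and block sizes here are my own, not from any codec spec.

```python
# Toy 1D motion-compensated prediction: encode a "frame" as per-block
# motion vectors plus residuals against a reference frame.
BLOCK = 4    # samples per block (assumed for the sketch)
SEARCH = 3   # motion search range, in samples

def encode_p(ref, cur):
    """Return (vectors, residuals) describing cur relative to ref."""
    vectors, residuals = [], []
    for start in range(0, len(cur), BLOCK):
        block = cur[start:start + BLOCK]
        best_v, best_err = 0, None
        for v in range(-SEARCH, SEARCH + 1):
            s = start + v
            if s < 0 or s + BLOCK > len(ref):
                continue  # candidate position falls outside the reference
            cand = ref[s:s + BLOCK]
            err = sum(abs(a - b) for a, b in zip(block, cand))  # SAD metric
            if best_err is None or err < best_err:
                best_v, best_err = v, err
        pred = ref[start + best_v:start + best_v + BLOCK]
        vectors.append(best_v)
        # Residual is near-zero where the match is good, so it compresses well.
        residuals.append([a - b for a, b in zip(block, pred)])
    return vectors, residuals

def decode_p(ref, vectors, residuals):
    """Rebuild the frame: apply each vector to ref, then add the residual."""
    out = []
    for i, (v, res) in enumerate(zip(vectors, residuals)):
        start = i * BLOCK
        pred = ref[start + v:start + v + BLOCK]
        out.extend(p + r for p, r in zip(pred, res))
    return out

ref = [10, 10, 20, 30, 40, 50, 50, 50, 60, 70, 80, 80]
cur = ref[1:] + [80]                # the whole scene shifted left by one sample
v, r = encode_p(ref, cur)
assert decode_p(ref, v, r) == cur   # residuals make the round trip exact
```

Note how the extra decode work shows up: the P-Frame decoder has to fetch the reference, apply the vectors, and add the residual, where an I-Frame decoder would just decode the frame by itself.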
So, the decoding job when it’s I-Frame only is easy: every frame stands on its own, essentially just a JPEG decode. For a P-Frame, you have to take the previous frame, apply the motion vectors, decompress the error frame, then add it to the vectored frame. Much more work, and thus much slower decompression.
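One consequence of B-Frames worth spelling out: since a B-Frame can borrow from the *next* reference frame, the stream has to store frames in decode order rather than display order, which adds bookkeeping on top of the per-frame work. A toy reordering for an IBBP pattern (illustrative only; real codecs signal this with decode/presentation timestamps rather than a rule like this):

```python
# Toy reordering: each reference (I/P) must be decoded before the
# B-frames that were displayed ahead of it.
display = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]

def decode_order(frames):
    """Move each reference (I/P) ahead of the B-frames that depend on it."""
    out, pending_b = [], []
    for f in frames:
        if f[0] == "B":
            pending_b.append(f)     # hold B-frames until their next reference
        else:
            out.append(f)           # reference decodes first...
            out.extend(pending_b)   # ...then the B-frames that needed it
            pending_b.clear()
    return out + pending_b

print(decode_order(display))
# The decoder sees I0, P3, B1, B2, P6, B4, B5 and reorders for display.
```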
And AVC itself adds a bunch of extra small details, in its “JPEG” equivalent for I-Frames, in the analysis and application of vectors, and so on, which lead to better images but slower encoding and decoding.
-Dave