CS5 Premiere Pro Export to MPEG2
-
Tim Kolb
November 30, 2010 at 5:35 am
It depends on what camera did the shooting, I suppose, as true sensor resolution is all over the place. A Varicam 27F only had 1280×720 sensors and it shot 720p only…but that has little to do with the resolution of the saved file. DVCProHD in its 720p form only stores 960×720.
If you decided to shoot with an HVX200, you'd be shooting with a camera with 960×540 sensors; using co-sited sampling, the camera creates a full frame…1920×1080 or 1280×720, depending on how you set it up…then subsamples it again to 960×720 or 1280/1440×1080 and writes the file.
In HDV you either have full raster 1280×720 with square pixels, or interlaced 1440×1080 with non-square pixels. DVCProHD doesn't really have "non-square pixels" so much as a stored frame that has 75% of the horizontal resolution it needs, and it depends on the decode process to interpolate the missing values.
The new Canons you refer to shoot XDcamHD 422…which is a full frame or “full raster” format unlike HDV in its 1080 form.
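If it helps to see the arithmetic, here is a minimal sketch of how those stored rasters map to what you actually see (the stored and displayed sizes follow the figures above; the 59.94i/50i split for DVCProHD 1080 and the pixel aspect ratio math are my additions, just displayed width divided by stored width):

```python
# Stored raster vs. displayed raster for the formats discussed above.
# Pixel aspect ratio (PAR) = displayed width / stored width at the same height.
formats = {
    "HDV 720p":                (1280, 720, 1280, 720),
    "HDV 1080i":               (1440, 1080, 1920, 1080),
    "DVCProHD 720p":           (960, 720, 1280, 720),
    "DVCProHD 1080 (59.94i)":  (1280, 1080, 1920, 1080),
    "DVCProHD 1080 (50i)":     (1440, 1080, 1920, 1080),
    "XDcamHD 422":             (1920, 1080, 1920, 1080),
}

for name, (stored_w, stored_h, disp_w, disp_h) in formats.items():
    par = disp_w / stored_w  # how much each stored pixel must stretch on playback
    print(f"{name:24} stored {stored_w}x{stored_h} -> displayed {disp_w}x{disp_h}, PAR {par:.2f}")
```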
TimK,
Director, Consultant
Kolb Productions
-
Bob Dix
November 30, 2010 at 7:25 am
Thanks Tim,
Not that much between the lot ?
Freelance Imaging & Video
AUSTRALIA
-
Tim Kolb
November 30, 2010 at 7:36 am
[Bob Dix] “Not that much between the lot ?”
Hmmm… My Midwestern USA to Australian dictionary isn’t helping me here…
🙂
Are you asking whether there's not a lot of difference between the cameras?
TimK,
Director, Consultant
Kolb Productions
-
Bob Dix
November 30, 2010 at 10:56 am
Sorry Tim,
See this quote
“The formats: HDV (High Definition Video) and AVCHD (Advanced Video Codec High Definition)
There are two distinct formats for consumer level high definition camcorders: HDV (High Definition Video) and AVCHD (Advanced Video Codec High Definition).
HDV
HDV uses the DVD-like MPEG2 compression to squeeze high definition video onto MiniDV tape used in standard definition camcorders, along with compressed stereo sound. HDV, however, captures a picture that has a resolution of 1440 x 1080, which is a bit short of the 1920 x 1080 resolution that is the real-deal high definition (not that this stops one camera company from calling their HDV ‘True-HD’, even though the video signal has only 1440 picture elements). This is because to record full resolution 1920 x 1080 on Mini DV tape would exceed the mini DV tape’s capabilities. To get around this, the 1080i HDV format uses rectangular pixels, rather than uniformly square ones, that when replayed, fill a standard 1920 x 1080 frame. This doesn’t make HDV inferior; in fact it’s in good company. The broadcast formats XDCAM HD, DVCPRO HD and HDCAM all have the same or lower resolution as HDV.
There are two main problems with HDV, though. First, tapes wear out. The magnetic material eventually wears away under repeated playing and recording. When the wear becomes significant, your video is gone. Second, getting to the place that you want is slow with tape. It has to be wound forwards and backwards, just like the VCR tape of old.
On the up side, HDV has the advantage of wide support in computer software for editing, and the tapes themselves are cheap, and each can fit 60 minutes of high definition recording.”
This is an Australian explanation of what I was alluding to.
Basically, how does footage that a camera captures at 1920 x 1080p, but that opens as 1440 x 1080i in Premiere Pro where it is edited and exported to tape, expand to 1920 x 1080i, i.e. anamorphically, to display correctly on a high definition monitor or TV in the correct 16:9 format?
That is why I said “not a lot of difference between the lot” of high definition cameras, as far as the pixels go, disregarding lens quality etc. They seem to get to the same result, only differently.
Freelance Imaging & Video
AUSTRALIA
PS: My friends in Bar Harbor, USA have the same problem with me ?
-
Jeff Pulera
November 30, 2010 at 2:40 pm
Hi Tim,
I use a formula I got from the Adobe website a while back, which figures the video bitrate (in Mbps) as 560 divided by the number of minutes; in your case that would be 560/80 = 7, and I always round down a bit to make sure there is plenty of room for menus and such.
I believe you are working with PAL, so maybe this formula would be a little different with the PAL frame rate and frame size? But it should be close enough, I think.
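For what it's worth, the rule of thumb is easy to sanity-check in a few lines of Python (just a sketch; the 560 constant and the round-down come from the formula above, while the half-megabit headroom and the 8 Mbps ceiling are my own assumptions):

```python
def dvd_video_bitrate_mbps(minutes, headroom=0.5, ceiling=8.0):
    """Rule-of-thumb MPEG-2 video bitrate (Mbps) for a single-layer DVD.

    560 / minutes approximates how many Mbps of video fit on a 4.7 GB disc;
    rounding down leaves space for menus, audio, and muxing overhead.
    """
    raw = 560.0 / minutes
    return min(raw - headroom, ceiling)

# An 80-minute program: 560 / 80 = 7 Mbps, rounded down a bit.
print(f"{dvd_video_bitrate_mbps(80):.1f} Mbps")  # -> 6.5 Mbps
```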
What data rate were you using?
Jeff Pulera
Safe Harbor
-
Tim Thompson
November 30, 2010 at 3:32 pm
I'm not using PAL. NTSC is what I am exporting for. Thanks for the info on the formula. I will be applying all the suggestions later today. I'll keep you posted.
However, no one has yet recommended a codec other than what Adobe has to offer. Any other codecs? Thoughts?
-
Tim Thompson
November 30, 2010 at 3:36 pm
Hey TimK: I am awaiting your reply to the following.
Thanks for the reply.
1) The video was shot DVCProHD.
2) I can try your suggestions. However, have you ever deinterlaced footage as you suggested and what were the results?
BTW: Are you in Minnesota? I'm in TN but was raised in Minnesota. Northern Minnesota.
-
Tim Kolb
November 30, 2010 at 4:04 pm
[Bob Dix] “That is why I said ‘not a lot of difference between the lot’ of high definition cameras, as far as the pixels go, disregarding lens quality etc. They seem to get to the same result, only differently.”
Hmmm… You’ve actually come up on the biggest source of debate in the industry since HD video started to take hold…”Which format/camera is better and why?”
First, some mitigating factors that affect the landscape:
Serial Digital Interface (SDI, SD or HD) only transmits square pixels. The typical way video comes through SDI is "baseband" (really an old analog term), or more specifically: full frame raster, uncompressed.
Analog monitors and video signals don't have pixels at all, they have scan lines. Horizontal resolution on the absolute best broadcast grade CRT may…and I say MAY…reach an absolute maximum of 1200 "horizontal lines" of perceivable resolution…and that's downhill, with the wind, on every second Thursday, and it would HAVE to be a 20″ absolute top-of-the-line CRT to get a number even close to that. (I've seen some high grade CRTs smaller than 20″ that claim 1000 horizontal lines, but that would be the maximum.) The 14″ Sony high grade HD CRTs that are still around here and there and are so prized can typically claim about 800 lines (representing 1920 digital pixels in the case of a signal that is 1920×1080).
So we have some history to address here…
At the time HDV came out, it filled a massive gap in capability. The cost reduction had to come from somewhere to make it affordable, so it's aggressively compressed MPEG2…like the quote you referenced says. It's also a sub-raster codec, period. By this I mean that HDV doesn't recreate a 1920×1080 signal on decode during standard operation…it simply decodes a 1440×1080 non-square pixel raster. This is why Canon's XH1 was such a big deal when it came out: an HDV camcorder with HDSDI out hadn't existed before. Why? Because 1440×1080 isn't square pixel when decoded to "baseband," and the HDV2 (1080) camcorders that came before it didn't have any stage in the process where the image ever WAS 1920×1080. Canon either had to have a camera head that would create a 1920×1080 signal prior to applying compression and feed the HDSDI out from that stage in the camcorder, regardless of the resolution of the camera's sensor…or they had to provide some sort of "upsample" function so that decoded 1440×1080 HDV would go out as uncompressed 1920×1080 so SDI knew what to do with it on the other end of the cable. This is no longer all that rare, but at the time it was a big deal.
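To picture what that "upsample to 1920" step amounts to, here is a bare-bones sketch of stretching a 1440-sample row out to 1920 samples (plain linear interpolation, purely illustrative; real camcorders and decoders use better resampling filters than this):

```python
def upsample_row(row, out_width):
    """Linearly interpolate one row of samples to a wider raster,
    e.g. 1440 stored luma samples -> 1920 displayed samples."""
    in_width = len(row)
    out = []
    for x in range(out_width):
        src = x * (in_width - 1) / (out_width - 1)  # map output position into the input row
        i = int(src)
        frac = src - i
        nxt = row[min(i + 1, in_width - 1)]
        out.append(row[i] * (1 - frac) + nxt * frac)
    return out

# A 1440-sample gradient becomes a 1920-sample gradient spanning the same range.
wide = upsample_row(list(range(1440)), 1920)
print(len(wide), wide[0], round(wide[-1]))  # -> 1920 0.0 1439
```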
HDcam was developed by Sony to fit into the bandwidth capabilities of the old SDTI serial standard, so that most facilities with standard definition infrastructure could handle the signal as-is. HDcam is different from DVCProHD and XDcamHD (the Blu-ray disc based format…not the "EX" HQ format) in that no independent software decoder exists for an NLE system to handle the aggressively subsampled and compressed 135 Mbit, 3:1:1 data set that resides on tape. Anyone who used HDcam used HDSDI infrastructure, and the signal was fully decoded and interpolated to 1920×1080 4:2:2 in those cases.
Apple was the first to work with DVCProHD in its "stored" form, I believe. In order to do this, the video had to enter the computer some other way than HDSDI, as the pixels had to be decoded and used as "non-square" even though the format wasn't really intended to ever be seen in that "state" when it was developed. It was originally intended to add back interpolated resolution for "baseband," full raster, square pixel playback over HDSDI…that was Panasonic's original thinking, I'd bet, just as Sony never intended HDcam to ever be manipulated in its highly compressed, stored state. DVCProHD VTRs with FireWire ports were the answer. When you couple a Mac with FCP to a DVCProHD VTR via FireWire, you are transferring that compressed video AS DATA…not as video. It's now a file transfer.
…just as HDV does over FireWire into any editing system. It's a data transfer. It's why we now use the term "ingest," whereas we old fogeys still remember "digitizing" (capturing analog into the computer and executing Avid- or Media 100-specific compression "on the fly"), or, in the case of digital video, we used to say "capture."
I suspect that XDcamHD only followed in the "data transfer" workflow steps of DVCProHD because of market demand. I suspect Sony would have kept compression "behind the curtain" of decode for HDSDI if they could have. XDcamHD on optical Blu-ray disc is a somewhat different animal than XDcamHD EX. The file wrappers are different (MXF on disc and MP4 on SxS cards for EX…both actual video files are MPEG2 Long GOP, like HDV), and the highest data rate in XDcamHD (NOT the 422 designation) is 35 Mbits for both, but XDcamHD on Blu-ray only writes a 1440×1080 frame, whereas the EX cameras like my EX1 write a 1920×1080 full frame file. When either format is dumped into a computer, the computer simply uses the correct decoder to decode the proper resolution and pixel shape.
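Just to put loose numbers on that 1440-versus-1920 comparison (my own back-of-the-envelope arithmetic, assuming roughly 30 frames per second and counting luma samples only; it ignores GOP structure and chroma entirely):

```python
def bits_per_pixel(mbps, width, height, fps=29.97):
    """Average compressed bits available per stored pixel per frame."""
    return (mbps * 1_000_000) / (width * height * fps)

# The same nominal 35 Mbit/s budget spread over two different stored rasters.
print(f"XDcamHD disc 1440x1080: {bits_per_pixel(35, 1440, 1080):.2f} bits/pixel")
print(f"XDcamHD EX   1920x1080: {bits_per_pixel(35, 1920, 1080):.2f} bits/pixel")
```

The full raster simply has fewer bits to spend per pixel, which is part of the trade-off debated below.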
AVCHD and AVC Intra are full frame, 1920×1080 (and 1280×720), but use MPEG4 compression. AVCHD is an aggressively compressed, Long GOP format meant primarily for consumer, and I suppose one would say "sub-broadcast" professional, use (even though I'm sure somebody is using it for broadcast somewhere…), and AVC Intra is an "I-frame" format with a much higher data rate intended for professional use. Each of these codecs started life when decoding digital video material natively inside the computer was standard, so they moved to our editing systems relatively quickly.
So…add different camera front ends into the picture. If you didn’t have a mess before, you have one now.
My 6,000 USD EX1 records a 1920×1080 full raster progressive frame at 35 Mbits/s onto an SxS card…whereas a "broadcast-intended" XDcamHD optical disc based camcorder, with a lens that quite easily costs three times the price of my whole camera, not to mention the camcorder body itself, records 1440×1080 at 35 Mbits/s to disc…
Which is better? The glass on the more expensive camcorder, along with its more expensive components, etc., should yield a sharper, cleaner image…but it gets subsampled before recording. My EX1 has a fixed lens and is, by most professional cost/return standards of the last ten years, a "disposable" camcorder, but it records a full raster.
That is the ongoing argument, and it’s only been intensified by the emergence of the RED Camera, which records 4K images at a small fraction of the cost of cameras that were creating stunning 2K images…
The debate goes on whether a camera like the HVX200 with its 960×540 sensors really qualifies as an “HD” camera…whether a DSLR that “line skips” to create an HD video stream from a sensor that has much higher resolution is really “good”…and whether the massive jump in pixel count generated by a RED in the “decoded file” adds any value over say an ARRI Alexa, which will be used to generate a ton of material at HD resolution as ProRes, I’m sure… Although, with how inexpensive a RED camera is, whether the camera adds value to the image itself really isn’t as much the question as whether the extra headaches that post production workflows have to tackle to store and convey that many more pixels are worth the hassle for the end viewer…
Three-sensor cameras like an F23 or a Panasonic Varicam 3700 vs. Bayer single sensor cameras like the RED, SI2K, F35, and the ARRI D20 (the Alexa is a bit of a special case) is another debate that further clouds the issue.
In the end, I think that debating the “science” of all these factors has distracted us from improving the image “aesthetic” we should be focused on. Which camera/format is better? Well, I’d take an experienced and incredibly versed visually driven DP with an HVX200 over a university student who knows enough to be dangerous wielding a RED any day.
How MANY pixels you generate doesn’t say much about your skills as someone who creates images in my mind.
TimK,
Director, Consultant
Kolb Productions
-
Tim Kolb
November 30, 2010 at 5:07 pm
Deinterlacing…
I’ve seen it look fine and seen it look awful. I’d try a short segment with just the “max render quality” box checked without changing anything else first.
If that doesn’t deliver the results you seek, I’d say keep deinterlacing as an option, maybe trying it on a short segment first on a duplicated sequence before you commit to it.
Minnesota…no, I’m from (and in) Wisconsin. Minnesota is a nice place though. They pay the NFL quarterbacks we put out to pasture one heck of a pension…
🙂
TimK,
Director, Consultant
Kolb Productions
-
John Frey
November 30, 2010 at 5:47 pm
Tim, that was an excellent post re. HD cameras. More videographers should read it. Thanks for the input.
John D. Frey
25-year owner/operator of two California-based production studios: Digital West Video Productions of San Luis Obispo and Inland Images of Lake Elsinore.