Forum Replies Created
-
When using a bunch of lossless RGB 4:4:4 images in a sequence with DV25 video, you have several options when going to MPEG-2 for DVD.
The easiest and most straightforward way, with the smallest amount of grief, is as Kevin mentioned: stick with your native DV25 codec for the timeline. More often than not, this is the best answer.
An alternative that will give you the best quality (though it won't allow you to view the sequence on your NTSC video monitor in real time) is to make the sequence Uncompressed 10-bit 4:2:2 at 720×480. Yes, I said 480 and not 486. Bring your native DV25 footage and TGA stills into the timeline. Everything, including the video, will have to be rendered (or rather, transcoded to 10-bit).
Some (NTSC) tech notes…
DV25 is 4:1:1 Y’CrCb
UC 10-bit is 4:2:2 Y’CrCb
TGA is 4:4:4 RGB
MPEG-2 is 4:2:0 Y’CrCb

Going from DV25’s 4:1:1 to 4:2:0 results in 4:1:0 (!!!)
Going from TGA’s 4:4:4 to 4:2:0 results in 4:2:0 (color is better maintained)

So here’s the big advantage. Your DV25 footage will not really lose much quality going to UC 10-bit 4:2:2 (nor gain any). By working in a 4:2:2 timeline, your DV25 footage and your TGA stills will all be in 4:2:2 space (although your DV footage will still technically be 4:1:1… it cannot “gain” color space, only avoid losing any, which is also our goal for the TGA stills). Then when you go to MPEG-2 at 4:2:0, your stills will end up at 4:2:0 instead of the 4:1:0 the DV25 footage will get (and will remain in the end… again, the advantage is for the stills).
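As a rough illustration (my own sketch, not any encoder's actual code — the scheme table and function names are mine), you can model each subsampling scheme as a pair of horizontal/vertical chroma-resolution factors and see what a conversion chain actually keeps:

```python
# Illustrative model of chained chroma subsampling. Each scheme is
# (horizontal factor, vertical factor) relative to full 4:4:4 chroma.
SCHEMES = {
    "4:4:4": (1.0, 1.0),   # TGA stills: full chroma
    "4:2:2": (0.5, 1.0),   # Uncompressed 10-bit
    "4:1:1": (0.25, 1.0),  # DV25 (NTSC)
    "4:2:0": (0.5, 0.5),   # MPEG-2 for DVD
}

def effective_chroma(*names):
    """Chain subsampling stages: each stage can only keep or discard
    chroma, never restore it, so take the minimum in each dimension."""
    h = v = 1.0
    for name in names:
        sh, sv = SCHEMES[name]
        h, v = min(h, sh), min(v, sv)
    return h, v

# DV25 encoded to MPEG-2: 1/4 horizontal chroma AND 1/2 vertical
# chroma survive -- effectively "4:1:0", the worst of both stages.
print(effective_chroma("4:1:1", "4:2:0"))            # (0.25, 0.5)

# TGA stills through the 4:2:2 timeline to MPEG-2: the full 4:2:0
# chroma that MPEG-2 allows is preserved.
print(effective_chroma("4:4:4", "4:2:2", "4:2:0"))   # (0.5, 0.5)
```

The `min` per dimension captures the whole argument: the 4:2:2 intermediate costs the stills nothing that 4:2:0 wouldn't take anyway, while the DV25 footage's 4:1:1 loss is already baked in.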
So this is best when (A) you have a ton of stills and (B) your timelines are fairly short (under 30 minutes), or you have enough HDD space and time to render all this out.
So if this is the way you want to go, the best solution is to first work in native DV25 format so you get real-time everything. Then, when you're happy with the *completed* edit, change (or copy) the sequence to UC 10-bit at 720×480 (again, 480!). The reason for staying at 480 is that MPEG-2 for DVD is 480, and we don't want to screw up frame sizing and mess things up (especially the relative positions of stills). Note that 10-bit UC at 480 is NOT a standard production or delivery format; you're using it as a transcoding tool of sorts for maximum quality. Again, you'll need to render *everything* and encode out to MPEG-2, but you'll get a slight quality bump over outputting straight from DV25 to MPEG-2.
In the end, your MPEG-2 file for DVD will have slightly better color information for those stills, and with MPEG-2 running only at 4:2:0 (that's low), we need all the extra color information we can keep, and this technique gives us that. If you are NOT running any stills, or only a small handful of stills in a long TRT, then I would strongly discourage this technique and stick with the straight DV25-to-MPEG-2 route.
Hmmm, maybe I should turn this into a small how-to article here on the Cow. I guess it can get a little complicated if this is something new to anyone. Hopefully it makes some sense though! Let me know if there's any interest in an article or if there's any questions from the above.
Cheers,
Marco Solorio | OneRiver Media
-
If you’re working with standard 60i footage (29.97 fps, interlaced), then *keep* everything interlaced. Only work with a progressive format if your source material is 24p, 25p, or 30p. Even if you change the sequence to progressive while using 60i source material, it will not convert the source to 30p (you’d need the Nattress filters for that to get good results). It’ll only change (I’m assuming, because this is the case with AE) any transforms (e.g., zooms, rotations, positions, etc.), which will be rendered as progressive instead of interlaced, and that would look weird against 60i material.
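A toy model of why this looks weird (my own illustration, not how FCP or AE actually render): a 60i frame carries two fields sampled at different instants, and a frame-based (progressive) effect stamps both fields with a single instant, so the effect's motion updates at half the rate of the source's field motion:

```python
# Toy illustration (not any NLE's real pipeline): each 60i frame
# carries two fields sampled roughly 1/59.94 s apart.
frame = ("t=0.000s", "t=0.017s")  # (field 1 instant, field 2 instant)

def field_rendered_zoom(frame):
    # Interlace-aware effect: the transform is evaluated at each
    # field's own instant, preserving ~60 motion samples per second.
    return tuple(f"zoom@{t}" for t in frame)

def frame_rendered_zoom(frame):
    # Progressive effect over interlaced source: both fields get the
    # transform at one instant, so the zoom updates only ~30 times
    # per second and stutters against the 60i source motion.
    t0 = frame[0]
    return (f"zoom@{t0}", f"zoom@{t0}")

print(field_rendered_zoom(frame))  # ('zoom@t=0.000s', 'zoom@t=0.017s')
print(frame_rendered_zoom(frame))  # ('zoom@t=0.000s', 'zoom@t=0.000s')
```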
Marco Solorio | OneRiver Media
-
It can be both hardware and software, as was the case with Cinewave. But it can be performed solely in software, on CPU power alone. Other software-only apps can already do this. I think Avid with a Mojo can do it. I’m not sure if Vegas can or not. Probably. But as I mentioned, this was shown to us at NAB 2003 by Apple. It left as quickly as it arrived. Where did it go? I think we need to ask the Oz.
Marco Solorio | OneRiver Media
-
Well it’s just that PCI Express is faster than both PCI-X and AGP. So BMD wants to use the extra bandwidth to do more things. Mixed codecs on the same timeline (MCOTST) (!) isn’t really dependent on PCI-E or any other PCI/AGP variant since this can technically be done through RT Extreme via CPU processing, which we saw at NAB 2003 [GRRRRRRR].
Although Windows machines have already had PCI-E for a while now [sigh] and BMD developed the new MultiBridge with PCI-E in mind, it’s a good sign for the next G5s to finally get PCI-E as well, though that’s only a guess. I have to believe the last batch of G5s that came out a few days ago will be the last rev before we see a refreshingly new G5 with [hopefully/crossing fingers] dual-core CPUs and PCI Express. My dream would be QUAD dual-core G5s with 6 PCI-E slots!… now THAT would be hot! Both figuratively and literally!
Marco Solorio | OneRiver Media
-
I completely agree with Graeme (when don’t I?). FCP isn’t going to really see any gains right now, but I have to believe it will in due time. It makes a lot of sense for it to use help from the GPU.
And to add to Graeme’s point, the available display cards for our G5s are currently CRAP. Sorry to be so blunt, but in my other clone-life I do 3D animation on Windows, where 3D cards are like the Space Shuttle and the 9800 and X800 are like rubber-band propeller airplanes. For our G5s to really get into the serious OpenGL ballgame, we need to (1) get PCI-Express and (2) have nVidia/ATI/3DLabs write OSX drivers for their top-end PCI-E cards. HP currently offers a workstation with dual PCI-E display slots. This means you could tie two nVidia FX4400 cards together (there’s a bridge ribbon that clones them together internally). That would make any power-hungry geek fall in love.
So yes, by going by those standards, our current single-slot AGP configuration is, well, totally weak.
=(
Marco Solorio | OneRiver Media
-
If you want the best, get Ultimatte. Pricey and slow rendering, but ohhhh so powerful. I’ve tried many and I still prefer it after all these years. I’ve literally gotten better keys with Ultimatte using crappy footage versus something like DV Matte Pro with better source footage. And don’t get me wrong, less expensive tools like DV Matte Pro are still very good (*much* better than FCP’s keying tools), but Ultimatte is supreme.
As Napoleon wisely said, “It’s…. INCREDIBLE!”
Marco Solorio | OneRiver Media
-
Marco Solorio
April 30, 2005 at 7:20 pm in reply to: Mixing Resolutions on the same time line. final cut pro

[walter biscardi] “What confuses me is how was CineWave able to offer this feature well over a year ago and no one else is offering it today?”
Because when Cinewave was invented, there was no such thing as RT Extreme, which allows the host CPU(s) to perform real-time effects as efficiently as we have them today. Because of this, the latest batch of capture cards do not have on-board processing for RT effects. That would not only be a waste of development/production money to build into the card, but we as end-users would be paying for processing power we wouldn’t need or use. At this point, everyone is at the mercy of Apple to implement these kinds of RT features. If it’s not part of the RT Extreme structure, our capture cards won’t be able to perform it.
However, it appears BMD has made a partial work-around for this. According to their latest press release, using 8-bit and 10-bit media in the same timeline looks like it may happen. This, IMO, is much easier to pull off than two completely different formats with different frame sizes, à la DV25 and 10-bit UC. Nonetheless, it’ll be interesting to see what develops.
So in essence, this is something we need to scream at Apple about, not the capture card developers. After all, Apple DID show this exact functionality without any hardware support back at 2003 NAB. Why it’s over 2 years overdue from Apple is the real question.
Marco Solorio | OneRiver Media
-
[bill] “Is the guy in the monitor writing on the lady’s rear end?”
Yes. He’s autographing her booty. You know how hip-hop videos are! [sigh]
[bill] “is that a plate of sushi on the right?”
Yes, it is sushi. But unfortunately they are wind-up toys! Makes me hungry for sushi everyday. I’m just teasing myself!
Marco Solorio | OneRiver Media
-
I have a Tascam DM-24 automated mixer for the main suite. Pix here. It has a couple of slots for expandability. One slot is for 8-channel analog I/O and the other will be used for the available FireWire option, which, according to both Apple and Tascam, will support the Mackie protocol so the faders can work as a control surface with FCP. This is a very, very nice mixer. Besides your typical EQ control on every channel, it also has dynamics on every channel (compression, gate, etc.) as well as surround mixing. You can save and recall custom mixes and settings… no more writing down mix settings; every knob and button is recalled. I’ve been using this as the main edit suite mixer for over a year now and it’s been wonderful. And the clients dig the moving faders! =)
Marco Solorio | OneRiver Media
-
[Jeff Carpenter] “If you haven’t noticed, Pentium chips have been slowly creeping ahead too. It’s not the design of the chips themselves that are the problem, it’s the ability to actually make them that’s holding everything up.”
Yes, I totally understand this and have been seeing the same thing. However, Intel and AMD seem to have a better grasp on dual-core than IBM.
So yes, until a better method of building these things comes along, we’re not going to see Moore’s law in action any time soon, as has been the case for the last year.
At the very least, I’d *LOVE* to see quad G5 Macs with an even more advanced cooling system. Hell, I still have a $10k Daystar Genesis MP 800+ sitting at home under a desk with nothing hooked up to it… the only quad-processor Mac ever made, and one that depreciated faster than **** dropping in a toilet. If they built quad Macs in the ’90s, surely they can do it today with a true multi-threading UNIX OS.
OR
Apple should do it like SGI and have the ability to stack workstations acting as one complete system. The OS doesn’t know there’s 8 computers locked together even though 16 CPUs are crunching through data like poppy seeds.
So… if we can’t have faster CPUs, then at least tie more CPUs together in one box OR build G5s whose motherboards can be tied together to act as one system.
Simple! 😉
Marco Solorio | OneRiver Media