Changing optimized Media settings
Posted by Sascha Engel on September 21, 2013 at 4:29 pm
Hi Everybody,
A quick question: when I check the Optimize Media checkbox on import, FCPX creates ProRes files.
Can I change the settings, e.g. optimize media to ProRes LT or HQ instead?
If yes, where do I do that?
For example, when I edit Alexa or RED footage HQ would be better, while for a prosumer camera LT is enough.
Greetings,
Sascha Engel
TIME BANDITZ Productions
http://www.youtube.com/taikang
-
Sascha Engel
September 21, 2013 at 6:51 pm
Well, I guess that’s definitely one of those things that still keep it from being a pro app. Not having the option to choose the transcoding codec is really poor.
Hope they get their shit together soon at Apple, otherwise it really should be called FCX, not FCPX.
Sascha Engel
TIME BANDITZ Productions
http://www.youtube.com/taikang
-
Bill Davis
September 21, 2013 at 7:37 pm
[Sascha Engel] “Well, I guess, that’s then definitely one of those things, that still do not make it a pro app. Not having the option to choose the transcoding codec is really poor. Hope, they get their shit together soon at apple, otherwise it really should be called FCX not FCPX.
Sascha Engel
TIME BANDITZ Productions”
Sascha,
What you’re trying to do is NOT how X works. I understand the “classic” idea of being able to “optimize” to a different codec before editing, but that presumes the best results come from remaining in ONE format across your workflow. The way X actually works, it’s simply not necessary.
The reason it uses ProRes internally is that X is optimized to do its INTERNAL metadata-driven display and manipulation efficiently in ProRes. But it’s just DISPLAY. It’s NOT really transcoding your fundamental files. Those remain untouched while you work in ProRes for convenience. You don’t really need to transcode until you’re ready to MASTER in your “finishing” resolution. There’s no functional reason to do it prior to that stage.
If you’re worried about MONITORING, then, for example if you’re using down-rezzed Proxies, X will let you “park” on a frame and render it out so you can see the underlying quality in play. But honestly, after a while working with it, I don’t even mess with that much anymore. I know that the 64-bit math and the new AV Foundation structure are built to preserve all the quality I can throw at them. So if I have excellent original files, I can be totally confident that I’ll get superb export files as a result.
So it’s “anything comes in” – THEN build in X in ProRes (or Proxy if you have complex files and want more efficiency) and add all your edit and aesthetic decisions via metadata pointers – THEN just let X create your output files by referencing all the quality of the original source files.
The MIDDLE files just let you work with the editing data. So there’s no penalty for having the “middle” work done in ProRes for efficiency.
Those of us who edit in X are used to switching back and forth (and sometimes having the program switch automatically, such as when it’s working with its internal initial rapid thumbnail videos to let you start editing INSTANTLY), then substituting transcoded files as they get finished. Swapping between Proxy and Original is a fast click – X just changes where its internal pointers point.
X will substitute whatever resolution you need to create excellent output masters, depending on your referenced original media and the target media type you want to create. So you don’t HAVE to try to constantly flow high-rez files across the entire workflow. Hope that makes it clearer.
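The referential model Bill describes can be sketched in a few lines of Python. To be clear, this is my own illustrative naming (the `Clip` class, the representation labels), not Apple’s actual internals: a clip is just a pointer to source media plus a pile of edit metadata, and swapping proxy for original changes only the pointer, never the edits.

```python
# Hypothetical sketch of FCPX's referential model (illustrative only, not
# Apple's API): a clip stores pointers to the available representations of
# one piece of source media, plus edit metadata; switching display modes
# only changes which pointer is resolved.
from dataclasses import dataclass, field

@dataclass
class Clip:
    # e.g. {"original": ..., "optimized": ..., "proxy": ...}
    representations: dict
    edits: list = field(default_factory=list)  # trims, resizes, tints...

    def resolve(self, mode="original"):
        """Return the file the timeline points at for this mode."""
        # Fall back to the original if the requested flavor was never created.
        return self.representations.get(mode, self.representations["original"])

clip = Clip({"original": "A001_C002.R3D", "proxy": "A001_C002_proxy.mov"})
print(clip.resolve("proxy"))     # edits untouched; only the pointer changes
print(clip.resolve("original"))  # swap back for final export
```

The point of the sketch: the `edits` list never changes when you toggle Proxy/Original, which is why the swap is instant.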
Know someone who teaches video editing in elementary school, high school or college? Tell them to check out http://www.StartEditingNow.com – video editing curriculum complete with licensed practice content.
-
Sascha Engel
September 21, 2013 at 9:01 pm
Thanx Bill for the interesting elaboration, but somehow I still like the idea that I can choose.
If I do a round trip with DaVinci and export an XML of the final edit so I can work on it in Resolve,
I don’t want it to point to the original files, but to the highest possible editing codec, like ProRes 4444.
I’m happy with a lot of things in X, but I think choosing your transcoding codec, exporting AAF and OMF, and opening legacy FCP documents are features that still have to come natively to the app.
I happen to agree with Mr. W. Murch that this is stuff that just has to be part of a pro app. And I’m very optimistic that those things will still come.
Maybe in FCPX 11?
Sascha Engel
TIME BANDITZ Productions
http://www.youtube.com/taikang
-
Jeff Kirkland
September 21, 2013 at 9:16 pm
Hi Bill,
Just trying to get my head around your post because it’s something I hadn’t considered before. Are you saying that on export, FCPX always uses the original media rather than the optimised? How does that relate to my ability to set FCPX to background render a project in other flavours of ProRes or even uncompressed?
Jeff Kirkland | Video Producer | Southern Creative Media | Melbourne Australia
http://www.southerncreative.com.au | G+: https://gplus.to/jeffkirkland | Twitter: @jeffkirkland
-
Bill Davis
September 21, 2013 at 9:37 pm
[Jeff Kirkland] “Just trying to get my head around your post because it’s something I hadn’t considered before. Are you saying that on export, FCPX always uses the original media rather than the optimised? How does that relate to my ability to set FCPX to background render a project in other flavours of ProRes or even uncompressed?”
If, after all this time, I understand things correctly, the reason my friends who are smarter about this stuff than I am were so adamant about getting me to understand how X is “referential” and “metadata based” is that the program NEVER tries to change the underlying nature of anything it’s processing. It just creates a virtual copy of the “change lists” of editor choices, tied to the locations of the source files. Now, if the source files are something like ProRes that the program “gets” natively, then ALL the program does is stick those originals in a stable location, and everything from that point on just points to them. The program “filters” the appearance of the stream based on the originals plus the metadata, calculating and storing the changed states as render files and other temporary “speed-ups” as it goes, but NOT necessarily calculating the final state until you get to the Share stage.
That’s why it so elegantly swaps proxy for original media. All that changes is the source pointers.
When you go to SHARE, what I understand happens is that X says, “Ah, FINALLY the editor wants to express a final, so let’s calculate that.” If the proper files are in the right place, it’s fast. But if not (if the editor specifies a higher-rez output than is currently stored), X puts ALL its processor resources to work creating a NEW share master using the best level of source footage it knows how to find.
So you’re NOT necessarily building ONE master as you edit in X. You’re creating the instructions for building whatever masters you might need on Share. But this is disconnected from the actual editing, because once the editing metadata instructions are complete, it’s no big deal to just swap the pointers to high-, medium- or lower-rez virtual copies to make a variety of outputs as needed.
So you’re actually NEVER working on “a singular master stream file” in X. You’re just building a metadata structure that points to originals (and other calculated transcodes) that you can use for outputting all manner of masters.
That’s why it doesn’t make sense to me to try and jump through hoops to maintain a SINGLE resolution throughout the whole process in X. It’s a totally unnecessary hassle.
That’s how I understand it anyway.
FWIW.
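The Share-time behavior described above – use the best source available, regardless of what you happened to edit with – can be sketched the same way. The names and the quality ranking here are my own assumptions for illustration, not Apple’s code:

```python
# Illustrative sketch: at Share time, pick the highest-quality representation
# that actually exists on disk, rather than whatever flavor was used while
# editing. QUALITY_ORDER is a hypothetical ranking, best first.
QUALITY_ORDER = ["original", "optimized", "proxy"]

def source_for_share(representations):
    """Return the best available file for final output."""
    for flavor in QUALITY_ORDER:
        if flavor in representations:
            return representations[flavor]
    raise FileNotFoundError("no media available for this clip")

# Even if you edited against the proxy, Share reaches for the original.
print(source_for_share({"proxy": "p.mov", "original": "orig.mov"}))  # orig.mov
print(source_for_share({"proxy": "p.mov"}))                          # p.mov
```

This matches the point made in the thread: editing in Proxy carries no quality penalty, because output quality is decided by the best source the pointers can reach, not by the working files.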
Know someone who teaches video editing in elementary school, high school or college? Tell them to check out http://www.StartEditingNow.com – video editing curriculum complete with licensed practice content.
-
Sascha Engel
September 21, 2013 at 9:44 pm
So, what does that practically mean when I export an XML of my final edit for Resolve?
Which files will Resolve then choose to grade, based on that XML?
Sascha Engel
TIME BANDITZ Productions
http://www.youtube.com/taikang
-
Bill Davis
September 21, 2013 at 10:40 pm
[Sascha Engel] “So, what does that practically mean, when I export an XML of my final edit for Resolve? Which files will Resolve then choose to grade based on that XML?”
As with ANY XML, it’s just a big file of text. It REFERS to whatever footage bundle you send along with it.
Send the XML out of X that you’ve edited via ProRes, but send your colorist copies of the ORIGINAL camera files. That’s the point: the camera data and the XML that REFERS to the source footage are two distinct things.
If you’re creating an iPhone video as the final deliverable, sending 4K RED files is nuts. Just send the colorist ProRes (or RAW for latitude, as we migrate to that).
If you’re going to theatrical, then send the colorist your X-XML files (and maybe some simple X proxy files for visual reference) and let the colorist swap the pointer references from proxy to the R3D files on a hard drive to do their work.
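An FCPXML file really is “just a big file of text” full of pointers. Here is a rough sketch of pulling the referenced media paths out of one; the sample XML and attribute layout are simplified from memory (real FCPXML versions vary), so treat it as an illustration rather than a validated parser:

```python
# Minimal sketch: list the media files an FCPXML refers to. FCPXML keeps
# its media references as <asset> elements with src attributes inside
# <resources>; the sample below is a simplified, hypothetical file.
import xml.etree.ElementTree as ET

FCPXML = """<fcpxml version="1.3">
  <resources>
    <asset id="r1" name="A001_C002" src="file:///Volumes/Media/A001_C002.mov"/>
    <asset id="r2" name="A001_C003" src="file:///Volumes/Media/A001_C003.mov"/>
  </resources>
</fcpxml>"""

root = ET.fromstring(FCPXML)
sources = [asset.get("src") for asset in root.iter("asset")]
print(sources)  # just pointers; relink these to camera originals in Resolve
```

Nothing in the file is footage. Hand the colorist this text plus whichever footage bundle you want the pointers resolved against, and the relink does the rest.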
A “clip” is no longer a single, monolithic thing, any more than a “hard drive” or a “camera card” is necessarily a single thing anymore.
We can now use sparse bundles and other “drive virtualization” tricks to make multiple clones of drives and cards, precisely because you can use ANY of them to instantly re-create the source disk in multiple places for multiple editors. The database in X sees the deep ID code, and if it matches, it can use the file exactly like the original, whether that’s the original or the 10th-generation clone. And because it’s a file X recognizes, and because X natively knows how to swap “a source clip” for copies in many resolutions, you don’t HAVE to edit in what you deliver in. You can swap things in and out as needed.
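That “deep ID” matching of clones can be illustrated with a content hash. FCPX uses its own internal identifiers, so SHA-256 here is only a stand-in for the idea that identical content is interchangeable wherever it lives:

```python
# Hedged illustration of clone matching: if two files hash identically,
# treat them as interchangeable copies of one clip, regardless of which
# drive or generation of clone they came from. (A stand-in for FCPX's
# internal IDs, not its actual mechanism.)
import hashlib

def media_id(data: bytes) -> str:
    """Derive a stable identifier from file contents alone."""
    return hashlib.sha256(data).hexdigest()

original = b"...camera essence..."
tenth_generation_clone = b"...camera essence..."  # byte-identical copy
print(media_id(original) == media_id(tenth_generation_clone))  # True
```

Because the identifier depends only on the content, any clone on any drive can stand in for the original, which is exactly the multi-editor scenario described above.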
That’s why X can change from Original Media to ProRes to Proxy nearly instantly. It understands that they’re ALL just versions of the same clip that populates THAT part of your project. PLUS, anything you applied to a clip on your storyline or in the Event Browser (a cut, clip order or trimming, a re-size, a tint, whatever) is also sitting in the database as more metadata, ready to overlay instantly on whatever version you swap in underneath.
X is doing kind of the same thing with metadata virtualizations of your editing decisions. Everything is just a big pile of ever-changing text that refers to other stuff. Legacy was too, essentially, but it had to sit next to a pretty dumb singular capture scratch and had similarly dumb plumbing between the timeline and the project file. That’s what they had to clean out and re-create in the move from Legacy to X.
X is content in a world of virtual connections and is built to rapidly attach to anything it sees and recognizes as potential material. It does NOT get confused if there’s more than one “expression” (alternate rendering) of a clip that you want it to use. Eventually, I suspect that “multiple clone” workflows will develop to make X a killer collaborative tool, and that its re-imagined XML is part of that vision, but we’ll have to wait and see how that sorts out. That’s fun speculation WAY beyond my actual technical expertise.
I’ll just say that the more I get to know it, the more SENSE it makes. And I can see why it works the way it does.
I’m sure others here much smarter than me will correct me if I’ve got any of this substantially wrong; it’s my self-analysis of what I see X do and how it does it.
The operational stuff I feel pretty solid about, even if the deep under-the-hood complexities remain well beyond my understanding.
FWIW.
Know someone who teaches video editing in elementary school, high school or college? Tell them to check out http://www.StartEditingNow.com – video editing curriculum complete with licensed practice content.
-
Jeremy Garchow
September 22, 2013 at 12:26 am
[Sascha Engel] “I don’t want that it points to the original files, but to the highest possible editing codec, like ProRes 444.”
I think this all depends on your workflow.
In my opinion, there’s no reason to go, just for example, DSLR H.264 to FCPX ProRes 4444, then out of DaVinci to ProRes 4444. You could skip a step and go H.264 to 4444 straight out of Resolve.
With Alexa files, they are most likely ProRes 4444 already; if you need lower res, FCPX can create ProRes Proxy.
With RED material, you can easily use REDCINE-X Pro to make any proxy files, and then connect back to the R3Ds for the grade.
Most MXF card-based cameras get rewrapped to .mov in their native codec, which preserves original quality instead of transcoding. Some codecs in FCPX won’t even optimize to ProRes, as Apple says there’d be no benefit.
I agree with you, Sascha, that we should have a choice. ProRes LT would be particularly handy as a nice balance between file size and quality (sometimes ProRes Proxy compresses too heavily for a proxy).
Final Cut Pro 7 didn’t allow you to transcode every single format to 4444 either.
As far as interchange in the application, I wouldn’t expect it. It’s not the direction Apple seems to be going, but FCPXML is getting better and better.
FCPX has come a long way since Murch deemed it unusable. That was before XML was available in the app.
Jeremy
-
Jeremy Garchow
September 22, 2013 at 12:33 am
[Sascha Engel] “So, what does that practically mean, when I export an XML of my final edit for Resolve? Which files will Resolve then choose to grade based on that XML?”
Again, it depends on the workflow and how you imported the clips.
Most likely, it will be the rewrapped .movs from MXF or AVCHD material, or the optimized media in the case of H.264 material.
If you imported R3D to X, you can relink to the raw.
Alexa files will be used natively (if shot in ProRes).
Jeremy