How many here really dislike audio tracks and the viewer?
Brad Davis replied 14 years, 3 months ago 34 Members · 119 Replies
-
Steve Connor
February 3, 2012 at 11:07 pm
[Scott Sheriff] “A lot more than the guy trying to sell me something, that doesn’t actually use it. All the (fanboys) want to do is talk about what Randy, or others that earn a living from selling this and related products, have to say. All the people that stand to make a buck from a million noobs jumping on the X bandwagon certainly have a dog in the fight, and can’t be trusted to give an unbiased opinion.”
Welcome back, really missed your insulting rants here.
Steve Connor
“FCPX Agitator”
Adrenalin Television -
Adam White
February 4, 2012 at 2:29 pm
These are still valid questions, and you know what I really hate most of all about FCPX is how divisive it has become. I genuinely hate to see editors tearing into each other over this piece of software. And I find it really distasteful that talented people are being labelled “out of date” and “stuck in the past” because they want a damn viewer back! I mean, seriously, is that really so much to ask and is it really a sign that they need to be put out to pasture?
The whole notion that Apple knows best, that they’ve seen something lowly editors haven’t yet, and that we should all just shut up and learn to love it – I am still deeply uncomfortable with that. I think some people may very well like this new way of working – but others won’t, and for very legitimate reasons; it has nothing to do with being too attached to the past or not having any vision.
As for the original question, I would be pleased to see these things re-introduced. Along with removing the magnetic timeline (I know, never going to happen), it’s the kind of design decision that would allow me to reconsider FCPX as a viable option. Adding features back is not the issue for me personally, and whilst I think it’s great for people using X that Apple are updating it, for me it makes no real difference to my view of the software because those central design decisions I was always unhappy with are unchanged.
-
David Roth Weiss
February 4, 2012 at 7:07 pm
Bill,
I was going to respond to the two things you wrote that are even remotely worthy of a response. However, as has been pointed out to me by many, the audience here aren’t blind, and they don’t need me to point out the flaws and fabrications in your rhetoric.
-
Bill Davis
February 4, 2012 at 11:41 pm
[Michael Hancock] “[Bill Davis] “Since X is a modern work in progress – it’s up to the reader to consider whether as X moves into the future where Red, DPX and ARRIRAW are increasingly relevant – X might adapt really well to those needs.”
Current, smaller data formats – how does it handle P2, AVC-Intra, AVCHD, and XDCam? Native file format, or does it require a rewrap?”
And what does “require a re-wrap” mean in the modern context?
Every NLE has to, at some point, take whatever’s tossed on its timeline and transcode it to whatever you want as a target export stream. NOBODY with a brain wants to put an XDCAM clip next to a GoPro clip and have those two clips output as separate files. So one (or both) is going to have to be transcoded. Period.
What Apple re-built X to do was to accept a variety of formats on ingest (not universal, but pretty functional, IMO) and then do the transcoding in the background while letting the editor go about the business of doing the work of editing.
It’s a smart modern approach that I understand other software employs as well.
Obviously “codecs” are a moving target. As long as the capability to do background transcoding is in place – the stopper for working with ANY format in a system like X’s must be how easy or difficult the placed format is to transcode. Encodings with fewer “actual” frames and more “predictive” or “calculated” frames will obviously be harder for any software to work with rapidly. But we’ve already seen that it’s possible to build hardware-level encoders/decoders that rapidly handle stuff like this. And I suspect that as workflows like X’s grow in popularity – we might see more “transcoder boxes” that let you add hardware to work efficiently with your particular flavor of footage. Isn’t this essentially what the KiPro and its kin are? They take a feed and do a ProRes transcode on the fly.
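The ingest-then-background-transcode model described above can be sketched as a small worker queue. This is a hypothetical illustration, not Apple’s implementation; the transcode() step is a stand-in for a real encoder, and the clip names are made up:

```python
import queue
import threading

def transcode(clip):
    # Placeholder for a real conversion step (e.g. MXF -> ProRes .mov);
    # here we only simulate it by changing the extension.
    return clip.replace(".mxf", ".mov")

class IngestQueue:
    """Clips are queued on ingest; a background worker converts them
    while the 'editor' (the main thread) keeps working."""

    def __init__(self):
        self.todo = queue.Queue()
        self.done = []
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def add(self, clip):
        # Returns immediately: editing can begin while conversion
        # happens in the background.
        self.todo.put(clip)

    def _run(self):
        while True:
            clip = self.todo.get()
            if clip is None:  # sentinel: no more clips
                break
            self.done.append(transcode(clip))
            self.todo.task_done()

    def finish(self):
        # Signal the worker to stop and wait for remaining conversions.
        self.todo.put(None)
        self.worker.join()

ingest = IngestQueue()
for clip in ["A001.mxf", "A002.mxf"]:
    ingest.add(clip)
ingest.finish()
print(ingest.done)  # both clips converted to .mov
```

The point of the pattern is simply that add() returns at once, so the foreground work is never blocked on the conversion.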
So only those that NEED the ability for super-rapid conversion have to pay for it.
Everyone else has to deal with the “slow but built-in” CPU or GPU based transcoding.
Seems perfectly fair to me, since the other path is that the engineers have to “optimize” the software for all manner of footage types that I may never actually use.
Remember, this is general purpose affordable software. If you wish to use it with RED footage, isn’t the presumption that you have the budget for access to something like a KiPro that simply makes the problem go away?
“Before speaking out ask yourself whether your words are true, whether they are respectful and whether they are needed in our civil discussions.”-Justice O’Connor
-
Misha Aranyshev
February 4, 2012 at 11:53 pm
[Bill Davis] “isn’t the presumption that you have the budget for access to something like a KiPro that simply makes the problem go away?”
KiPro doesn’t make the problem go away because video out on the Red One camera is either 60 or 50 fps no matter what actual shooting fps is and the timecode embedded in the Red One video out signal is 3 to 6 frames ahead of the actual timecode in recorded R3D.
-
Bill Davis
February 5, 2012 at 12:12 am
[Misha Aranyshev] “KiPro doesn’t make the problem go away because video out on the Red One camera is either 60 or 50 fps no matter what actual shooting fps is and the timecode embedded in the Red One video out signal is 3 to 6 frames ahead of the actual timecode in recorded R3D.”
I’m out of my depth with RED stuff, but that sounds suspiciously like a digital cousin of the “sound over distance” audio latency I’ve been dealing with for my whole career on live events.
If it’s constant, can’t you just re-ID the signal’s frames to offset all the TC values by whatever the mismatch is? The digital equivalent of inserting a fractional signal DELAY when doing live audio and you need to “time sync” multiple speakers at variable distances from a stage?
If the problem is any kind of simple “math offset” I can’t believe it will be long before someone writes a few lines of updated code to fix it.
Seems kinda trivial. But that’s probably because I don’t actually understand the issues you’re facing well enough.
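The “simple math offset” idea can be sketched in a few lines. This is purely illustrative, assuming non-drop-frame timecode at an integer frame rate (and, as noted in this thread, the actual Red offset isn’t constant, so a fixed shift wouldn’t solve that case):

```python
FPS = 25  # assumed frame rate for the example

def tc_to_frames(tc, fps=FPS):
    # "HH:MM:SS:FF" -> absolute frame count
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(total, fps=FPS):
    # absolute frame count -> "HH:MM:SS:FF"
    f = total % fps
    s = (total // fps) % 60
    m = (total // (fps * 60)) % 60
    h = total // (fps * 3600)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def offset_tc(tc, frames, fps=FPS):
    # Shift timecode by a constant number of frames; a negative
    # offset would pull the video-out TC back toward the R3D values.
    return frames_to_tc(tc_to_frames(tc, fps) + frames, fps)

print(offset_tc("01:00:00:03", -3))  # → 01:00:00:00
```

The arithmetic really is trivial; the catch, per the reply below, is that the Red One’s offset varies from 3 to 6 frames, so no single constant fixes it.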
“Before speaking out ask yourself whether your words are true, whether they are respectful and whether they are needed in our civil discussions.”-Justice O’Connor
-
Misha Aranyshev
February 5, 2012 at 12:35 am
Timecode on the video out is ahead of timecode in the recorded file, and the offset is not constant.
Anyway, the offline/online workflow survived the transition from film to videotape, from linear to non-linear, from tape to tapeless, from film to “digital negative”. It simply makes more sense in more situations.
-
Michael Hancock
February 5, 2012 at 12:53 am
You’re talking around the question, Bill.
[Bill Davis] “And what does “require a re-wrap” mean in the modern context.”
The same thing it’s always meant. Can it read the native file that comes off the camera, or does it rewrap it as a QuickTime, or transcode it to a QuickTime? Let’s not play semantics and redefine things (we’re not Apple) – does it read P2 .mxf or does it have to be .mov? XDCam or .mov? Pretty basic question and an elementary thing to expect a modern NLE to do (read camera-native files).
[Bill Davis] “What Apple re-built X to do was to accept a variety of formats on ingest (not universal, but pretty functional, IMO) and then do the transcoding in the background while letting the editor go about the business of doing the work of editing.
It’s a smart modern approach that I understand other software employs as well. “
Is FCPX’s implementation really that smart? If it requires .mov files for everything it fills your hard drive up with duplicate media as it rewraps in the background. Even if it happens in the background, is it really necessary?
Other NLEs function quite differently – Avid reads a ton of stuff native but some formats are more usable when transcoded (but not in the background – hopefully in the future). Adobe and Edius are all native all the time, until export. I believe Vegas is mostly native right now, isn’t it? I’d venture to say FCPX has the smallest list of natively supported camera files, and it’s a “scrape to the foundations and rebuild for the modern era” NLE.
[Bill Davis] “And I suspect that as workflows like X’s grow in popularity – we might see more “transcoder boxes” that let you add hardware to work efficiently with your particular flavor of footage. Isn’t this essentially what the KiPro and its kin are? They take a feed and do a ProRes transcode on the fly.”
Workflows like X’s? What is that workflow? Requiring Quicktime files to immediately begin working, or is it something else? That’s very vague Bill. Let’s be specific to really discuss the benefits, and drawbacks, of FCPX in a modern world.
KiPro is great for something like a DSLR, but for XDCam? P2? AVCIntra? The camera’s native files are fine on their own – the issue is with FCPX’s ability to handle them, isn’t it? Background transcoding is great and sounds like it’s implemented very well, but it shouldn’t be necessary for some of this stuff. I may be wrong though – FCPX may playback P2 .mxf files fine, right off the card, without rewrapping, but you never answered that. Does it?
[Bill Davis] “So only those that NEED the ability for super-rapid conversion have to pay for it.
Everyone else has to deal with the “slow but built-in” CPU or GPU based transcoding.
Seems perfectly fair to me since the other path is that the engineers have to “optimize” the software for all manner of footage types that I may never actually use.”
Avid and Edius and Adobe and Vegas are able to provide playback of all types of footage (some better than others). Either they have brilliant engineers or Apple decided native camera formats aren’t important or necessary (like tape?).
These are things I think should be considered when talking about how FCPX might grow for the future and how it integrates into modern workflows. When a new camera comes out, should FCPX support the native camera files, or should the camera manufacturer program their camera to record to multiple codecs and file formats? Or should camera men and women purchase hardware transcoders for clients that choose NLE A rather than NLE B? Where does the responsibility lie?
—————-
Michael Hancock
Editor -
Brad Davis
February 6, 2012 at 10:28 pm
Not a fan of how the audio is handled in FCP X in general right now. Really cannot use it in my shop because of it.