Activity › Forums › Creative Community Conversations › FCP X and Education
-
Shane Ross
June 30, 2011 at 7:52 pm
[Adam McCune] “Except that you still have a whole warehouse of the “old Coke” at your disposal…at least for the foreseeable future.”
Nope. Apple pulled FCS 3 off the shelves…recalled all of them…the day FCP X was released. You’ll be lucky to find a used copy on eBay. Or someone who has a new one might sell it at a premium.
Trust me, the uproar would be MUCH LESS if FCP 7 was still available for sale.
Shane
GETTING ORGANIZED WITH FINAL CUT PRO DVD…don’t miss it.
Read my blog, Little Frog in High Def -
John Chay
June 30, 2011 at 7:53 pm
I think they don’t care about the uproar. They know the pros will move on and the critics will die off.
Editor/Videographer
-
Adam Mccune
June 30, 2011 at 7:54 pm
I understand that it was pulled. I mean YOU have a copy, your own personal warehouse…
Step one in Apple making this right (especially in the instance you pointed out to start this thread) would be to make FCP 7 available for sale again. I didn’t even think of the kids starting school in the fall, unable to purchase a copy of the software they will learn. How messed up is that?
Apple is going to continue to support 7, so let’s see if they offer it up for sale again.
Writer/Radio host/Community Media Advocate
-
Steven Gonzales
June 30, 2011 at 7:57 pm
I agree: you need someone who knows how to drive the interface.
However, the interface is not the individual software application and its GUI. The interface is the workflow between real world assets and the application chain.
This is the focus FCP X doesn’t take. For metadata to be useful, it has to pass easily through the chain of applications.
I have rarely had a project that was contained in one application. If I shot DSLR with built-in sound and output to web, then my interface IS the one app.
But this model, with its “cloud”-based assets that operate transparently (whether the cloud is local or remote), does not work for the realities of the application chain that we must navigate for the foreseeable future.
For the new workflow chains to come, where perhaps the interface expert also understands, and has coding tools for, the software development kit, I’m sure there will be great workflows created.
-
Misha Aranyshev
June 30, 2011 at 8:11 pm
Production sound comes to me in BWF-poly files with a track for each character, a boom, an M+S pair and a production mix. I sync them to the picture takes and go on cutting the scenes. The production mix track is usually good enough so I just mute all other tracks. But I don’t delete them because when picture is locked I have to export each 2000 ft reel with all the sound. Almost every cut in there is either J- or L-cut so it is all arranged in the checkerboard fashion. There is also some M&E but this doesn’t complicate matters. So how exactly do I tag my production sound and how long would it take me? What if the budget is shifted from sound editorial to picture editorial and I need to give the sound editor more sophisticated stems?
-
Herb Sevush
June 30, 2011 at 8:21 pm
“YOU may need ‘tracks’ to work, but your AAF/OMF output software and 8-track audio SDI connectivity doesn’t. What’s needed is metadata to identify which timeline asset belongs to which output channel. An interface which allows that to be allocated doesn’t inherently need ‘tracks’.
It needs an editor who understands how to drive the interface.”
But what’s in it for me?
Audio tracks are an extremely useful visual organizer as well as an efficient way to create a workflow that will naturally work with others. Why should I have to re-learn a universally accepted interface? I’m asking that seriously – make the case. Show me the productivity gains in tagging audio with metadata and how that will make my life MUCH easier – easier is not enough; it needs to be an order of magnitude easier to be worth the trouble.
Herb Sevush
Zebra Productions -
Michael Hancock
June 30, 2011 at 8:32 pm
[Paul Dickin] “What’s needed is metadata to identify which timeline asset belongs to which output channel. An interface which allows that to be allocated doesn’t inherently need ‘tracks’.”
I don’t understand how this would work. Do you tag every sound effect as SFX, edit and let them drop anywhere in the timeline, but tell the system to output all clips tagged as SFX to Track 05? Do you tag dialogue as DIALOGUE and export it to track 01? MUSIC tags go to tracks 3/4?
Honest question. I simply don’t see how this is more efficient than tracks – Dialogue on 1/2, music 3/4, SFX 5/6/7/8, etc…
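A minimal sketch of the idea being asked about, assuming hypothetical role labels, clip names, and a made-up delivery table (none of this is any real NLE’s API): while editing, each clip carries only a role tag; the fixed track layout is applied by a routing step at export time.

```python
# Sketch: route role-tagged clips to fixed output tracks at export time.
# Role names, clip names, and ROLE_TO_TRACKS are illustrative assumptions.

from collections import defaultdict

# Each clip carries a role tag instead of living on a numbered track.
clips = [
    {"name": "vo_01", "role": "DIALOGUE"},
    {"name": "door_slam", "role": "SFX"},
    {"name": "score_cue", "role": "MUSIC"},
    {"name": "vo_02", "role": "DIALOGUE"},
]

# The delivery spec supplies the track layout only at output time.
ROLE_TO_TRACKS = {
    "DIALOGUE": [1, 2],
    "MUSIC": [3, 4],
    "SFX": [5, 6, 7, 8],
}

def route_clips(clips, role_to_tracks):
    """Group clips by role, then spread each group across that role's tracks."""
    by_role = defaultdict(list)
    for clip in clips:
        by_role[clip["role"]].append(clip)
    routing = {}
    for role, members in by_role.items():
        tracks = role_to_tracks[role]
        for i, clip in enumerate(members):
            routing[clip["name"]] = tracks[i % len(tracks)]
    return routing

print(route_clips(clips, ROLE_TO_TRACKS))
# {'vo_01': 1, 'vo_02': 2, 'door_slam': 5, 'score_cue': 3}
```

The point of the sketch is only that the tag-to-track mapping can live in one delivery table rather than in the editor’s head, so swapping delivery specs means swapping tables, not re-laying-out the timeline. Whether that is actually easier in practice is exactly the question being debated here.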
—————-
Michael Hancock
Editor -
Shane Ross
June 30, 2011 at 8:43 pm
By the way, USC isn’t considering FCP X either. No compelling reason to upgrade.
Shane
GETTING ORGANIZED WITH FINAL CUT PRO DVD…don’t miss it.
Read my blog, Little Frog in High Def -
Chris Kenny
June 30, 2011 at 8:47 pm
[Shane Ross] “This new version shows that they didn’t get any input from professional editors.”
No, it does not. It shows that they decided to ship the product when it was useful to a substantial number of people (including some professional editors) rather than waiting to implement features required only by certain high-end workflows.
[Shane Ross] “They set loose a guy who designed iMovie”
And the first three versions of Premiere, and the original Final Cut Pro. While it’s possible he’s completely lost it this time around, and an Internet mob knows better, it’s not exactly a foregone conclusion.
[Shane Ross] “You obviously don’t deliver shows for broadcast…nor do you cut promos. We need split track audio, with all the elements separate. Both to deliver, and to take in. And in order to do that, I need to keep my tracks separate…VO on 1 and 2, sound on tape on 3-8, sound effects on 9-12, music on 13-16.”
We do deliver material for broadcast. I am entirely aware of this practice. I can also, however, imagine ways of providing this ability without having to rigidly structure audio tracks in the NLE. And I imagine Apple can as well.
[Shane Ross] “But you can unlock audio from video. And when you do, if you move out of sync, there is no indication that you have…much less by how much. When I make adjustments to video and audio, I do it separately primarily. L cuts, J cuts. I turn off the LINKED option…and when I need it, I use a key modifier to adjust both. The new methodology is wrong for my needs.”
I don’t really understand why. In FCP X you can trivially make L cuts and J cuts without unlinking audio, so it can be done with no danger of anything slipping out of sync, and thus less of a requirement for sync indicators.
[Shane Ross] “But it had a keyboard. Not a funky weird one where the keys were in all different places. And it didn’t leave out the texting option, the ability to make a call AND surf the web at the same time. It didn’t take out google maps because “we’ll let third party people give you a map.” It wasn’t a toy phone.”
Well, this goes pretty directly to my “people won’t remember this in a couple of years” argument in the other thread….
The first iPhone actually couldn’t do data and voice at the same time. It had mapping, but no GPS, and in the first release of iOS not even tower triangulation. And while it had SMS, it didn’t have MMS. It also lacked support for third-party apps, tethering, and a bunch of other features widely considered to be ‘standard’.
This idea that Apple might launch a product with a minimal feature set and add features later is not some wild-eyed fanboy optimism. It’s a consistent pattern that Apple has followed.
[Shane Ross] “Trying to improve something by completely changing it.”
It has worked for Apple before. The more I use the new timeline, the more I realize just how carefully considered these new behaviors are. While some people will hate it forever, I think in the long run the initial overwhelmingly negative reaction to the magnetic timeline is going to be seen as knee-jerk. Assuming anyone remembers it happened.
—
Digital Workflow/Colorist, Nice Dissolve. You should follow me on Twitter here. Or read our blog.
-
John Chay
June 30, 2011 at 8:55 pm
[Chris Kenny] “We do deliver material for broadcast. I am entirely aware of this practice. I can also, however, imagine ways of providing this ability without having to rigidly structure audio tracks in the NLE. And I imagine Apple can as well.”
Can you just admit that this is a huge flaw right now, instead of “I imagine Apple can as well”? Why are audio tracks all of a sudden “rigid”? It works. It has been working nicely for many years. If you can’t add anything constructive, don’t reply with nonsense. Please.
Editor/Videographer