Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

  • Bill Davis

    July 31, 2015 at 5:07 am

    Not directly on point, but perhaps of a tiny bit of interest…

    Since Apple introduced the new 3D text system to FCP X, a very interesting side effect has been cropping up.

    First, Mark Spencer (the Motion guru) noted that the system in X does 3D transforms not just on fonts, but on anything in the form of a glyph.

    Then somebody figured out that Glyphter – a free, easy utility – can efficiently turn SVG artwork, say a corporate logo, into a glyph, which can then be extruded and rotated, given multiple surface texture maps and bevels, and have multiple cameras and lights applied, directly in FCP X’s new 3D Text system.
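    For the curious, this is roughly what a tool like Glyphter automates: packing a vector outline into a font glyph that text tools will accept. Below is a minimal sketch using the fontTools Python library; the triangle outline, the "logo" glyph name, and the font name are placeholders – a real logo's SVG path data would be drawn in their place.

```python
# Sketch: pack a vector outline into a font glyph, roughly what
# Glyphter automates for SVG artwork. Requires the fontTools library.
# The triangle shape, glyph name, and font name are made up for
# illustration; real SVG path data would be drawn instead.
from fontTools.fontBuilder import FontBuilder
from fontTools.pens.ttGlyphPen import TTGlyphPen

fb = FontBuilder(1000, isTTF=True)  # 1000 units per em
fb.setupGlyphOrder([".notdef", "logo"])
fb.setupCharacterMap({ord("A"): "logo"})  # typing "A" yields the logo

pen = TTGlyphPen(None)
pen.moveTo((100, 0))    # stand-in triangle; a logo's SVG outline
pen.lineTo((500, 700))  # would be traced onto the pen here instead
pen.lineTo((900, 0))
pen.closePath()

fb.setupGlyf({".notdef": TTGlyphPen(None).glyph(), "logo": pen.glyph()})
fb.setupHorizontalMetrics({".notdef": (600, 0), "logo": (1000, 100)})
fb.setupHorizontalHeader(ascent=800, descent=-200)
fb.setupNameTable({"familyName": "LogoGlyphs", "styleName": "Regular"})
fb.setupOS2()
fb.setupPost()
fb.save("LogoGlyphs.ttf")
```

    Install the resulting font and the "A" character renders the artwork, ready for the extrude/bevel/lighting treatment in FCP X's 3D Text system.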

    I wish I could show some of the early work that’s popping up, but sadly it’s on a private board where we talk about techniques and the stuff shown is often not rights cleared, so it has to stay private.

    I’ll just say that some of the 3D Text (and now graphics like corporate logos) being created and processed directly inside of X has been a bit surprising in its sophistication.

    These are baby steps, of course, not serious compositing – but the 3D math is there and it’s generating some pretty cool results for a $299 NLE.

    Just an interesting side note.

    Carry on.

    Know someone who teaches video editing in elementary school, high school or college? Tell them to check out http://www.StartEditingNow.com – video editing curriculum complete with licensed practice content.

  • Andrew Kimery

    July 31, 2015 at 6:03 am

    [Bill Davis] “Then somebody figured out that Glyphter – a free, easy utility – can efficiently turn SVG artwork, say a corporate logo, into a glyph, which can then be extruded and rotated, given multiple surface texture maps and bevels, and have multiple cameras and lights applied, directly in FCP X’s new 3D Text system.”

    I remember seeing someone take the Apple poop emoji and make it 3D. Is this how they did it?

  • Charlie Austin

    July 31, 2015 at 6:39 am

    [Andrew Kimery] “I remember seeing someone take the Apple poop emoji and make it 3D. Is this how they did it?”

    That would have been me. And yes, that’s how I did it. 🙂

    ————————————————————-

    ~ My FCPX Babbling blog ~
    ~”It is a poor craftsman who blames his tools.”~
    ~”The function you just attempted is not yet implemented”~

  • Walter Soyka

    July 31, 2015 at 12:03 pm

    [Bill Davis] “Since Apple introduced the new 3D text system to FCP X, a very interesting side effect has been cropping up.”

    Courtesy of the FCPX or Not Forum, on Motion 5.2 launch day?

    https://forums.creativecow.net/thread/335/79619#79634

    And credit where it’s due, this was a Flame production technique that’s probably old enough to drink.

    [Bill Davis] “I’ll just say that some of the 3D Text (and now graphics like corporate logos) being created and processed directly inside of X has been a bit surprising in its sophistication. These are baby steps, of course, not serious compositing – but the 3D math is there and it’s generating some pretty cool results for a $299 NLE.”

    Exactly! Wouldn’t you agree that having this capability in an editorial context is an advancement?

    Walter Soyka
    Designer & Mad Scientist at Keen Live [link]
    Motion Graphics, Widescreen Events, Presentation Design, and Consulting
    @keenlive   |   RenderBreak [blog]   |   Profile [LinkedIn]

  • Walter Soyka

    July 31, 2015 at 12:15 pm

    [Andrew Kimery] “To keep the example simple, let’s just say an editor and a mixer are both working concurrently on the same edit. Having a truly shared timeline that updates in real time sounds horrible to me because the media in the timeline will be constantly changing. We will be constantly stepping on each other’s toes.

    If the editor and the mixer have two versions of the same timeline then we won’t be stepping on each other’s toes, but at the end of the day someone will have to take the changes made in each timeline and conform them into a single timeline. A computer can track the changes but it won’t know which changes to apply when both the editor and the mixer have […]

    Waiting until the end of an edit to start the finishing process isn’t just done because we have incompatible tools, it’s done so that time and money isn’t wasted polishing media that won’t make it into the final cut.”

    There are tools and methodologies outside of our industry for dealing with exactly these issues. For example, software development happens across a team in parallel, with source control, check-in and check-out, and tests to find and resolve conflicts.

    You do get diminishing marginal gain as the team size grows, because as you point out, coordination does take time.

    If you think about the way we work now, it’s a challenge. If you think about the way we would work if we had different disciplines working together at the same time, then you’d probably build a different workflow. Some tasks are clip-based; these can be done in parallel. Some tasks are sequence-based; it may be prudent to wait on these until the end of the process. For tight schedules, it may be worth the risk of having to do something twice to try to get it done earlier.
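    The check-in/check-out idea from source control, applied to clips rather than code, can be sketched in a few lines. The clip names, artist roles, and in-memory dictionary below are stand-ins for a real shared database:

```python
# Sketch of check-out locking for clip-based tasks: one artist holds
# a clip at a time, so parallel work can't collide. The clip names,
# roles, and in-memory dict stand in for a real shared asset database.
class ClipLibrary:
    def __init__(self, clip_names):
        # None means the clip is available for check-out
        self.checked_out = {name: None for name in clip_names}

    def check_out(self, clip, artist):
        holder = self.checked_out.get(clip)
        if holder is not None:
            raise RuntimeError(f"{clip} is checked out by {holder}")
        self.checked_out[clip] = artist
        return clip

    def check_in(self, clip, artist):
        if self.checked_out.get(clip) != artist:
            raise RuntimeError(f"{artist} does not hold {clip}")
        self.checked_out[clip] = None

lib = ClipLibrary(["shot_010", "shot_020"])
lib.check_out("shot_010", "colorist")
try:
    lib.check_out("shot_010", "mixer")  # second check-out is refused
except RuntimeError as e:
    print(e)  # shot_010 is checked out by colorist
lib.check_in("shot_010", "colorist")
lib.check_out("shot_010", "mixer")      # now the mixer can take it
```

    Clip-based tasks parallelize cleanly under this model; sequence-based tasks are where the conflicts live, which is exactly the coordination cost described above.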

    [Andrew Kimery] “At some point though I think you have to curb how much non-NLE functionality you put in the NLE and say to the user, “Look, if you want to do some really advanced stuff you are just going to have to cowboy up and learn a dedicated mixing/grading/compositing, etc., app”. If you try and add too much other stuff in an NLE I think it can become bloated and too difficult to use by the majority of its target audience.”

    Let’s talk about “bloat.” What does that mean to you?

    Apple has done a really good job of not showing you functionality you don’t need. FCPX does a lot, but the UX is so smooth that a lot of people still underestimate its capabilities and dismiss it as iMovie Pro. Is FCPX bloated?

    Walter Soyka
    Designer & Mad Scientist at Keen Live [link]
    Motion Graphics, Widescreen Events, Presentation Design, and Consulting
    @keenlive   |   RenderBreak [blog]   |   Profile [LinkedIn]

  • Walter Soyka

    July 31, 2015 at 4:11 pm

    [Andrew Kimery] “To make a long story short, after Apple released Color I spent a few years primarily as a colorist and I quickly realized how awesome a dedicated app like Color was, and how limited the built-in correction tools in NLEs were. Today if I have a quick and dirty grading job I’ll do it in the NLE using some Magic Bullet plugins but if I have a ‘real’ grading job I’ll do it in Resolve. After putting in the miles to learn apps like Color and Resolve the thought of doing intensive color work in an NLE makes my skin crawl.”

    If there were a true non-linear workflow (read: no round-tripping), you could do quick and dirty grading in a proper color environment, too, if you wanted. There’d be no penalty for going “out of order.”

    I shouldn’t have accepted the premise of your question before, because a common data model across apps isn’t just about collaboration with separate people. It’s about removing the speedbumps in our workflows, letting artists use any tool at any time in the process.

    Walter Soyka
    Designer & Mad Scientist at Keen Live [link]
    Motion Graphics, Widescreen Events, Presentation Design, and Consulting
    @keenlive   |   RenderBreak [blog]   |   Profile [LinkedIn]

  • Andrew Kimery

    July 31, 2015 at 4:56 pm

    [Walter Soyka] “If there were a true non-linear workflow (read: no round-tripping), you could do quick and dirty grading in a proper color environment, too, if you wanted. There’d be no penalty for going “out of order.””

    I was using CC2014 so I could’ve sent the timeline to SG if I wanted to, but it was still much quicker/easier just to drop on Red Giant’s Colorista II and/or Mojo filter, make a couple of small adjustments and move on. In this particular case just tweaking the plugins’ presets got me to where I wanted to be, and trying to build the same thing from scratch would’ve been more time consuming.

    I had to grab something off the rack because the client didn’t have the time/budget for me to make it from scratch. 😉

  • Andrew Kimery

    July 31, 2015 at 5:47 pm

    [Walter Soyka] “There are tools and methodologies outside of our industry for dealing with exactly these issues. For example, software development happens across a team in parallel, with source control, check-in and check-out, and tests to find and resolve conflicts. “

    But how many of those methodologies used outside our industry are applicable to our industry? With software development, how many people are working on the exact same piece of code at the exact same time? I was thinking about the ease with which Google Docs works, but working collaboratively with text documents isn’t really analogous to working collaboratively with audio/visual media in a post production environment.

    I think it was in Resolve 11 that they introduced a feature where editing and grading could be done concurrently. I never heard anyone talk about it, so I have no idea how it works (or how well it works). If it’s a check-in/check-out system then it would probably work okay, because if there’s a dedicated colorist on the project, and we are working in parallel, then I’m probably not going to waste time doing any rough grading during the edit. If it’s a live updating system then, as an editor, it would drive me insane to see the footage I’m trying to cut getting graded right in front of me.

    GFX/VFX can work in parallel too because, much like grading, the editor isn’t going to be creating and iterating the final GFX. A rough temp or title card might be dropped in as a placeholder but that’s it.

    Audio is a whole different kettle of fish though. Picture editing cannot be divorced from sound editing so there will always be a ton of conflicts to resolve if an editor and mixer are trying to work in parallel too early on. If parts of the process had to be accelerated due to a looming deadline I’d much rather start the picture finishing early and hold off on audio finishing until the latest possible moment.

    [Walter Soyka] “Let’s talk about “bloat.” What does that mean to you?”

    To me bloat can be different things that may or may not be related. Bloat can be used to describe a UI that looks out of control, a program that feels slow/sluggish (due to unoptimized/excessive code), and/or bloat can be used to describe feature creep. I think many times bloat starts with feature creep which then leads to UI and under-the-hood problems. Of course one man’s bloated tool might be another man’s all-in-one super app.

    I think bloat, like porn, is easier to recognize than to define. I think the difference between adding functionality and adding bloat is the difference between keeping your target user in sight vs getting lost in the weeds. Mindfully adding useful functionality vs adding features might be another way to look at it. Good recent examples I think could be 3D text in X and Lumetri in PPro. A full time animator or colorist might find these tools too limiting, but for someone that wears multiple hats they could be a good balance of accessibility and functionality.

    [Walter Soyka] “Apple has done a really good job of not showing you functionality you don’t need. FCPX does a lot, but the UX is so smooth that a lot of people still underestimate its capabilities as iMovie Pro. Is FCPX bloated?”

    I can’t say because I’ve barely touched X, but my guess would be ‘no’. I think bloat usually happens to a much more mature product, and part of it is probably tied to trying to increase perceived value so people will keep buying new versions of the software. Given Apple’s current model of giving away software for free (or for a pretty low, one-time price), I don’t think we have to worry about Apple over-adding features to entice people to upgrade.

  • Walter Soyka

    July 31, 2015 at 6:22 pm

    [Andrew Kimery] “If it’s a live updating system then, as an editor, it would drive me insane to see the footage I’m trying to cut getting graded right in front of me.”

    Yeah, I think that’d be a disaster.

    Think more about the old Final Cut Server ideal; imagine you, the editor, could push out shots to other departments. Have a great clip with bad audio? Want to see if it can be salvaged? Send it to audio. C-stand ruining your day? Push it to VFX.

    Or, if you’re the one man band type, do it all yourself, in any order you like, and keep the flexibility to make changes without having to back all the way up to the beginning of the linear workflow and push it through again. Computers can and should be doing that bit for us.
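    That push-a-shot-to-the-right-department idea could be as simple as a routing table from flagged issues to per-department work queues. The department names and issue flags below are invented for illustration:

```python
# Sketch of routing flagged shots to departments, in the spirit of
# the Final Cut Server ideal described above. Department names and
# the issue->department mapping are invented for illustration.
ROUTES = {
    "bad_audio": "audio",
    "rig_in_shot": "vfx",   # e.g. a C-stand ruining your day
    "needs_grade": "color",
}

def push_out(shots):
    """Group flagged shots into per-department work queues."""
    queues = {}
    for shot in shots:
        for issue in shot.get("issues", []):
            dept = ROUTES.get(issue)
            if dept:
                queues.setdefault(dept, []).append(shot["name"])
    return queues

queues = push_out([
    {"name": "sc01_t03", "issues": ["bad_audio"]},
    {"name": "sc02_t01", "issues": ["rig_in_shot", "needs_grade"]},
])
print(queues)
# {'audio': ['sc01_t03'], 'vfx': ['sc02_t01'], 'color': ['sc02_t01']}
```

    A one-man band would just be every queue's worker; the routing, and the bookkeeping of what has changed since, is the part the computer should be doing for us.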

    Walter Soyka
    Designer & Mad Scientist at Keen Live [link]
    Motion Graphics, Widescreen Events, Presentation Design, and Consulting
    @keenlive   |   RenderBreak [blog]   |   Profile [LinkedIn]

  • Andrew Kimery

    July 31, 2015 at 7:05 pm

    [Walter Soyka] “Think more about the old Final Cut Server ideal; imagine you, the editor, could push out shots to other departments. Have a great clip with bad audio? Want to see if it can be salvaged? Send it to audio. C-stand ruining your day? Push it to VFX.”

    I can certainly see the benefits of simplifying the I/O process between apps. When I mainly colored, my dream was to be able to work in the same timeline that the editors cut in. Not necessarily to work in parallel with them, but just to be able to avoid the FCP-to-Color-back-to-FCP-with-new-media hassle. I was like, “Why can’t what I do in Color just apply like a filter vs rendering out new media?” Now we have that relationship between PPro and SG, and Resolve offering a parallel workflow (which, as previously mentioned, I know nothing about).

    [Walter Soyka] “Or, if you’re the one man band type, do it all yourself, in any order you like, and keep the flexibility to make changes without having to back all the way up to the beginning of the linear workflow and push it through again. Computers can and should be doing that bit for us.”

    If you are a one-man-band type then you are probably more inclined to keep it all in the NLE and just buy plugins to fill the specific holes you need filled. Many people just need some milk, not the whole cow. 😉

    Somewhat recently I worked on a historical doc in Avid and it was a royal PITA because we had a lot (I mean a lot) of stills, and Avid is horrible with stills. Just unbearably horrible. We used AE to do moves on the stills, but it felt like using a sledgehammer to swat flies since in any other major NLE we could have just imported the stills at full res and done the moves inside the NLE.
