Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

NLEs, DAWs, Tracks and Audio-centric Workflows — Continuing the Conversation…

  • David Lawrence

    September 30, 2011 at 3:45 am

    [Andrew Richards] “I maintain that AVFoundation forces nothing on the UI. The UI is 100% abstracted from the OS frameworks that handle the actual bit-laying. Walter Soyka! Back me up!”

    LOL, Andrew, I’m in total agreement with you and Walter on this. Sorry if my point wasn’t clear. UI is always 100% abstraction. It can be anything the UI designer wants it to be. That’s why many of the UI decisions in FCPX are so baffling. For me, they only start making sense if you imagine that the FCPX UI was designed by the engineers. The constraints, inconsistencies and mental models in the UI feel like they’re driven by an engineering data model, not an understanding of users.

    That said, I do wonder how much of the UI is baked in. Why, for example, is their solution for adding transitions to connected clips to turn them into secondary storylines? My hunch is that the data model demands it.

    _______________________
    David Lawrence
    art~media~design~research
    propaganda.com
    publicmattersgroup.com
    facebook.com/dlawrence
    twitter.com/dhl

  • Michael Gissing

    September 30, 2011 at 3:53 am

    [Jeremy Garchow] “I think that audio and video are very different, and have very different philosophies, and of course are extremely complimentary. Certainly, they can both motivate each other.”

    I have spoken to software developers about the idea of using video composite layer techniques in audio. Straight opacity is a bit like mixing in audio terms, but key, difference, and luma-type vision mixing are different. Similarly, audio mixing might blend sound based on dynamics or frequencies.

    A typical one-hour doco project for me will have around 5,000 clips, so the ratio of audio to picture elements is hugely different. The other great difference is data management. I have a sound effects library integrated in the Fairlight that manages 30,000+ sound clips. We don’t need metadata, however, as clip names are descriptive: standard word searches with +/- delimiters enable quick search and inline auditioning on the track before a simple Enter to paste.
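    In rough terms, that kind of +/- name search is just keyword filtering over clip names. A hypothetical sketch (purely illustrative, nothing like Fairlight's actual implementation):

```python
# Hypothetical sketch of a +/- keyword search over clip names
# (illustrative only, not Fairlight's actual search logic).

def search_clips(clip_names, query):
    """Filter clip names: +term must appear, -term must not."""
    required = [t[1:].lower() for t in query.split() if t.startswith("+")]
    excluded = [t[1:].lower() for t in query.split() if t.startswith("-")]
    results = []
    for name in clip_names:
        lowered = name.lower()
        if all(t in lowered for t in required) and not any(t in lowered for t in excluded):
            results.append(name)
    return results

library = ["door slam heavy wood", "door creak old hinge", "car door close"]
print(search_clips(library, "+door -car"))
# ['door slam heavy wood', 'door creak old hinge']
```

    Because the names themselves carry the description, no separate metadata fields are needed for this kind of lookup.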

    If FCP X can use metadata to do similar manipulation, then editing should be a much nicer process.

  • Andrew Richards

    September 30, 2011 at 4:34 am

    [David Lawrence] “That’s why many of the UI decisions in FCPX are so baffling. For me, they only start making sense if you imagine that the FCPX UI was designed by the engineers. The constraints, inconsistencies and mental models in the UI feel like they’re driven by an engineering data model, not an understanding of users.”

    Yeah, the thing is, I do think they were modeling the UI after a particular data model, just not one made mandatory by the structures of the underlying media-handling frameworks. As we’ve discussed at length, they anchor clips to other clips rather than to a time frame. Time is kept, but isn’t the structure for keeping track of what goes where. Time is just a meter for the music, if you’ll pardon my hijacking of the metaphor you opened this thread with.

    I think the broad idea was to find a way to capture a more explicit expression of editorial intent. In an open tracked timeline, intent is all in the mind of the user. As far as the software is concerned, that lower third only happens to sit atop the right bit of talking head. That music just happens to come in at the right beat in the action above it. Tracks 1 and 2 are dialog, 3 and 4 are music, and 5 and 6 are SFX, but the NLE doesn’t know that.

    In the magnetic timeline, the software is actually told which clips are married to other clips. I think this is all born out of a philosophy that the software should try to glean as much actionable metadata as it can without asking the user to make ancillary inputs strictly for the sake of metadata. They can’t eliminate all manual entry of metadata, but they can limit it to what’s necessary, and once metadata is ubiquitous, they can start automating things.
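    As a concrete illustration of clips anchored to clips rather than to absolute time, here is a toy sketch (entirely my own invention, not Apple's actual data model): a connected clip stores only an offset from its parent, so moving the parent carries the connection along.

```python
# Toy sketch of "clips anchored to clips" rather than to absolute time
# (my own illustration; not Apple's actual FCPX data model).

class Clip:
    def __init__(self, name, start, duration, role=None):
        self.name = name
        self.start = start        # absolute only for primary-storyline clips
        self.duration = duration
        self.role = role          # explicit metadata tag, e.g. "titles"
        self.connections = []     # (child_clip, offset_from_this_clip)

    def connect(self, child, offset):
        self.connections.append((child, offset))

    def resolve(self, base=None):
        """Compute absolute positions: connected clips follow their parent."""
        start = self.start if base is None else base
        placed = {self.name: start}
        for child, offset in self.connections:
            placed.update(child.resolve(start + offset))
        return placed

talking_head = Clip("talking_head", start=10.0, duration=8.0, role="video")
lower_third = Clip("lower_third", start=0, duration=4.0, role="titles")
talking_head.connect(lower_third, offset=1.5)

print(talking_head.resolve())   # {'talking_head': 10.0, 'lower_third': 11.5}
talking_head.start = 20.0       # move the parent; the connection holds
print(talking_head.resolve())   # {'talking_head': 20.0, 'lower_third': 21.5}
```

    Time is still computed at the end, but it falls out of the connections; it isn't the organizing structure.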

    I also agree this all reeks of engineering, and that is probably why it appeals to me so much. From an engineer’s perspective, everything is input/output, and you can satisfy all I/O requirements with metadata. Roles are an excellent example: they get you to the same ends as audio track conventions. Users, on the other hand, are concerned with technique as much as they are with I/O (if not more so). The I/O needs to be there to get the job done, but the technique is craft, and craft is sacred.

    I imagine the evolution like this: they wanted to capture explicit editorial intent, so they think up clip connections. These could work on tracks, but they make collisions much more difficult to manage while maintaining track roles if you have a stack of staggered connected clips you want to move in concert. So get rid of the tracks! OK, but what about the roles we conventionally assign to tracks like DME? Explicit metadata tags! So now we can capture explicit intent, prevent collisions while moving these stacks of explicitly connected clips, capture more explicit intent in the form of roles metadata, and then use that to route output. Cut! Check the gate!
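    To make the "use roles to route output" step concrete, a toy sketch (role and clip names invented for the example) of grouping clips into output stems by role metadata instead of by track position:

```python
# Toy illustration of routing output stems by role metadata rather than
# by track convention (all names here are invented for the example).
from collections import defaultdict

clips = [
    {"name": "interview_a", "role": "dialogue"},
    {"name": "score_cue_1", "role": "music"},
    {"name": "door_slam",   "role": "effects"},
    {"name": "interview_b", "role": "dialogue"},
]

stems = defaultdict(list)
for clip in clips:
    stems[clip["role"]].append(clip["name"])

print(dict(stems))
# {'dialogue': ['interview_a', 'interview_b'],
#  'music': ['score_cue_1'], 'effects': ['door_slam']}
```

    The DME convention survives, but as explicit tags the software can act on rather than an unwritten agreement about track numbers.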

    I have no ideas for how capturing all this explicit intent might be exploited for additional functionality, but that’s what it looks like they’re aiming for.

    Best,
    Andy

  • Andrew Richards

    September 30, 2011 at 4:37 am

    [Michael Gissing] “We don’t need meta data however as clip names are descriptive so standard word searches with + – delineators enabling quick search and inline auditioning on the track before a simple Enter to paste. “

    Well, technically, clip names are metadata.

    Best,
    Andy

  • David Lawrence

    September 30, 2011 at 4:37 am

    [Michael Gissing] “I have spoken to software developers about the idea of using video composite layer techniques in audio. Straight opacity is a bit like mixing in audio terms but key, difference, luma type vision mixing is different. Similarly audio mixing might be blending sound based on dynamics or frequencies in audio. “

    Michael, thanks for bringing up the Fairlight and its innovative UI for audio. This is the closest thing I’ve seen that uses layers as an approach to audio compositing. I’ve played with it a bit but never really got deep into it. It seems interesting:

    https://www.audiofile-engineering.com/waveeditor/

    Also, you spoke about the role of tracks. I’m also curious what you think about ripple-mode for edits. From your POV as an audio post specialist, could you do your job if your tools only operated in ripple mode?

    _______________________
    David Lawrence
    art~media~design~research
    propaganda.com
    publicmattersgroup.com
    facebook.com/dlawrence
    twitter.com/dhl

  • Franz Bieberkopf

    September 30, 2011 at 4:38 am

    David,

    … a great theme for discussion.

    Is there a short summary? The thread you link to is very long and meandering, with lots of cross-topics. I couldn’t find what might be the start of this.

    But I wanted to chime in first to suggest that musical composition is more an analog of editing (rather than a metaphor).

    It has been my long sad lament that editing software is primarily viewed as a visual realm (by both designers and users). (Already in this thread it’s been shunted in that direction.)

    I see no reason not to expect most of the functionality of a DAW in an NLE.

    Franz.

  • Michael Gissing

    September 30, 2011 at 4:46 am

    [Andrew Richards] “Well, technically, clip names are metadata.”

    My point was that in tapeless video, clip names are useless as descriptors of content, so metadata is added. Audio libraries use the name as the descriptor, similar to log & capture using a name descriptor as the file name. You can read this data without digging into the file headers to read additional (meta)data.

    Metadata is essential in a tapeless camera world, and I would love to see the metadata descriptor transferred via an OMF rather than the clip name.

  • Michael Gissing

    September 30, 2011 at 4:52 am

    [Franz Bieberkopf] “I see no reason not to expect most of the functionality of a DAW in an NLE.”

    Particularly as DAWs like Fairlight can import FCP7 XMLs and can do basic cuts-and-dissolve video edits. You can even edit H.264 and MPEG-2 natively in Fairlight.

  • Andrew Richards

    September 30, 2011 at 4:55 am

    [Michael Gissing] “My point was that in tapeless video, clip names are useless as a descriptor of content so meta data is added. Audio libraries use the name as the descriptor similar to log & capture using a name descriptor as the file name. You can read this data without digging into the file headers to read additional (meta) data.”

    It’s an extra step, but tapeless video clips can be systematically renamed to suit what you’d like to see. FCP7 offered a similar capability during Log & Transfer. FCPX lets you do it at any time. It sounds like it would be well worth the effort if it aids the post pipeline downstream from the editor.

    Best,
    Andy

  • David Lawrence

    September 30, 2011 at 4:56 am

    [Andrew Richards] “I think the broad idea was to find a way to capture a more explicit expression of editorial intent. In an open tracked timeline, intent is all in the mind of the user. As far as the software is concerned, that lower third only happens to sit atop the right bit of talking head. That music just happens to come in at the right beat in the action above it. Tracks 1 and 2 are dialog, 3 and 4 are music, and 5 and 6 are SFX, but the NLE doesn’t know that.”

    I guess it depends on how we define “editorial intent”. I would argue that editorial intent is explicit and intrinsic to spatial positioning in the timeline. When I look at a timeline, I read editorial intent like a musician reading sheet music.

    The idea that the software might understand the edit is interesting, but I’m hard pressed to think of any examples where it would be useful. Can you describe an example of the potential value?

    _______________________
    David Lawrence
    art~media~design~research
    propaganda.com
    publicmattersgroup.com
    facebook.com/dlawrence
    twitter.com/dhl
