Adobe, Avid, and Blackmagic Design: The Big 3 NLEs at NAB Show 2019

At NAB Show 2019 we saw some of the most anticipated releases in video editing software history.

Perhaps the most celebrated is Avid Media Composer’s new user interface, coming at the end of May with a release that also includes a 32-bit full-float color pipeline, ACES support, and other improvements for finishing.

A refreshed user experience has been at the top of many users’ feature request lists for years, and after nearly six years of Avid Customer Association feedback and a handful of annual ACA votes asking users to rank their needs, a new UI has emerged. This was undoubtedly no small task, as Avid had to tear down a 25-year foundation and create an experience that would satisfy both new and veteran users.

Avid’s facelift also comes with shiny new code that makes performance faster than before, sloughing off the excess and cutting down time wasted loading projects, importing, and exporting. Avid now offers background rendering and transcoding, as well as live timelines with edit-during-playback.

Adobe and Blackmagic Design have done the same with Premiere Pro and Resolve 16 respectively, announcing huge innovative leaps into AI and machine learning. Their newest features alleviate common tedious editing tasks while boasting faster, more efficient GPU performance.

On the ground at Avid Connect and NAB Show in Las Vegas, NV

Rob D’Amico – Director of Product Marketing for Audio & Video Solutions

CreativeCOW: Everyone is buzzing about the new UI. Can you talk about what’s new for Media Composer 2019?

Rob D’Amico: Out of the gate with Media Composer, it’s all about a better user experience with a new UI, removing some of the clutter and having that panel view so you can move around really fluidly and easily. There are the task-based workspaces and being able to rearrange your layout quickly based on what part of production you’re in, for audio mixing versus editing and effects versus grading and coloring.

Media Composer 2019 Edit Workspace
Media Composer 2019 Audio Workspace
Bin map view located top right

We’ve rebuilt our engine to support a 32-bit floating-point color pipeline, so when round-tripping with other applications you can have confidence that your color won’t get clipped and your file support is adhered to. We’re also going to natively support OP1a, so you can drop in the media files and they’ll show up in your Media Tool [instead of using AMA Linking].
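[Editor’s Note: To see why a float pipeline matters for round-tripping, here is a minimal numpy sketch of the clipping problem. It is not Avid’s implementation, just a toy illustration: an over-range highlight survives in 32-bit float but is permanently lost once quantized to 8-bit integer.]

```python
import numpy as np

# A scene-linear highlight brighter than "display white" (value > 1.0).
highlight = np.float32(1.8)

# 8-bit integer pipeline: values get scaled to 0-255 and clamped, so
# anything over 1.0 is clipped and that detail is gone for good.
clipped = np.clip(np.round(highlight * 255), 0, 255).astype(np.uint8)
print(clipped / 255.0)               # 1.0 -- highlight detail lost

# 32-bit float pipeline: the over-range value is carried through intact,
# so a later grade can still pull the highlight back into range.
print(highlight * np.float32(0.5))   # 0.9 -- detail preserved
```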

We’ve also worked closely with The Academy to support the ACES color space natively right within Media Composer. They’re creating a new Editorial Finishing category based on the work that we did to support that color space. And as far as the work that we’re doing for delivery, we now support exporting IMF packages directly out of Media Composer, and importing them as well, for a complete round-trip workflow.

How will you be able to measure when people adopt the new user interface and where do you expect that usage to be in the next couple of years? When are you hoping for users to switch over? 

One of the things that we did leading up to this new UI is a direct reflection of customer feedback. We have Avid’s ACA, the Avid Customer Association. We worked directly with many members of the ACA, getting their feedback early on, even in the wireframe modelling of this new UI, all the way through the alpha program and continuing into the beta program.

So to answer your question, this isn’t a one-and-done update. Once we release at the end of May, this update is going to continue throughout the year to make sure that we’re meeting the constant needs of our existing customers and also new customers.

We want to make sure the next generation editor feels comfortable coming over to a Media Composer or working in a Media Composer that feels modern. We don’t want to be known as that dinosaur video editor. We’re the leader in what we created there and we’re going to continue that innovation.

It’s been on the mind of everybody here. We have a long history of editors using Media Composer, grew up on Media Composer, and don’t really like change. They’ve been using a tool, they know how to use it, don’t change it. That’s why it was so important to get their input in a lot of this.

Media Composer 2019’s release at the end of May, along with Media Composer | Distributed Processing, goes beyond background rendering and transcoding, allowing users to completely offload processor-intensive tasks to other machines on their network, either idle or dedicated. And the newest member of the family, Media Composer | Enterprise, allows you to adapt all of Media Composer’s capabilities for different roles in your organization.

Steve Forde – General Manager, Emerging Products, Digital Video and Audio

COW: What are the most exciting Adobe features coming out at the show?

Steve Forde: Content Aware Fill is probably the most popular one. It’s gotten probably the biggest response of everything out of these releases. Basically, Content Aware Fill comes from Photoshop, where it appeared around five or six years ago, and there it’s oriented around still-image manipulation. We always wanted to figure out how we could do this for video.

Say you want to remove microphones or wires or rigs, something that came into a shot and kept it from being perfect. Instead of reshooting, which is expensive and time consuming, Content Aware Fill uses machine learning and AI to remove an object frame by frame using the surrounding pixels. That’s where the AI comes in: it understands those pixels contextually to get a clean removal of the object.
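[Editor’s Note: For a rough sense of the per-frame idea, here is a small sketch using classical inpainting from OpenCV. It is not Adobe’s algorithm, and the file names and mask are hypothetical; Content Aware Fill layers machine learning and temporal context on top of this kind of fill-from-surrounding-pixels approach.]

```python
import cv2

# Hypothetical inputs: a clip with a boom mic in frame, plus a still mask
# (white where the unwanted object is, same dimensions as the frames).
cap = cv2.VideoCapture("shot_with_boom_mic.mp4")
mask = cv2.imread("boom_mic_mask.png", cv2.IMREAD_GRAYSCALE)

writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Classical inpainting fills the masked region from surrounding pixels,
    # one frame at a time. Content Aware Fill adds ML and temporal context
    # on top of the same basic fill-from-neighbors idea.
    cleaned = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
    if writer is None:
        height, width = cleaned.shape[:2]
        writer = cv2.VideoWriter("shot_cleaned.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 24.0, (width, height))
    writer.write(cleaned)

cap.release()
if writer is not None:
    writer.release()
```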

Somebody made a video just yesterday using Friends footage from the TV show where, in four and a half minutes, they took complete characters out of the scene. That can take days, right? Just days of going frame by frame by frame and trying to do a mask track and it’s hard to do. So that’s why we’re getting such a positive response with that feature. [Editor’s Note: I hope they took out Ross.]

We’ve also provided new features for expressions, specifically things you’d expect from a code editor like color coding and organization, so it’s easier to understand where to find a bug. But I think the biggest thing that I was excited about with expressions is just how fast they are. First and foremost, when rendering and previewing in After Effects, having the expressions process faster allows me to see my result faster. So that makes me not just more productive, it actually makes me a better creative. And at the same time the export goes out faster as well.

New Expressions Editor in After Effects

Expressions can be hard to wrap your head around when you’re less familiar. Is there anything in the pipeline for making expressions more accessible to everyone? Maybe having a built-in expression generator, or something to that effect?

That’s a good question. I think especially with the motion graphic templates, it’s a technology that’s really popular in terms of authoring in After Effects. And a lot of the things that we’ve looked at are not just how we make it easier to author motion graphics templates, but also how you would achieve that with scripting in expressions.

So, to really amp that up you have to look at a lot of user expressions, and you need a pretty hefty skill set to understand how to use them. By exposing that to a broader set of users through a cleaner, more forward-facing user experience, we think there’s a lot of potential. So there is a big investment in that. It’s something that’s a big focus for the team.

I know that Blackmagic Design just announced predictive clip trimming AI. Is that something you all are looking at, helping the cut and edit process? Speed it up, have it guess your trends and help you in that process? 

We have one fundamental model for what we do with all of our AI and machine learning: what are the tedious tasks? What are the things that burn a lot of time and don’t get you to the creative work you’re trying to get to?

We’ve done those things with Adobe Sensei technology in the past because they were really hard to do. Content Aware Fill, again, is really hard to do. You could do it before we shipped this feature, but it was so time consuming. It’s not that you couldn’t get to a great result before, but this is where we’re focusing AI and machine learning algorithms to really try to take that tediousness away. And I think that’s the moral of the story for any machine learning and AI technology.

On this release we were able to take things like mask tracking in Premiere Pro and make it run 38 times faster than the previous release, on the same hardware. And we’re able to use multiple GPUs that you could actually plug into a laptop from an external chassis.

Adobe Premiere Pro’s April 2019 release (version 13.1.2) is out now. Check out the list of features here, including ruler and guide setups, faster mask tracking, file support for Sony VENICE V3, hardware decoding improvements for H.264 and HEVC on macOS, and more.

Dan May – President of Blackmagic Design

What’s the rundown of the new Resolve 16?

Dan May: We’ve never really created something from the ground up that had never been tried before. And we’ve been filling out our editing solution and talking to editors out there, and one thing we kept running into was that people wanted things to go faster. How do I make things go faster?

And the challenge became that any time we thought about doing something, our concern was that if we changed the edit page, it would really screw with people’s editing capability, because all those pages are culturally so important to the people that work in them. So, that’s really why we created a new cut page, and that’s where we see those features like the ability to do multi-point edit, insert to timeline, all of the functionality that we’ve added in there. The AI is trying to take a best guess based on what you’ve already cut and put that in the timeline for you.

DaVinci Resolve 16 Cut Page

So whether I’ve been on a big shoot all day and I’m just getting on the train and want to do a quick edit on the ride home, or whether I’m some thirteen-year-old kid who did a bunch of streaming stuff and wants to cut it up really quickly: how can we make that useful to everybody? But also without forcing it on them, saying you have to do it this particular way.

We’ve also added facial recognition to the AI. You’ve got a five-actor shoot, and after defining who your actors are, if you add 20 more shots the AI automatically puts them into bins for you. The AI works in the same way for smart insert, masking, and object removal.
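[Editor’s Note: As a rough analog of that auto-binning, here is a short sketch using the open-source face_recognition library to sort shots into per-actor groups from a single poster frame each. The file paths and actor names are made up, and this is not Blackmagic’s Neural Engine, just the general face-matching idea.]

```python
import face_recognition

# Hypothetical inputs: one reference still per actor, plus a poster frame
# exported from each new shot.
actor_refs = {"Actor_A": "refs/actor_a.jpg", "Actor_B": "refs/actor_b.jpg"}
shot_frames = ["frames/shot_021.jpg", "frames/shot_022.jpg"]

# Encode each reference face once.
known = {}
for name, path in actor_refs.items():
    encodings = face_recognition.face_encodings(face_recognition.load_image_file(path))
    if encodings:
        known[name] = encodings[0]

# Sort each shot into a per-actor "bin" based on which known faces appear in it.
bins = {name: [] for name in known}
for frame_path in shot_frames:
    frame = face_recognition.load_image_file(frame_path)
    for encoding in face_recognition.face_encodings(frame):
        for name, reference in known.items():
            if face_recognition.compare_faces([reference], encoding, tolerance=0.6)[0]:
                bins[name].append(frame_path)

print(bins)
```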

The source tape is a new feature allowing you to see the entirety of the timeline. If I want to go to the front, I just click on the front of the source tape, and so on. Below that I have the zoomed-in timeline, which is going to let me do all my finesse work, and I can scroll through.

Dual Timeline, left; Smart Insert, right

It seems that Adobe announced their versions of facial recognition and object removal this year as well. Was this something that was in huge demand in your market?

Yeah. Obviously we’re talking to some of the same customer bases that are out there, and we heard those same ideas about speed. So anything you can do to speed up that process helps, even if it’s not a perfect process, because you can then manipulate it and adjust it. That’s the whole benefit of AI. For some people it will do a great job; for others it will do a good job, but now they can go in and manipulate it further. It’s just those tools that make you more efficient.

Who does it help? I hope it helps all of them. I hope it helps the high end, I hope it helps the mid-level folks that are doing their own projects, and I hope it helps people that are coming to the table for the first time to see the benefits of what this technology has to offer.

On that note of making it accessible to everybody, is there any talk about having settings within the software that would take certain features away depending on your role or experience level with the software?

One of the big concerns we have about that is some of these labels. What does it actually mean that you’re a new person? Or that you’re an experienced person? Let’s say you’re a student filmmaker; well, we don’t want to make presumptions that you either don’t use certain functions or don’t need them. The other theory is that if someone’s just starting out, they might be missing out on some of the incredible tool sets that are available to them.

So the fear is you don’t want to overwhelm them, but we want them to know that those tools are there. We get worried about limiting people’s awareness of our tools, but it’s been a robust conversation internally that we have on the regular.

What kind of target consumer are you hoping will buy into the new keyboard you’ve released this year?

When we look at ancillary devices, the people that buy them are usually doing a repetitive action, over and over and over again. If you are doing it every day, you’re trying to minimize those motions to be as quick as possible. So, that’s where being able to actually edit with two hands comes into play.

I think small post houses, where you have specific role-driven workflows, so it’s not necessarily one person taking the project from start to finish. Because when you’re working in those environments, that’s where time is really a sensitive thing. You’re handing those projects off to other people who then work on them.

Grant Petty unveils the new editing keyboard.

Blackmagic Design DaVinci Resolve 16 is available for download now as a beta. Check out their list of features, including the DaVinci Resolve Neural Engine for AI and deep learning features that detect faces, auto-create bins, auto-color, shot-match, and more; a new dual timeline for editing and trimming without zooming and scrolling; and intelligent edit modes to auto-sync clips and edit.
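[Editor’s Note: Resolve also ships with a documented Python scripting API, which gives a feel for where those auto-created bins live in the media pool. The sketch below assumes it is run from Resolve’s built-in console (Workspace > Console) so the bundled DaVinciResolveScript module is on the path; the actor names are hypothetical, and the Neural Engine creates bins like these automatically rather than by script.]

```python
# Run from DaVinci Resolve's built-in console (Workspace > Console), where
# the bundled DaVinciResolveScript module is already on the Python path.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()
root_folder = media_pool.GetRootFolder()

# Create one bin per actor. Resolve 16's Neural Engine populates bins like
# these automatically after face detection; this just shows where they sit
# in the media pool.
for actor in ["Actor_A", "Actor_B", "Actor_C"]:
    media_pool.AddSubFolder(root_folder, actor)
```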

The COW Takeaway

The future of Media Composer’s new user interface is bright, though time will tell how it integrates into existing users’ daily lives. My time with attendees at Avid Connect 2019 (and Avid’s wild after party Saturday night) showed me upbeat enthusiasm, or at least a cautious willingness to try it out. Compared to the general response to the concept of a new user experience a couple of years ago, this bodes well for uptake in the community.

At the Wynn in Las Vegas, Avid Connect offered hands-on training with the new UI. There were a few expected kinks, but the usability and intuitiveness were finally present in the software. It seemed that few, if any, of Media Composer’s previous features were sacrificed, while the new UI provides a platform to expand the software’s capability. Within 10 to 20 minutes, I was more excited to be an Avid user than I have been since learning Media Composer five years ago.

Purple lights and giant timelines at Avid Connect 2019

A roadblock for Media Composer, at least in my part of the industry, is the lack of a dual-install option, leaving no way to learn (or try) the new UI without jumping in head first. This is a huge potential downside for small businesses and large production houses considering the switch. A lot of us are ready and willing, but our colleagues or stakeholders aren’t ready for something new. All in all, this change is long overdue and a very welcome announcement to many. Time will tell if Media Composer’s loyal, albeit stubborn, consumer base will embrace the new UI or let it sit idle, as there are no plans yet to force the update.

While we wait for Media Composer’s large strides to catch up, we see huge trends in AI and machine learning from Adobe and Blackmagic Design with shot matching, facial recognition, object removal, clip trim prediction and more. These features are truly groundbreaking for editors like me who spend hours of time we don’t have masking a logo that’s burnt into footage, or scouring bins for a shot that doesn’t include the band member who just left the group. Your clients and creative directors don’t understand how long that takes, but you do – and those hours count. 

However, not everyone is as “stoked” on AI and machine learning as I am. Many are worried that eliminating tedious tasks will lead to eliminating tedious jobs entirely from our industry, shrinking the job market, despite many industry luminaries arguing just the opposite. Whichever side you stand on, the ultimate goal is giving editors, filmmakers, and motion graphics designers more time to focus on creativity. Working smarter, not harder, is the key to growth. And that’s something we can all agree on.

With another exciting year of releases at NAB, we look back comparing last year’s theme of accessibility to this year’s theme of innovation, and can’t wait to see what happens next. This year’s NLEs have pushed us further down the path where “fast, good, cheap – pick two” is increasingly an anachronism for our software. Not only are NLEs increasingly making us faster while utilizing our existing hardware at no additional cost, they’re allowing editors of all types to be stronger than ever.

Hillary was also featured as a guest on NAB Show Live as part of the Gals N Gear programming, discussing her role and how trends are affecting it.
