Avengers: Endgame – Creating the Last Battleground
by Hillary Lewis

Avengers: Endgame hit theaters on April 26th and went on to break some of the biggest box office records in various markets, setting the record for highest opening-weekend gross worldwide as well as in the US, Canada, and more than 40 other markets including Australia, Brazil, China, Egypt, Mexico, and the United Kingdom. It now sits as the second-highest-grossing film of all time, behind only James Cameron’s Avatar.

Add a CinemaScore grade of A from audiences and a 94% Fresh rating from RottenTomatoes.com, and the film’s popularity has far surpassed that of its predecessor, Avengers: Infinity War.

The Russo brothers made the seemingly impossible happen in Endgame, with bold narrative choices that wrapped up a 22-film arc and brought every major hero together for one epic battle sequence. We see the heroes impossibly reunited after being blipped out of existence by the Infinity Stones, fighting together to defeat Thanos. Victory did not come without great cost: Marvel fans watched the demise of several legendary characters in the Marvel universe.

With more superheroes brought together in Endgame than in any previous installment, Weta Digital was given a huge task. Creative COW’s Hillary Lewis sat down again with Weta Digital VFX Supervisor Matt Aitken to discuss how his team devised a workflow to uphold the CG integrity of so many characters and their powers in one epic battle.

Matt Aitken, Weta Digital

Did you have more VFX artists than on Infinity War to pull off the final battle?

We had a significantly bigger crew this time. Partly because there were more shots: we did a bigger chunk of this movie than of Infinity War. That was just the way Marvel allocated the work across its various facilities.

And partly because the post schedule was slightly shorter. They ended up shooting most of the end battle sequence in a block of additional photography scheduled for September/October of last year, so that material didn’t really get turned over to us until the end of 2018, beginning of 2019.

So it was a slightly more compressed schedule with more shots. We were roughly 400 people on Infinity War and ended up at 600 people for Endgame.

When Thanos made his debut in Avengers: Infinity War, he became one of the most expressive CGI/motion capture characters in Marvel history, and, in fact, cinema history. With Endgame, you’ve further expanded upon his facial system to make him more expressive than ever. What were the steps you took to achieve this?

He was pretty successful in Infinity War, but there’s always more that we can do.

One thing we felt while working shots for Infinity War was that we hit a wall on certain parts of the range of facial expression, in particular around the corners of the mouth. We felt the rig we had built didn’t allow us a full range of expression, and we had to patch certain frames of his facial work.

Once Infinity War wrapped and we knew we would be working on Endgame, we took the opportunity to do that remedial work and had the facial modeling team sculpt out some of the shapes that were missing there.

We are constantly developing our facial animation capabilities, and there was some new tech that came out of that work, which we call ‘deep shapes’. It’s a way of adding fine-level detail to the transitions from one facial expression to the next. It’s like another fine octave of shape work, and it doesn’t affect the start or the end of the transition.

So we still have complete control over the shape of the face and the facial performances. But what we get with deep shapes is added complexity: a sense of inertia in the mesh of facial tissue as the facial muscles fire and the shape of the face changes.

We know it’s not something you read consciously, but it adds that extra level of detail and a more natural believability to these characters.
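The deep-shapes system itself is proprietary, but the property Aitken describes, extra detail that appears only during the transition and leaves the start and end expressions untouched, can be sketched in a few lines of Python. Everything below (the linear blend, the sine-squared window, the names) is illustrative, not Weta’s implementation:

```python
import numpy as np

def deep_shape_blend(shape_a, shape_b, detail, t):
    """Blend two facial shapes with a transient 'deep shape' detail layer.

    shape_a, shape_b : (N, 3) vertex positions for the start and end
                       expressions of the transition.
    detail           : (N, 3) per-vertex offsets -- the extra 'octave'
                       of shape work (tissue inertia as muscles fire).
    t                : transition parameter in [0, 1].
    """
    base = (1.0 - t) * shape_a + t * shape_b  # ordinary blendshape interpolation
    window = np.sin(np.pi * t) ** 2           # 0 at t=0 and t=1, peaks mid-transition
    return base + window * detail             # detail never touches the endpoints
```

Because the window vanishes at both ends, the key poses stay under full artist control; the detail layer only shapes the in-betweens.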

Thanos is more physical than we’ve ever seen him: instead of fighting with the magical abilities of the Infinity Stones, he fights with a massive double-sided blade. Was capturing Josh Brolin’s movements even more challenging because he was more violently agile than in Infinity War?

Well, he’s four years younger than he was in Infinity War due to the time travel from 2014. We wanted him to appear more agile and more destructive. He’s not the philosopher of Infinity War anymore; his express goal is to destroy the universe.

We had to do a lot of keyframe animation on top of the motion capture of his fight sequence performances. Thanos is very tall, so to get his body mechanics, his mass, his scale, his agility, we ended up working off Josh Brolin and motion capture from a stunt performer doing his fight moves. We needed to dial Josh’s performance back in on top of the stunt performer’s motion capture. So it’s that keyframe animation on top of the motion capture that gives us the final product.
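A common way to layer keyframe animation on top of motion capture is an additive animation layer: the dense capture drives the base pose, and animators key sparse offsets over it. Here is a minimal sketch of that idea; the data layout is hypothetical, and this is not Weta’s Maya pipeline:

```python
import numpy as np

def layered_rotation(mocap, offset_keys, frame):
    """Evaluate a joint rotation: a sparse keyframed layer over dense mocap.

    mocap       : (F, 3) per-frame Euler angles from the stunt performer's capture.
    offset_keys : {frame: length-3 array} sparse offsets keyframed by animators
                  to dial the hero performance (and extra mass/scale) back in.
    frame       : the frame to evaluate.
    """
    key_frames = np.array(sorted(offset_keys))
    key_values = np.array([offset_keys[f] for f in key_frames])
    # Interpolate each rotation channel between the sparse animator keys.
    offset = np.array([np.interp(frame, key_frames, key_values[:, c]) for c in range(3)])
    return mocap[frame] + offset  # the hand-animated layer rides on the capture
```

In production this layering happens inside the animation package on the character’s animation puppet rather than in standalone code, but the math is the same.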

I like to think about the beginning of the battle where Thanos is squaring off against Captain America, Thor, and Iron Man. He’s dealt with Iron Man and Thor, and it’s just Cap left. And Thanos starts laying into him with his blade. He’s vicious and he’s fast and ends up chopping out great chunks of Cap’s shield. There’s a lot of keyframe animation happening there to get that sequence to work.

This is all part of our overall character animation pipeline built on Maya, a commercial 3D animation package, with a lot of extra functionality added through plugins and Thanos’ animation puppet, something that works really well for the animators.

What were the biggest challenges of portraying Tony’s death? Was it difficult to strike a balance, showing a fatal amount of damage without distracting from his last breaths? Or was something else more challenging about the scene?

Those are really key parts of the film, and the thing that proved most challenging was finding the right level. It’s the moment where he uses his suit to power the gauntlet so he can snap and rid the world of Thanos and all his army, but in doing so he subjects himself to the power of the stones. The only reason he isn’t killed instantly is that his suit is protecting him, but the suit itself suffers major damage.

We worked up a version of this to show to the Marvel VFX supervisor, and had done a lot of complex simulation work in a software package called Houdini to show the nanotech of the suit working to protect Tony, but initially there was too much going on. It was too flashy and showy, and the filmmakers felt it was distracting the audience from what was most important, which was Tony and the way he was sacrificing himself to save the universe. You don’t want to distract the audience from that.

So it took two or three passes to get that right, and every time we revisited it, it was a process of going back to our simulation work and tweaking composites to come up with a new look. That turned out to be the last shot we delivered on the show, the one where Tony does the snap.

When we see him after the snap, having suffered all these terrible wounds, that was a digital prosthetic applied to the actor’s face; there was basically no damage makeup on Tony at all. And that was important, because it meant finding a balance between looking serious enough but not too gruesome. It couldn’t be just a scratch, but it also couldn’t be a gory horror show that would distract us from his performance. We got clear notes from the filmmakers that they wanted Tony to retain his dignity in that scene.

We spent a lot of time working it up in concept art. We have an in-house concept art team, and we had three artists working on different variations of the damage for a couple of months. That gave us the opportunity to focus in on what it should look like, send versions to the VFX team and filmmakers, and continue to explore the nuances of that scene.

You don’t really know how it’ll read for sure until you build it up in CG. We had a custom hair groom where a chunk of Tony’s hair had been burned away from his temple; with things like that, you don’t know what they’ll look like until they’re in CG.

Can you talk about the ‘blip light’ effect? Where does it appear in the film, and how did you expand it?

‘Blip light’ was coined for characters that turn to dust, who are essentially ‘blipping out’ of existence. We had a whole army to blip out in Endgame, so we had to come up with a way to avoid using the ‘hero blip’ tech on that entire army.

We had approaches for hero blips like Thanos and hero foreground characters, but for mid-ground characters we came up with an optimized version of the hero setup, which meant we could run it at a faster turnaround.
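That three-tier split, hero, mid-ground, and background, amounts to dispatching each character to a technique based on prominence. The sketch below just makes the structure concrete; the threshold and labels are invented for illustration:

```python
def choose_blip_technique(is_hero, screen_coverage):
    """Pick a dissolve technique per character (illustrative tiers only)."""
    if is_hero:
        return "hero_fx_sim"          # full-detail volumetric FX simulation
    if screen_coverage > 0.02:        # threshold is purely illustrative
        return "optimized_fx_sim"     # faster-turnaround variant of the hero setup
    return "comp_based_nuke_blip"     # volumetric sim generated in the comp
```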

We had another solution for the army, which was more of a comp-based, volumetric solution that runs the simulation in real time within Nuke. All the background creatures and Thanos’ minions who ‘blipped out’ were done with Nuke.

For this comp-based background blip effect we used a plugin to Nuke called Eddy which is a volumetric simulation and rendering engine that runs right inside Nuke, allowing for fully CG dust sims to be generated on the fly as part of the comp. Timing for the blips was established by our animation team and then Eddy specialists in our comp department generated the blips themselves.
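As a rough picture of that handoff, the sketch below creates a simulation node at the animation-approved frame and merges the result over the plate. The nuke.createNode, setValue, and setInput calls are Nuke’s real Python API, but the Eddy node class and knob names are assumptions, not taken from Eddy’s documentation:

```python
import nuke

def build_blip(start_frame, duration=24):
    """Wire animation-derived blip timing into an in-comp dust sim (sketch)."""
    sim = nuke.createNode("E_DustSim")         # assumed Eddy node class name
    sim["start_frame"].setValue(start_frame)   # assumed knob: when the blip begins
    sim["end_frame"].setValue(start_frame + duration)
    merge = nuke.createNode("Merge2")          # standard Nuke merge node
    merge.setInput(1, sim)                     # comp the dust over the plate
    return merge
```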

What was different about the Scarlet Witch (Elizabeth Olsen) in Endgame and how did you make her more powerful?

She is at the peak of her power in Endgame. Every time we see her throughout the Avengers series she has more and more control over her powers. In Age of Ultron she’s just getting used to them, and by Endgame she has mastered her energy.

The scene where she confronts Thanos and starts to peel the armor off him is the turning point in the battle. He has no choice but to call for backup and signal an attack from above.

It was important that the effects we developed for her ‘space magic’ powers could be grounded in a physical reality, so that it worked in a live-action movie context.

A key to this was making the FX simulations very detailed. The simulations that we used as the building blocks for her powers were a combination of volumetrics for the ‘smoke’ and particles to add detail accents.

Wanda’s powers have a signature look established in previous movies, a characteristic way the details in the smoke are structured, that we needed to reflect in the more amped-up effect we were creating. High-res renders of these sims were passed to our compositing team, who used them as building blocks to achieve the final effect.
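That “building blocks” workflow, a volumetric smoke pass plus a particle accent pass assembled in comp, ultimately comes down to layered ‘over’ operations on premultiplied render passes. A minimal sketch, with the pass roles as described above:

```python
import numpy as np

def over(fg, bg):
    """'Over' composite one premultiplied RGBA pass onto another.

    fg, bg : (H, W, 4) premultiplied RGBA float images, e.g. the particle
             'detail accent' pass laid over the volumetric 'smoke' pass.
    """
    return fg + bg * (1.0 - fg[..., 3:4])  # A over B with premultiplied alpha
```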

In the last battle scene, many of the superhero forces arrive through the portals used by Dr. Strange and his people. How did you go about reworking Dr. Strange’s portals to use them on a more immense scale?

That was a fantastic sequence to work on. We had a huge responsibility, because that scene had the potential to be an emotional keystone moment in the movie; I didn’t want average or poor effects to ruin it for the audience. I’ve been able to see the payoff from audiences in the theater, and it’s huge.

Dr. Strange’s portals are being produced on a huge scale in the same moment, some large and some small. They had to be recognizable as his, and we wanted people to instantly know what they were.

And we needed to be able to optimize them so we could render out many portals at a time that also had CG environments within them. You can see these other worlds from the battlefield side of the portal; that was our digital painting department creating photorealistic versions of those shots.

Part of this optimisation involved the FX team creating a set of delivery assets that they could hand over to lighting, which could execute them by running Houdini in the background via Houdini Engine for further tweaks to our portals.
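The delivery-asset pattern, FX publishing a packaged setup that lighting can instantiate and re-run without opening an FX scene, can be sketched with Houdini’s Python module. The asset and parameter names below are invented; Weta’s actual Houdini Engine integration into its lighting pipeline is proprietary:

```python
import hou  # available inside Houdini or hython

# Install the FX-authored delivery asset and instantiate it.
hou.hda.installFile("portal_delivery.hda")          # hypothetical asset file
container = hou.node("/obj").createNode("geo", "portal1")
portal = container.createNode("portal_delivery")    # hypothetical operator name

# Lighting tweaks only the parameters FX chose to expose.
portal.parm("spark_count").set(50000)               # hypothetical parameter
```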

Improvements to the portals this time included doing multi-segment motion blur on the sparks, and augmenting how the portals are calculated to offer control over spark counts, temperatures, and cooling before they go up on the render wall.
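Multi-segment motion blur replaces the usual two motion samples per shutter interval with a polyline of sub-frame samples, so the blur on a fast, curving spark bends with its path instead of smearing in a straight line. A generic sketch of the sampling (not any particular renderer’s API):

```python
import numpy as np

def motion_segments(position_fn, frame, shutter=0.5, segments=4):
    """Sample a spark's path at several sub-frame times inside the shutter.

    position_fn : callable t -> length-3 world position (e.g. interpolated
                  from the simulation's particle data).
    Returns a (segments + 1, 3) polyline of motion keys for the renderer;
    segments=1 degenerates to standard straight-line motion blur.
    """
    times = np.linspace(frame - shutter / 2.0, frame + shutter / 2.0, segments + 1)
    return np.stack([np.asarray(position_fn(t)) for t in times])
```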



Your title is VFX Supervisor, and I see some of your responsibilities listed as project overview, on set, and technology. What are your responsibilities on set?

I’m working with the Marvel on-set VFX crew, who are responsible for getting us the shots that we’re going to be working with. I try to be on set for the sequences that will be turned over to us at Weta Digital. But it’s a collaborative process. If I think we can shoot things a little differently to make post work go more smoothly, without impacting the directors or the cameras at all, then I’ll talk with the Marvel on-set team about it.

I’m also there to soak it all up: not just the action, but also listening to the directors discussing where they want to go with the sequence, getting a strong sense of their intent and how the sequences need to work in the overall storytelling of the movie.

What are your tech responsibilities?

Those responsibilities sit more back at home at Weta Digital. I’m ultimately responsible for determining the technology we use to do the work. I’m working with a huge team at Weta Digital and relying on my CG supervisors and R&D team.

So I’m not actively developing software myself (I’m too busy these days), but it’s what I used to do back in the earlier days, and something I’m still interested in.
