Forum Replies Created
-
Unfortunately there are not a lot of resources for info on this kind of gig… It’s a special little niche.
You are in a great place now, and I definitely advise performing a search in this forum for screen blending.
You might also search at the livedesignonline.com website for articles about or describing screen blending. I know I’ve written one or two over there, so you might just search for Bob Bonniol over there, and see what turns up.
You really nailed most of the basic rules in your description (avoiding black and white solids for instance).
In a really macro design sense, remember this: your composition is serving a scenic purpose. Seeing a screen like that is NOT like watching something on TV; it’s like being in the room with it. It’s got to support everything else that’s going on (is it part of a trade show booth, or a stage set, for instance?). It’s the loudest voice in the room visually, so you have to exercise discipline in how loud you get and what you say, so to speak. SLOW always looks freakin’ fabulous when you are LARGE; slow moves can have lots of grandeur. Fast can inspire anxiety, excitement, or nausea, so use it cautiously. Whereas in broadcast or even corporate communications video I would avoid wipes in transitions (like the plague), I find them big, pleasing, and architectural when you use them on a screen that big.
Using After Effects’ 3D capabilities can have a huge payoff on a screen like this: it suggests depth and space that will feel very real. Put layers at varying z-depths, use light layers to light other layers as if they were actual objects on the stage, and run small, slooooooow camera moves across scenic content layers. All of these can be a big win.
This is all big broad stroke advice… Forgive me if any of it seems rudimentary.
Bob
MODE Studios
http://www.modestudios.com
Contributing Editor, Entertainment Design Magazine
Art of the Edit Forum Leader
Live & Stage Event Forum Leader
HD Forum Leader
-
Ryan,
ProPresenter is not bad software, but I find that when you dig into it, it’s not capable of presenting native pixel res (it scales a smaller output stream). If a Vista Montage has been specified, it’s evident that there are some high-end matrixing, scaling, blending, and windowing requirements that ProPresenter is going to struggle with (at least in terms of tactile interface).
But as I said, it’s a great product to get a LOT done on a smaller budget.
But after all the discussion of preserving native pixel res at the creation stage of this thread, it would be a shame to throw it away on the playback/scaling side.
IMHO…
Bob
-
OK,
So your original full native pixel res (3500×1050) is a rectangle roughly 10 wide by 3 high (10:3 as a divisible aspect ratio). HD is native at 1920×1080, which reduces to 16:9. That 10:3 rectangle is WAY wider. So, horizontally speaking, when you squish it into an anamorphic frame you are interpolating pixels down into fewer pixels along the width axis. The Montage is then “making up” information when it interpolates those pixels back out to the natural width.
It makes vector-based elements look jaggy, can potentially make color go weird, and can make anything with lots of horizontal motion go weird too.
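The arithmetic behind that loss can be sketched in a few lines of Python (a toy illustration of the squeeze, not how the Montage actually interpolates; the function name is mine):

```python
# Toy sketch of the anamorphic squeeze described above: 3500 horizontal
# pixels squeezed into a 1920-wide frame, then stretched back out.
# The stretched-back pixels are interpolated guesses, not original data.

def anamorphic_loss(src_w, dst_w):
    """Fraction of horizontal detail discarded by the squeeze."""
    kept = min(src_w, dst_w)
    return 1.0 - kept / src_w

loss = anamorphic_loss(3500, 1920)
print(f"{loss:.0%} of the horizontal detail must be re-invented")  # 45%
```

Nearly half the width-axis information gets thrown away and then guessed at on the way back out, which is exactly where the jaggies and motion weirdness come from.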
It would be better to play back off 3 HD machines, splitting the comp into 3 parts that fit neatly (and WITHOUT anamorphic squeezing) into HD’s 1920×1080, sort of as described in the previous post.
The montage then stitches these back together, and no pixels are lost.
That post also brought up an interesting question: to blend or not? Will the 3 projectors be used with their rasters butted up edge to edge, or will projector blending be used? When blending, it is good to plan on 15% to 20% of each raster on either side being used to blend into its neighbor.
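If blending is in play, the raster budget works out roughly like this (a back-of-envelope sketch assuming three 1400-wide projectors and a 15% per-edge overlap; the helper name is hypothetical):

```python
# Sketch of the raster budget when edge-blending a row of projectors.
# Each seam between neighbors consumes one overlap's worth of width.

def blended_width(raster_w, num_proj, overlap_frac=0.15):
    """Total unique pixels across a row of edge-blended projectors."""
    overlap_px = int(raster_w * overlap_frac)
    return raster_w * num_proj - overlap_px * (num_proj - 1)

print(blended_width(1400, 3))        # 3 rasters minus 2 seams of 210 px -> 3780
print(blended_width(1400, 3, 0.20))  # with 20% overlap -> 3640
```

That is why a blended comp ends up narrower than the naive 3 × 1400 = 4200: the seams eat pixels, and the comp needs to be built to the blended figure, not the butted one.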
I’ve pounded stuff bigger than your 3500×1050 through AE, both on my quad G5 and my MacBook Pro, and it’s really not necessarily a problem. Unless the gig is, like, next week; in which case, if you are just now discovering what your blended raster is going to be, you are in for a tough two weeks.
I hope not !
Bob
-
Understanding that the Montage is doing the scaling, what is your playback device ?
If it’s an HD-based device (say a Doremi V1HD), then it is understandable why you might go with anamorphic HD output from AE. If it is a resolution-independent media server (Hippotizer, Pandoras Box), then I would recommend a reduced resolution that is consistent with the real aspect ratio, but small enough to keep the pipeline manageable. The Montage is already going to be scaling, but you risk visible artifacting if you make it perform the re-shaping as well.
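Picking such a reduced resolution is just a matter of holding the true aspect ratio while shrinking the pixel count. A minimal sketch (my own helper; the 2800 max width is purely illustrative):

```python
# Hypothetical helper: choose a smaller comp size that preserves the
# real 10:3 aspect ratio for a resolution-independent media server.

def reduced_res(native_w, native_h, max_w):
    """Scale down uniformly so width <= max_w, preserving aspect."""
    scale = min(1.0, max_w / native_w)
    return int(native_w * scale), int(native_h * scale)

print(reduced_res(3500, 1050, 2800))  # (2800, 840) -- still exactly 10:3
```

The server then scales that frame uniformly back up, which looks far better than having the Montage both scale and un-squeeze an anamorphic frame.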
Also, just because the projectors are capable of 1400 x 1050 doesn’t mean the staging company intends to run them at native res. Possible or even probable, but not certain. Make sure to check first.
Also, 3500×1050 isn’t really crippling if you don’t want to waste any pixels. I’ve gone all the way up to 5K+ on blended rasters.
Good luck.
Bob
-
In general, when you are going unswitched to record (referred to as “iso”), you do it with one device per camera. There are several applications for turning a Windows box into an out-and-out high-level DDR; these allow capture up to uncompressed. But that raises the question: why are you skipping tape? That’s the oldest form of “iso,” and it works well. Alternatively, you could iso directly to high-end decks.
Then, yes, you probably could capture live with an NLE, but that depends a great deal on how robust your system config is.
But let me stress this: I mentioned one device per camera. The exception would be if you plan on encoding and storing the live video in a codec suited to internet or mobile delivery, in which case you’d need an app that reads and encodes all inputs simultaneously.
Can’t help you there…
In my experience, it’s a computer per camera.
Bob
-
Ummmm. There are SO many variables in what this could be that it’s impossible to answer your question.
I would STRONGLY suggest that you hire a professional AV company to do this gig, watch what they do carefully, keep and catalog all documentation of the gig, then begin to think about how you might do this yourself in the future.
Beyond that, in an ethical sense, there are people who make their entire living doing the schematic and logistical design for what you are talking about. It’s NOT simple. It’s NOT something you can learn quickly and apply successfully. There are so many details: equipment, personnel, planning, and execution techniques… People go through college or gather extensive industry experience to get to a position where they do this.
I’m not trying to be demoralizing. It’s not impossible that you can learn and apply this stuff in time. But with your gig SO close, you really ought to get some professional help so it doesn’t end in disaster.
Good Luck,
Bob Bonniol
-
Trapcode’s Echospace makes use of many of these attributes via the AE 3D compositing engine, and allows good control over the positioning and geometry of the many layers required to create that same look.
Instead of taking one layer and splitting it into cards, Echospace deals with independent layers, so the precomping required to get the unified look will be the tough part.
Good Luck,
Bob Bonniol
-
This definitely sounds like something you want to do with a 3D program to handle the earth-cracking element, and a compositing app (AE, of course!) to handle the volumetric light and transitional elements.
Bob Bonniol
-
This is SO dependent on your playback system. The res of your master comp is dictated by two factors: the output resolutions of your display devices (projectors at, say, 1024×768? HD plasmas at 1920×1080? Some other variant?), and the playback device itself.
Multi-screen presentations need to be played back by devices that can maintain frame-accurate sync. Some of these devices (say the Doremi V1HD) output standard broadcast resolutions and are controlled via show control systems like Medialon Manager to make sure they run in perfect frame sync. In a case like that, you create a master comp as outlined in Dave’s previous post: the comp is twice the width of your output files, you do all the motion between screens in that comp, and then you render two output files, each formatted for the output device’s res.
In other cases, the output device may be one of the advanced multi-screen playback servers, such as Pandoras Box, Hippotizer, Wings Platinum, or Watchout. There you actually render out the full-size comp, and the playback device does the heavy lifting of keeping things in frame sync and splitting the master comp into the appropriate pieces.
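The two-output workflow reduces to simple arithmetic; here is a minimal sketch (the function and names are mine, not any real playback API):

```python
# Sketch of the double-wide master comp workflow: build one comp at
# twice the output width, animate across it, then crop out one region
# per screen at render time.

def split_master(out_w, out_h, screens=2):
    """Return the master comp size and the crop region for each screen."""
    master = (out_w * screens, out_h)
    crops = [(i * out_w, 0, out_w, out_h) for i in range(screens)]  # x, y, w, h
    return master, crops

master, crops = split_master(1920, 1080)
print(master)  # (3840, 1080)
print(crops)   # [(0, 0, 1920, 1080), (1920, 0, 1920, 1080)]
```

Whether you do the cropping yourself at render time or hand the full-width file to a server like Watchout, the geometry is the same: contiguous regions of one master canvas.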
As you can see, success here is only assured by knowing your display systems. I can’t tell you how many times a client has handed me the WRONG sort of media for this (my studio specializes in big multi-screen live event work), something they have had commissioned, and then I have to essentially recreate or re-render it on the spot to make it playable. FIND OUT who is providing the playback/display system and talk to them ASAP.
Good Luck,
Bob Bonniol
-
Maku,
You have to admit you answered rather tartly… Remember, these forums are here to dispense help and knowledge.
On to your methodology. Frame-by-frame rotoscoping is prone to problematic mask jitter, even with advanced variable matte-softening techniques, none of which are built into AE.
Actually, there ARE shortcuts, and they benefit the art of rotoscoping in general: they let the user define critical positional points in the footage and have the intervening frames mathematically calculated and tweened by the plug-in or app. As already described in this thread, Roto from Silhouette FX and Motor/Mocha are both great solutions for this.
Realistically, in kinetic shots you still sometimes have to go back and revisit every 7th or 8th frame, but the approach really reduces jitter and achieves smooth, solid mattes.
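Under the hood, that keyframe-and-tween approach boils down to interpolating mask vertices between artist-set keyframes. A toy illustration (linear tween only; real tools like Mocha fit tracks and splines, and all the names here are hypothetical):

```python
# Toy keyframed roto: the artist sets a mask vertex position on two
# keyframes and the tool calculates every frame in between.

def tween_vertex(key_a, key_b, frame_a, frame_b, frame):
    """Linearly interpolate one (x, y) mask vertex between two keyframes."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return (key_a[0] + (key_b[0] - key_a[0]) * t,
            key_a[1] + (key_b[1] - key_a[1]) * t)

# Keyframes on frames 0 and 8, as in the "every 7th or 8th frame" workflow:
print(tween_vertex((100, 200), (140, 200), 0, 8, 4))  # (120.0, 200.0)
```

Because the in-between positions come from a continuous function rather than a human re-drawing every frame, the matte edge can’t jitter between keys, which is exactly the advantage over pure frame-by-frame work.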
I’ve used Motor (back when it was an alpha) and its big brother Mokey to rotoscope vast amounts of old Sinatra footage for the Live in London show mounted two years ago. Motor literally allowed that production to happen on time and on budget. We also used Silhouette on that show, with great results.
So know that there are better ways… Finding true, zen-infused roto artistes these days is getting tough.
Bob Bonniol