Forum Replies Created

Page 2 of 11
  • Greg Sage

    June 9, 2019 at 4:20 pm in reply to: Backlit greenscreen

    Definitely interested to hear from anyone who has done a backlit system, or maybe due to their use of some of the materials in another context has insight into elements like:

    1) White or green LEDs: Type, rating, brightness, etc.

    2) Diffusion material: Fire resistance, proximity to lights

    3) Framing: Portable or fixed setups, collapsible frame for transport, roll-up LEDs, etc. Just a thought here, but if you’re doing a whole wall (or nearly) in a house or other place you have to live with the setup, it might just be better aesthetics to frame the entire wall, floor to ceiling and end to end… even if you only put the LEDs in one section. That way it’s fully stealth: you just have a cloth wall 2″ to 6″ in from the actual wall, with no visible frame. You could even paint a design or something on the portion not being lit.

    4) Screen material: White, gray, green, fabric type, resistance to shadows, reflections, etc.

    5) Possibility of serving dual purpose as a sound absorption panel (use of mineral wool, fiberglass, or other sound-deadening material for diffusion). Just a thought too, but they could coexist. If it is, for instance, not feasible to use mineral wool in front of the LED strips, then with the wool behind them, the strips on a mesh of some sort, and 2″ of diffusion in front, the entire thing could still be both a greenscreen and a sound absorption panel… and still be only 6″ deep with a (potentially) white surface.

  • Greg Sage

    June 8, 2019 at 11:22 pm in reply to: Backlit greenscreen

    Never used such a system, but it certainly looks like they’re just repackaging 3M ScotchLite fabric at an insane markup… and there are even Chinese knockoffs of that:

    https://www.ebay.com/i/222893880826?chn=ps

    Here’s a whole breakdown on how to do it for dirt cheap:

    https://www.youtube.com/watch?v=rJKLEZnsVsQ

    I think there’s a followup on his channel with more details too.


  • Greg Sage

    June 8, 2019 at 8:15 pm in reply to: Backlit greenscreen

    Yeah, the “light went on” when the idea hit. I’m convinced now it’s a better way to go (for walls, anyway).

    Not sure whether it’s best to use white/gray material or green, but I ran a couple of tests today, since I’m in a music studio surrounded by huge mineral wool panels draped in white cloth. It would’ve been great if I could blast enough light through them to be usable: the greenscreen would just be a solid white wall while not in use, take up literally 4″ of total space (including lights), and double as solid acoustic treatment (the other bane of vloggers’ existence).

    The 4″ mineral wool is too dense to be used for diffusion, but with enough green LEDs, I might be able to use 2″ panels to get the dual greenscreen / acoustic panel thing going.

    Anyone else given this a shot? With proper diffusion, I’m just not seeing any downside. I’m assuming, of course, that there’s some trial and error with materials and light to get the proper dull finish and chroma green color.

    Just came across the retro-reflective / green lens ring approach today while searching this too. Very interesting. Not ideal for larger shoots, but probably the best approach for someone in a tiny room where they can (and should) get right back against the screen while shooting.

  • After spending several days straight re-tracking the same footage dozens of times, it occurs to me that absolutely locking a specific pixel down might require a different workflow. After all, planar tracking attempts to stabilize a general area, not a point. As it stretches and rotates, it necessarily distorts. I assume that objects further from the anchor distort more.

    What if…

    I instead zoomed way in and tracked that individual item for translation only. For instance, to lock an eye in place, what if I just focused on getting the eye perfectly motionless, without having to worry about what rotation or scaling might do to it?

    Would there then be a way to KEEP that centermost pixel (the center of the iris in this case) LOCKED in place while the REST of the plane gets rescanned for rotation and scale, and stretched as necessary to stabilize the plane?

    In other words, can I stabilize just the position of a tiny object first, then maybe reorient the footage so it’s at the exact center of the frame, or the exact center of the tracking area, or whatever it is that Mocha uses as the anchor for scaling and rotation? I’m assuming that’s the center of the image, since PSR data pasted from Mocha into AE would then be processed according to the anchor point, which AE defaults to the center of the image.

    Is there a way to do this, or to achieve a similar end, by prioritizing a single pixel as the thing that should be absolutely locked in place and never moved while the rest of the plane is stabilized?

    Is this job better split among other tracking tools? Point tracking for position, then centering, then a retrack for rotation and scale only in Mocha? Some other combo?
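    For what it’s worth, the geometry behind the question is simple: if rotation and scale are applied about the tracked point itself rather than the image center, that point is mathematically guaranteed to stay put. A minimal sketch in plain JavaScript (illustrative names and numbers only, not Mocha’s or AE’s actual API):

```javascript
// Hypothetical sketch: rotate and scale a frame about a tracked anchor
// point so the anchor pixel itself never moves. The trick is to
// translate the anchor to the origin, apply rotation + scale, and
// translate back.
function transformAboutAnchor(point, anchor, angleRad, scale) {
  const dx = point[0] - anchor[0];
  const dy = point[1] - anchor[1];
  const cos = Math.cos(angleRad);
  const sin = Math.sin(angleRad);
  return [
    anchor[0] + scale * (dx * cos - dy * sin),
    anchor[1] + scale * (dx * sin + dy * cos),
  ];
}

// The anchor maps exactly to itself, whatever the rotation/scale:
const anchor = [320, 240]; // e.g. the tracked iris center
console.log(transformAboutAnchor(anchor, anchor, 0.3, 1.2)); // [320, 240]

// Other pixels rotate/scale around it:
console.log(transformAboutAnchor([330, 240], anchor, Math.PI / 2, 1));
```

    This is effectively what moving the AE anchor point to the tracked pixel before applying rotation/scale keyframes would do.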

  • Definitely hadn’t thought of that approach. Yeah… might just be simpler, as they’re always repeating the one previous frame. I’d have to create markers anyway (or a list entered into the script somehow) to do it any other way, so a keyframe pair per skip frame is certainly no more work.
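    The “repeat the one previous frame” idea reduces to a very small operation. A toy sketch (frame contents are placeholder strings here; in AE this would be a pair of hold keyframes or a time-remap expression per skip frame):

```javascript
// Hypothetical sketch of "hold the previous frame": given the indices of
// bad frames (never two in a row), each bad frame is replaced by a copy
// of the frame before it.
function holdPreviousFrame(frames, badIndices) {
  const out = frames.slice();
  for (const i of badIndices) {
    if (i > 0) out[i] = out[i - 1]; // repeat the one previous frame
  }
  return out;
}

console.log(holdPreviousFrame(["f0", "f1", "BAD", "f3", "BAD", "f5"], [2, 4]));
// ["f0", "f1", "f1", "f3", "f3", "f5"]
```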

  • Got it. Thx. I’ve watched literally dozens of tuts over the past day, and agree that I definitely want to output it as an AE mask so I can use all the blending modes, etc. I’ve already done a mockup on a still with extensive layering that gets the exact look I’m after.

    Issues now boil down to how to properly track texture and shape across multiple face planes, as described in my last post.

  • Mocha Pro in AE 2018.

    I’m applying a face paint shape that covers an area spanning from most of the forehead to part of the side across one eye and onto the cheek.

    Fortunately, the actor isn’t moving much, but he’s talking fast and angrily, so there’s lots of face scrunching.

    Basically, from a planar perspective, it’s organically sprawled across multiple planes.

    So… should I somehow be tracking each area (forehead, top of cheek, temple side, etc.) separately and comping the shape together? Otherwise, the constant facial movement would seem to mean I’d need to constantly be (manually?) keyframing the warp.

    Ultimately, I don’t want to apply it as an insert, but rather track the deformations of the shape and import it as an AE mask.

    Similarly confused about the texture being applied as it moves differently on the cheek vs forehead, for instance, yet is one consistent texture. Is it best to apply a texture separately on multiple planes and feather or similar to create the illusion that they are contiguous?

  • Hmmm.. Maybe.

    It’s a very fast lip sync, so timing is everything. I should’ve clarified, too: it’s never more than one consecutive frame, so at most I’d be interpolating across a single missing frame at a time. There’s just a number of bad frames randomly scattered throughout the clip, mainly bad light flashes or sudden jumps that correct in the next frame, so even just holding the previous frame would be a huge improvement.

    Off the top of my head, that might work better, or it might end up interpolating a bunch of frames when it’s not really necessary. Can’t say I have an opinion beyond that first impression, as I’ve never tried to smooth over a missing frame before. I’d have to think through the stretch thing to work out how many previous frames it should be stretching, etc… and, of course, how to script it.
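    Since the bad frames are never consecutive, the interpolation case is also simple in principle: each bad frame can be replaced by a 50/50 blend of its two neighbors. A toy sketch (pixel values are plain numbers here; real footage would blend per channel per pixel, or use optical-flow retiming instead of a linear mix):

```javascript
// Hypothetical sketch of interpolating across a single missing frame.
// Each "frame" is an array of pixel values; a bad frame is rebuilt as
// the average of the frame before and the frame after it.
function interpolateSingleFrame(prevPixel, nextPixel) {
  return (prevPixel + nextPixel) / 2;
}

function repairFrames(frames, badIndices) {
  const out = frames.slice();
  for (const i of badIndices) {
    if (i > 0 && i < out.length - 1) {
      out[i] = out[i - 1].map((p, k) => interpolateSingleFrame(p, out[i + 1][k]));
    }
  }
  return out;
}

console.log(repairFrames([[10, 20], [255, 0], [30, 40]], [1]));
// [[10, 20], [20, 30], [30, 40]]
```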

  • Hadn’t thought of using markers. Not sure how that’s referenced exactly within the IF statement, but it sounds like something that could be looked up.

    thx
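    For reference, AE expressions expose layer markers through the `marker` property, and `marker.nearestKey(time)` returns the marker closest to the current time. The IF-statement logic boils down to a proximity test, sketched here on a plain array of marker times (runnable anywhere; the AE-specific calls are shown only in the comment):

```javascript
// Hypothetical sketch of the marker check an AE expression would do.
// In a real expression the equivalent is roughly:
//   if (marker.numKeys > 0 &&
//       Math.abs(marker.nearestKey(time).time - time) < thresh) { ... }
// Here the same test runs on a plain array of marker times.
function isNearMarker(markerTimes, time, thresh) {
  return markerTimes.some((t) => Math.abs(t - time) < thresh);
}

console.log(isNearMarker([1.0, 2.5, 4.0], 2.49, 0.05)); // true
console.log(isNearMarker([1.0, 2.5, 4.0], 3.0, 0.05)); // false
```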

  • Hmmm. OK. Sounds about like what I’ve gathered from some tuts, but none of them have dealt with the texture part, so still not sure how to get the texture to warp to match the mask.

    Also, not quite clear on the best sequence for displacing a shape to the face and then tracking it. For instance, if someone has a star painted on their face, it will first be displaced to match their skull shape, and then tracked.

    So, if I’m doing that by drawing a star-shaped mask in Mocha, I need a skull-warped star first in order to draw the mask. Should I just use the luminance values as a displacement map to get the warped star shape, then trace it to create the mask in Mocha, then let the star-shaped mask track?
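    The luminance-displacement step can be pictured like a displacement-map effect: each point on the star outline is shifted in proportion to the brightness of the footage under it, with mid-gray meaning no shift. A toy sketch (made-up names, single scalar offset per point; a real displacement map would offset horizontally and vertically from separate channels):

```javascript
// Toy version of luminance-driven displacement (illustrative only).
// lum is 0..1; mid-gray (0.5) leaves the point where it is, white pushes
// it +maxDisplace pixels, black pushes it -maxDisplace.
function displaceByLuminance(point, lum, maxDisplace) {
  const offset = (lum - 0.5) * 2 * maxDisplace;
  return [point[0] + offset, point[1] + offset];
}

console.log(displaceByLuminance([100, 100], 0.5, 20)); // [100, 100]
console.log(displaceByLuminance([100, 100], 1.0, 20)); // [120, 120]
```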

