Forum Replies Created

Page 1 of 11
  • So in the screen capture method you’re talking about… am I understanding correctly that the screen capture is then the painted version of the face over a black background?

    Of course, that’s assuming I’m using a face it can track, and that I can find some way to do the side bit. It’s really just the nose and the alpha edge being used from the side, though, so I could just paint the nose separately, and use the alpha edge for the comping.

    Just being able to paint the front face would get me halfway there.

  • I find the layout of this forum quite confusing. The double post here was entirely unnecessary. Why is there no edit button?

  • I have a weird situation (part of why I was trying to do this in AE to begin with).

    I have a 4 minute clip of a cubist face I’ve composited from side and front shots of the same talking head in front of a green screen. Similar to this:

    But it’s video, and he’s talking. Both shots are already motion stabilized, and I’ve already comped it together in AE. The cubist comping of the 2 shots requires warping each shot to get them to fit together correctly.

    The resulting cubist comp may or may not be recognized as a face by facial tracking software, but it would be preferable to track it as such, since the textures and paint I overlay would then help hide the warps where, for instance, the nose doesn’t quite move correctly with the cheek as they’re comped together.

    If necessary, I could track the front and side shots separately, paint them separately, then do the comping again from the painted versions of each, but then the warps would be more noticeable as they’d be baked into the comping.

    Bottom line: I can either track and paint the comped cubist version (preferred path), or (if necessary) track the front and side shots separately (using either the original footage or my pre-stabilized versions), paint them separately, then comp them together later. It really depends on whether the software will let me track the comped version.

    So… would I then take the rest frame into PS, and paint it, then use that as the texture?

    EDIT: Just thought of something. Maybe I’m looking at this all wrong, and there may be a third approach. The comping was a major pain because of all the skewing, where even a slight turn in one direction translates differently in the front vs. side shot. Since the head is being painted anyway, it might be better to create a single talking 3D head driven by the front shot’s tracking data (and possibly texture information from the side shot if it helps with the head wrap)… then just render out the head mesh with the texture from both front and side angles and do the comping from that. In theory, it should be much more stable.

    Speaking of stable: trying to think through… where exactly in this workflow (or the one you mention with the screen grab) would I lock down a particular tracking point if I want, for instance, to keep the left eye socket perfectly motionless, so that all motion translates from the center of the eye socket (not the pupil, since it may move)?
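    Whatever tracker ends up producing the data, the eye-socket lock described above amounts to a simple post-process on the exported track points. A minimal sketch, assuming a hypothetical export format of per-frame named 2D points (the function and names here are illustrative, not any tracker’s actual API):

```python
# Hypothetical sketch: given per-frame 2D tracking points, e.g. exported
# as {frame: {point_name: (x, y)}}, re-express every point relative to a
# chosen anchor (the left eye socket) so the anchor stays perfectly
# motionless and all other motion happens around it.

def stabilize_to_anchor(tracks, anchor_name, rest_frame=0):
    """Offset every frame so `anchor_name` never moves from its rest position."""
    rest_x, rest_y = tracks[rest_frame][anchor_name]
    stabilized = {}
    for frame, points in tracks.items():
        ax, ay = points[anchor_name]
        dx, dy = rest_x - ax, rest_y - ay  # how far the anchor drifted this frame
        stabilized[frame] = {
            name: (x + dx, y + dy) for name, (x, y) in points.items()
        }
    return stabilized

# Example: two frames where the whole face drifts 5 px to the right.
tracks = {
    0: {"eye_socket_L": (100.0, 200.0), "nose_tip": (150.0, 260.0)},
    1: {"eye_socket_L": (105.0, 200.0), "nose_tip": (155.0, 260.0)},
}
stab = stabilize_to_anchor(tracks, "eye_socket_L")
# The anchor is now identical on every frame; other points keep only
# their motion relative to the eye socket (here: none).
```

    The same subtraction works whether the points then drive puppet pins, corner pins, or a mesh, since it only shifts coordinates per frame.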

  • Unless I’m missing something, a mesh warper would seem to need manual keyframes each time the facial geometry shifts. It seems better suited to mapping flat things onto surfaces that are warped but stable, like a cylinder.

    I had somehow missed the second vid. That sure looks a lot closer, and is more or less what I was trying to do yesterday, but I got confused trying to do the multiple-corner-pin morph across planes.

    Looking into Nuke now.


  • Understood. Unfortunately, I need to get this out in 48 hours now, and it’s turning out to be a much bigger issue than anticipated.

    Looks like Nuke might do it, but I don’t know that I have enough time to iron out the new workflow issues; and looking at Spark and Lens Studio, I’m not sure I can do anything with what they produce.

    The simplest (good enough) solution, since I’m already familiar with AE, would seem to be using facial tracking data to move puppet pushpins (or similar): take a still frame of the rest pose, paint it, then use the moving tracked pins to warp the painted version. This would also let me work at a much larger resolution, 4x the footage’s actual 1280×720, since I’m replacing the actual face texture anyway and only using the luma values, which I was going to run a basic beauty blur on anyway.

    To that end, I feel like I’m missing something in all the AE face tracking tuts. Even on my straight-ahead face that never turns and is well lit in front of a greenscreen, its tracking data is garbage. None of the tuts seem to have this issue, and I see very few controls. Is there some way to manually set all the pins first, and then have it track them?

    Is there some other 2D facial tracker (one that doesn’t create a 3D face mesh) that works better and can export its tracking data into AE? Or some hybrid workflow involving Lens Studio, Spark, CrazyTalk, or similar?
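    The 4x-resolution idea above only requires that tracking data gathered at the footage’s native resolution be rescaled before it drives pins on the upscaled painted layer. A trivial sketch (the function name is illustrative):

```python
# Hypothetical sketch: if the paint/warp pass runs at 4x the footage's
# native 1280x720 (i.e. 5120x2880), pixel coordinates tracked at native
# resolution just need to be multiplied by the same factor before they
# drive pins on the upscaled painted still.

SCALE = 4  # 1280x720 -> 5120x2880

def scale_track(points, scale=SCALE):
    """Scale a list of (x, y) pin positions from native to working resolution."""
    return [(x * scale, y * scale) for x, y in points]

native_pins = [(320.0, 180.0), (640.0, 360.0)]
print(scale_track(native_pins))  # [(1280.0, 720.0), (2560.0, 1440.0)]
```

    The subpixel precision of the original track carries through unchanged, which is what makes painting at 4x viable even from 720p source tracking.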

  • Yeah. That would do it. It seems like an overkill workflow in my case, though.

    I’ve got a side and a front shot of the same talking head, but it’s all low-res 720p footage, and I’ll likely never need to do this again. The shots don’t even move much, so any fully automated tool would likely work fine.

    Any simplified versions or other approaches that would provide “good enough” results in my simplified case?

  • I read this just after stumbling across Spark AR.

    I got super excited about Spark until I realized they had probably crippled the export capabilities, or at least limited the output resolutions, etc., as those wouldn’t be necessary features for its intended use.

    I was just about to download it. Maybe I shouldn’t waste the time, though. Just how limited (or non-existent) is the export? Can any of the data or footage be exported in any way? I see a few details in the Getting Started docs, but I’m not sure whether they refer to the final output of the filters as deployed on FB, or to the downloaded development tool.

  • Basically, is there anything that does this:

    https://www.banuba.com/facearsdk

    as an AE plugin, or as a standalone, that would let me paint on a face so the paint displaces to match the facial geometry and tracks with the various facial deformations?

  • Tracking the lowest point of an alpha edge of a piece of footage and making sure it lines up at a particular angle with an existing curve on a mask. It’s probably too hard to explain properly, and the footage isn’t mine to share screen grabs of.

    I’m halfway done now, but it’s obviously a major pain. I’m doing all the general stuff, hitting the extreme changes first, then checking halfway between, but it still ends up being nearly per-frame work to get the accuracy I’m looking for.

    Definitely digging into all possible details of all types of trackers to minimize the need to do anything like this in the future.
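    For what it’s worth, the “lowest point of an alpha edge” part of this is automatable if the frames can be read as image arrays. A minimal sketch, assuming NumPy and rendered alpha mattes (the function name and threshold are assumptions, not any tracker’s feature):

```python
# Hypothetical sketch (assumes numpy): find the bottom-most opaque pixel
# of a frame's alpha channel, so the per-frame manual lining-up described
# above could be replaced by a computed value per frame.
import numpy as np

def lowest_alpha_point(alpha, threshold=0):
    """Return (x, y) of the bottom-most pixel whose alpha exceeds threshold,
    or None if the frame is fully transparent. y grows downward."""
    ys, xs = np.nonzero(alpha > threshold)
    if ys.size == 0:
        return None
    y = ys.max()                  # bottom-most row with any coverage
    x = int(xs[ys == y].mean())   # center of that row's opaque run
    return (x, int(y))

# Tiny example: a 4x4 alpha matte whose opaque pixels reach row 2.
alpha = np.zeros((4, 4), dtype=np.uint8)
alpha[1, 1:3] = 255
alpha[2, 2] = 255
print(lowest_alpha_point(alpha))  # (2, 2)
```

    Running this over every exported frame would give a point track that could then be compared against the mask curve, instead of eyeballing it nearly per frame.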

