Thomas Frenkel
Forum Replies Created
-
Hey Koby
[Koby Goldberg] “But there’s more than that.”
…not that much, I guess. There is a generic background with particles and streaks that is separated from the matte particles. My guess would be that this is just a 2D stock video that you could either buy or recreate – again with Particle World/Particular/Fractal Noise etc.
Also, the soft lens flares on the left and right borders seem not to be part of that video, but positioned on a separate layer above everything.
Most likely some sort of glow effect was animated on an adjustment layer for all layers below.
-
Thomas Frenkel
October 5, 2016 at 3:34 pm in reply to: Missing effect in the main composition (noob question)
A screenshot would help a lot. I guess you have some sort of transformation in the main composition, which is why you have to keep the main comp vectorized (with the switch that you mentioned). Usually you have to move this transformation into the pre-comp and do it there, or keep everything in a single comp. But it depends a lot on what you're trying to achieve. Maybe you can even take your effect to the main comp and add it there.
(On which monitor did you take those AE-Screenshots? Or can you scale the UI of AE? I always wondered…)
-
Thomas Frenkel
September 15, 2016 at 7:19 am in reply to: Remove semitransparent foreground elements – Thoughts on how to approach this [repost from the Adobe Forums]
I guess it does work with other blending modes, as long as the user can identify them correctly. Again, you as the user have to guess which blend mode was used and feed that information into the process. In my example I solved the formula of the normal blending mode for “B” (the color value of the bottom layer). You can do the same with other blending modes to get the right formula for each case.
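To illustrate solving a different mode for the bottom layer, here is a minimal sketch in Python (colors normalized to 0..1; the function names are mine, not from any plugin) that inverts the classic screen blend:

```python
def screen(a, b):
    """Screen blend: D = 1 - (1 - A) * (1 - B)."""
    return 1.0 - (1.0 - a) * (1.0 - b)

def unscreen(d, a):
    """Screen formula solved for B: B = 1 - (1 - D) / (1 - A).
    Undefined when A == 1.0 (a pure white top layer)."""
    return 1.0 - (1.0 - d) / (1.0 - a)

# Round trip: composite, then recover the bottom layer.
top, bottom = 0.3, 0.6
composited = screen(top, bottom)   # 1 - 0.7 * 0.4 = 0.72
recovered = unscreen(composited, top)
```

In float precision the round trip is exact; the quantization issues only appear once values are stored in 8bpc.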
Now here are the restrictions that probably occur with this:
– With many modes like “multiply” or “add”, color values are clipped to black or white (at least in 8bpc). If that’s the case, they can’t be brought back.
– Blend modes that consist of if-else statements can’t be reversed.
-
Thomas Frenkel
September 14, 2016 at 1:55 pm in reply to: Remove semitransparent foreground elements – Thoughts on how to approach this [repost from the Adobe Forums]
Kalle’s approach doesn’t work if the watermark or foreground object consists of more than one color. But it’s a great start that I didn’t think of, nonetheless. I wanna dive more into the blending mode stuff ;). Maybe I can develop something with Processing, since introducing new blend modes to After Effects/Photoshop seems impossible.
Walter, what you describe is the quantization that I already mentioned at the Adobe forums (my 4th post). The formula gets increasingly imprecise the more opaque the watermark is, and of course it’s impossible when the watermark is at 100%. I guess this is a non-linear thing. In my test with 30% opacity, the value of the green channel was off by just 1/255 and the other colors were reproduced perfectly.
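That rounding behavior can be checked with a small brute-force sketch (Python, helper names are mine; 8bpc channel values, watermark at 30% opacity over an opaque background, as in the test above):

```python
def composite_8bpc(top, bottom, opacity):
    """Normal blend over an opaque bottom layer, quantized to 8 bits."""
    return round(top * opacity + bottom * (1.0 - opacity))

def recover_8bpc(result, top, opacity):
    """Invert the normal blend to estimate the original bottom value."""
    return round((result - top * opacity) / (1.0 - opacity))

opacity = 0.3
top = 200  # watermark color in one channel
# Worst-case recovery error over every possible background value:
worst = max(abs(recover_8bpc(composite_8bpc(top, b, opacity), top, opacity) - b)
            for b in range(256))
```

At 30% opacity the composited value is off by at most 0.5/255 from the exact result, which after division by 0.7 can round the recovered value to at most 1/255 away from the original – matching the observation above. As the opacity approaches 100%, the divisor shrinks and the same half-step of quantization error blows up.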
-
Thomas Frenkel
September 13, 2016 at 5:58 pm in reply to: Remove semitransparent foreground elements – Thoughts on how to approach this [repost from the Adobe Forums]
“While perhaps plausible in theory, it looks like an exercise in futility to me.
Here’s a viewpoint in a similar vein: back about 20-25 years ago, video in NTSC-land was commonly 640×480 with 8-bit color, not counting the alpha channel. It’s numerically possible to develop an array of every permutation of the color of every pixel.”
Something similar was stated at the Adobe forums. And I want to believe you. But I just don’t understand it technically.
“Here’s an other one: drop some ink into a bowl of swirling water. Since the laws of physics and hydraulics are well-known, it is conceivable to reconstitute the bowl of water and the ink to their original states prior to dropping the ink.”
This sounds like the process I have in mind would be either incredibly slow (performance-wise) or would demand a lot of fine-tuning by the user. But I can’t see it.
Performance:
As I clarified in my 4th post at the Adobe forums, I don’t want to automate this process with algorithms or something like that. It’s really just a simple blend mode calculation that would do the job. This is the normal blending mode calculation:
If you assume ‘A’ is the top layer, ‘B’ is the lower layer and their alphas are ‘a’ and ‘b’ respectively, then the resulting color ‘D’ is:
D = A*a + B*b*(1-a)
And this is the calculation I have in mind:
D = (B - A*a) / (b*(1-a))
Yes, it does have to be calculated for every pixel, but so does the normal blend mode, doesn’t it? I’m not much into coding, but this shouldn’t make a big difference in render times.
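As a sanity check, here is a minimal float-precision sketch (Python; the variable names follow the formulas above, the function names are mine) showing that the proposed calculation exactly undoes the normal blend:

```python
def normal_blend(a_col, a_alpha, b_col, b_alpha):
    """Normal blending: D = A*a + B*b*(1-a)."""
    return a_col * a_alpha + b_col * b_alpha * (1.0 - a_alpha)

def unblend(d_col, a_col, a_alpha, b_alpha):
    """Proposed reverse mode: B = (D - A*a) / (b*(1-a)).
    Breaks down when a == 1.0 (a fully opaque top layer)."""
    return (d_col - a_col * a_alpha) / (b_alpha * (1.0 - a_alpha))

# Watermark A at 30% opacity over an opaque background B.
A, a, B, b = 0.8, 0.3, 0.5, 1.0
D = normal_blend(A, a, B, b)       # composite with watermark
B_recovered = unblend(D, A, a, b)  # clean background again
```

Per pixel this is one multiply-subtract and one divide, so it is in the same cost class as a regular blend mode; the real-world limits are the 8bpc quantization and clipping discussed in the other replies.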
User Input:
When you use this new blend calculation, your bottom layer would be the footage with the watermark and the top layer would be your recreation of the watermark with this mode active. Now, recreating the object, watermark etc. can take some time depending on its complexity. But you can live-preview it while masking, drawing or changing the opacity and color… roughly or precisely. To me that’d be pretty handy.
-
Thanks for the replies, Kalle.
I found a workaround for my problem with Plexus and Linear Wipe to trim the paths. It’s not very elegant, but it’s doing the job.
In Plexus, the contours are not polygon-like shapes, but similar to the Trapcode 3D Stroke plugin – and that’s what I needed.
-
“Connect Layers” will create a rectangle that is 1px high, which makes it appear as a single line, but technically it’s a “primitive” rectangle shape. But this is not very relevant here, since I also encounter the same problem with simple hand-drawn paths.
Sorry, I failed at attaching the image before. Here it is:
And here is the expression the script creates on the shape layer, dynamically connecting 2 Nulls in 3D space:
-
The script “Connect Layers” creates an expression on a rectangle shape that dynamically connects two 3D points. However, I can’t copy, paste or link this rectangle to a solid as a mask, since it is a parametric shape defined by width and height.
I used Rowbyte Plexus with decent results, but I can’t animate/trim the path that way, which is necessary.
Let me clarify the perspective issue:
The stroke width should be affected by the distance to the camera, but not by the rotation towards it. The strokes look like flat 2D images in 3D space, which is not the look I want.
And I can’t convert the connecting 3D Nulls to 2D space using the .toComp([0,0,0]) expression with the “Beam” effect, because that would get rid of the perspective.
Comparison attached: