December 10, 2021 at 7:09 pm
As a rule of thumb, a larger scene is safer in C4D than a smaller one. There is no real scale per se in any 3D package – in theory the 3D world is infinitely big or small – but when making a small scene, as you have found, you have to start using numbers down in the 0.0001 range. The problem here is that C4D’s math can only carry so many decimal places before it starts rounding off. The large-scale cutoff (64 bits) is farther away than the small-scale cutoff (a “quintillion” I think for large scale and less than half that for small).
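The rounding Steve describes is ordinary floating-point precision at work. A quick sketch (illustrative only – this is NumPy standing in for C4D’s internal math, not C4D’s actual code) shows why sub-0.0001 details vanish once coordinates get large:

```python
# Illustration (not C4D's internals): single- vs double-precision
# rounding when scene details get very small. A tiny offset on a
# coordinate that is already large is silently rounded away.
import numpy as np

base = np.float32(1000.0)    # an object sitting 1000 units from the origin
tiny = np.float32(0.00001)   # a sub-precision detail on that object

# At 32-bit precision the tiny offset rounds away entirely:
assert base + tiny == base

# At 64-bit precision it survives:
assert np.float64(1000.0) + np.float64(0.00001) != np.float64(1000.0)
```

The same principle applies whichever precision the package uses internally – there is always a scale at which small offsets fall below the rounding threshold.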
Even though you are trying to make a miniature scene, there’s no reason to actually make it miniature. Look at any tilt-shift lens shot (I’ve added one below) – it can make even an industrial port, where the scale is mind-boggling, look like a miniature diorama due to DOF. In other words, the camera doesn’t care how big or small the scene is. A 28mm lens will photograph a large scene or a small scene the same; you will just have to move the camera closer to the small scene. (In the real world this isn’t quite true due to lens aberrations, but it’s a close enough statement for “perfect” 3D cameras.) Just be mindful that as you scale your scene, your depth of field needs to scale too.
So if you have a city block you are filming, your depth of field “window” might, let’s say, start 10 feet in front of the subject and end 20 feet behind. Everything between is in focus (more or less) for a set film-back and aperture. Now make that city block a scale model that fits on a tabletop and use the exact same camera with the same settings – because you have had to move the camera in close to frame the shot the same way, your depth of field is now reduced, so you will need a smaller aperture to keep that same “scaled” focal range. Think of it this way: because you scaled the world, you also need to make the hole in the lens smaller. This is why we can shoot a full-sized scene and then put a miniature in the shot seamlessly. As long as you adjust that iris, the camera doesn’t care.
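The numbers behind this are easy to check with the standard thin-lens approximation (a rough sketch – the circle-of-confusion value and the approximate DOF formula are my assumptions, valid when the focus distance is well inside the hyperfocal distance):

```python
# Rough thin-lens sketch of why a scaled-down scene needs a smaller
# aperture: total DOF ~ 2 * N * c * d^2 / f^2, so it shrinks with the
# SQUARE of the focus distance, faster than the scene scale itself.
def total_dof(focal_mm, f_number, focus_dist_mm, coc_mm=0.03):
    """Approximate total depth of field in mm (assumed 0.03mm CoC)."""
    return 2.0 * f_number * coc_mm * focus_dist_mm**2 / focal_mm**2

full_size = total_dof(28, 8, 10_000)   # city block ~10 m away, 28mm at f/8
miniature = total_dof(28, 8, 100)      # 1:100 tabletop, same lens and f-stop

# At 1:100 the focus window is 1/10,000th as deep -- far less than the
# 1/100 the scene scale calls for. Stopping down (a higher f-number,
# i.e. a smaller hole) is what buys that focal range back.
assert abs(miniature / full_size - (100 / 10_000) ** 2) < 1e-9
```

That square law is exactly why the iris has to close down when the camera moves in on the miniature.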
As for multi-pass renderings and final-ing out of C4D, the big shops never do the latter. Running an FX house is tough financially – unless your pipeline is super efficient you will go the way of R&H or GVFX or DD (before it was saved) and a host of others. And we all realized that doing test after test in 3D was a terribly inefficient way to get a final look (and terrible for the director sitting in the next chair). So everything goes to compositing in layers. It’s so much faster to tweak the shot in real time there than in a slow render engine. You can even change the lighting!
Now things have changed a bit with these new IPR render engines (Redshift, Arnold, Octane, etc.), so you can get much closer in 3D faster, but still not in real time and not at full rez. So the multi-pass pipeline is still very valid. However, unlike the Physical render engine, these third-party renderers can do depth of field very fast and on multiple cores or GPU cores – instead of 64 CPU buckets lighting up on a 4K shot, it does my heart good when the whole screen is filled with CUDA-core buckets all crunching at the same time. Priceless!
There is nothing wrong with the quality of the Physical render engine, but there is a terrible price to pay in speed (although you can do the POS pass in the Standard render engine too and then use all your cores). The motion blur in the Standard renderer, though, doesn’t hold a candle to the motion blur in the Physical.
But even here, we never use 3D motion blur – it just takes too long. (Back in the day, Electric Image had a real-time motion blur in 3D, but the guy who wrote the code left the company and they never figured out how he did it, so that science has been lost to time.) You can render out a vector pass and use that in post to make your motion blur in real time, in the same way we are doing DOF in your example.
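The vector-pass idea can be sketched in a few lines – this is a toy gather-style blur (my own simplified version, not what any particular plug-in actually does): for each pixel, average samples back and forth along its motion vector.

```python
# Toy vector-pass motion blur: each pixel gathers samples along its
# per-pixel motion vector and averages them, smearing moving edges.
import numpy as np

def vector_blur(img, vec, samples=8):
    """img: (H, W) grayscale; vec: (H, W, 2) per-pixel motion in pixels."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    acc = np.zeros_like(img, dtype=float)
    for i in range(samples):
        t = i / (samples - 1) - 0.5   # sample from -0.5 to +0.5 of the vector
        sy = np.clip(np.round(ys + t * vec[..., 1]).astype(int), 0, h - 1)
        sx = np.clip(np.round(xs + t * vec[..., 0]).astype(int), 0, w - 1)
        acc += img[sy, sx]
    return acc / samples

# A hard vertical edge, with everything moving 6 px to the right:
img = np.zeros((8, 8)); img[:, 4:] = 1.0
vec = np.full((8, 8, 2), [6.0, 0.0])
blurred = vector_blur(img, vec)
assert 0.0 < blurred[0, 4] < 1.0   # the hard step is now a soft ramp
```

Because this runs per pixel on an already-rendered frame, it is effectively real time compared with re-rendering true 3D motion blur.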
December 13, 2021 at 11:28 am
Man, this is the kind of empirical wisdom I wish I would always get when I come here with an issue! Clearly you’ve been through these 3D existential doubts before and to get a glimpse of your workflow is much more valuable than just getting a quick solution or workaround for one specific issue.
I don’t do much photo-realistic work (just play around with it for my own amusement), so I’ve yet to be able to justify the expenditure of getting Octane, Redshift, etc. But every time I watch a tut that uses them I want them!
This weekend I had a lull in my work, so I decided to go ahead and update to C4D R25. As I feared, they’ve changed things around quite a bit! So it’ll take me a couple of weeks of slower workflow before I get the hang of stuff. But I’m hoping there are some improvements.
December 17, 2021 at 12:01 am
Steve, can I further pick your brain? What do you do in terms of the POS pass when you use tele cameras? When working with a sort of isometric aesthetic, I find that the only thing that gives me control is to use a camera from 135mm up. And when I really need to crank it up in order to get a very flat effect, there’s no way I can get the POS pass to work correctly…
December 17, 2021 at 7:40 am
Hey, sorry – been on a deadline before we shut down for Covid again.
Will get you that AE file shortly.
As for lens lengths – the longer the lens, the shallower the DOF you get for a given aperture, so you need less extreme aperture values to get results. But you can get extreme DOF with any length of lens (these are virtual lenses, after all, so you can break physical rules). Setting the aperture to f/0.1 or lower is not unheard of to get good bokeh with a sub-100mm lens.
There is no real-world lens with that kind of f-number (I think f/0.95 holds the record right now), but there’s no reason you can’t make the math in 3D work for you.
So that’s for a true DOF workflow.
For POS passes, don’t forget you can use a non-perspective camera: there are Parallel, Isometric (probably this one), and Dimetric cameras selectable in the Object tab of the camera object. They can be a little weird to work with because no matter how close you get them to the object, the object doesn’t get any bigger – it’s all about the zoom in the viewport. (In Arnold they are called ortho cameras.)
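That “moving closer doesn’t zoom” behavior falls straight out of the projection math. A minimal sketch (illustrative function names, not any renderer’s API):

```python
# Why parallel/ortho cameras don't zoom by moving: perspective
# projected size depends on camera distance, orthographic size doesn't
# (only the viewport "zoom" scale enters).
def perspective_size(object_size, focal, distance):
    return object_size * focal / distance   # classic pinhole projection

def ortho_size(object_size, zoom, distance):
    return object_size * zoom               # distance is ignored entirely

near = perspective_size(100.0, 36.0, 500.0)    # camera moved in close
far  = perspective_size(100.0, 36.0, 2000.0)   # camera pulled way back
assert near > far                               # perspective: size changes

# Parallel camera: same apparent size at any distance, only zoom matters
assert ortho_size(100.0, 0.5, 500.0) == ortho_size(100.0, 0.5, 2000.0)
```

Which is also why these cameras give the very flat, isometric look – there is no distance term left to create convergence.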
There should be no difference in the POS values because that’s a spatial calculation separate from the lens. But to be fair, it’s never come up – I’ve just used the lens I need. (And because motion picture film/sensors have a smaller frame size, it’s rare for you to get into long lenses – a 150mm would only be used for extreme telephoto. With the crop factor, depending on sensor size – say a 4-perf film frame – that would be equivalent to about a 250 or 300mm on a 35mm SLR. If it’s one of those kinds of shots – the battlefield kind of zoom-in to find the hero – you are cheating the focus all over the place, so a POS pass wouldn’t even be rendered.)
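The crop-factor arithmetic is just a ratio of frame diagonals (back-of-envelope sketch; the cine gate dimensions below are approximate, not an exact 4-perf spec):

```python
# Crop factor = full-frame diagonal / capture-format diagonal;
# equivalent focal length = lens focal length * crop factor.
import math

def diagonal(w_mm, h_mm):
    return math.hypot(w_mm, h_mm)

full_frame = diagonal(36.0, 24.0)   # 35mm stills frame, ~43.3 mm
four_perf  = diagonal(22.0, 16.0)   # approximate 4-perf cine gate

crop = full_frame / four_perf       # roughly 1.6x
equiv = 150 * crop                  # the 150mm lens from above
assert 230 < equiv < 260            # lands in the ~250mm ballpark
```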
I will give a long lens a test with POS and see what happens.
December 17, 2021 at 8:16 am
I just thought of something re: your blooming edges – the POS pass must be rendered without an alpha.
And, are you working in a 16-bit or 32-bit AE project? These two items are essential to hold all the data from the EXR.
And (I’m just going from memory here, looking for things that might be tripping people up) – don’t you have to interpret the POS pass footage in AE as having no color management, or set it to 32-bit linear?
Also, in the POS pass settings you should not be using world space but camera space – this way the Z depth recorded in the blue channel of the EXR is always measured away from the camera, even if the camera is pointing left, right, up, or down.
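The reason camera space behaves that way: transforming a world position by the inverse of the camera matrix gives a depth measured along the view axis, whatever direction that axis happens to point. A rough sketch with a pure rotation (illustrative math, not C4D’s API):

```python
# Camera-space depth is independent of camera orientation: the inverse
# camera transform re-expresses a world point along the view axis.
import numpy as np

def to_camera_space(world_pt, cam_pos, cam_rot):
    """cam_rot: 3x3 world-from-camera rotation; returns camera-space xyz."""
    return cam_rot.T @ (world_pt - cam_pos)   # inverse of the camera transform

# Camera at the origin, turned 90 degrees to look down +X instead of +Z:
rot_y90 = np.array([[0.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0],
                    [-1.0, 0.0, 0.0]])

pt = np.array([10.0, 0.0, 0.0])   # 10 units dead ahead of this camera
cam = to_camera_space(pt, np.zeros(3), rot_y90)

# Camera-space Z (the blue channel of the POS pass) is still 10,
# even though the point's WORLD Z is 0:
assert abs(cam[2] - 10.0) < 1e-9
```

In world space that same point would write 0 into the blue channel, which is exactly the kind of thing that breaks a DOF comp the moment the camera pans.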