Forum Replies Created

Page 1 of 5
  • Evan Seitz

    July 29, 2018 at 12:42 pm in reply to: Controlling a User Data Slider via Xpresso

    Fascinating, thank you so much for elucidating such nuances! I would never have reasoned priority was passed in such a way. Along with fixing my problem, this really helps my understanding of C4D’s logical structure; couldn’t be more pleased 🙂

  • Evan Seitz

    July 27, 2018 at 8:50 pm in reply to: Controlling a User Data Slider via Xpresso

    Actually I’m still getting a weird result… my object is taking on random values (between 0 and 20), but not the actual random numbers shown in the slider (also between 0 and 20). For example, the slider runs from 0 to 20, with the object as open as possible at 20 and as closed as possible at 0. The slider shows a new value on each frame, but that value doesn’t match the actual width of the opening seen. Easy to see, hard to explain – so I’ve attached my scene file in hopes that someone knows what’s going on here.

    The reason I need this to be consistent is that I’ll eventually need the actual random numbers used over the shown frames printed to a list, and if I take the ones currently being shown, they won’t match the actual geometry changes happening in the scene.

    12584_eo1decoupledforum.c4d.zip
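    The reproducibility problem described above – getting a printed list of random values that actually matches the geometry – can be sketched outside C4D: if each frame’s value comes from a generator seeded with the frame number, the same frame always yields the same number, so a list collected later agrees with what drove the scene. A minimal Python sketch of that idea; `frame_value` and `seed_base` are hypothetical names for illustration, not anything from the scene file:

```python
import random

def frame_value(frame, lo=0.0, hi=20.0, seed_base=1234):
    """Return a reproducible pseudo-random value in [lo, hi] for a frame.

    Seeding with the frame number means the same frame always produces
    the same value, so a list printed later matches the values that
    actually drove the geometry. `seed_base` is an arbitrary offset.
    """
    rng = random.Random(seed_base + frame)
    return lo + rng.random() * (hi - lo)

# Collect the values actually used over a frame range into a list:
values = [frame_value(f) for f in range(0, 5)]
```

Because each value is a pure function of the frame number, re-running the collection at any later time reproduces the same list.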

  • Evan Seitz

    July 27, 2018 at 7:52 pm in reply to: Controlling a User Data Slider via Xpresso

    edit: “0,20 input going to 0,20% output”

  • Evan Seitz

    July 27, 2018 at 7:51 pm in reply to: Controlling a User Data Slider via Xpresso

    Never mind, your logical perturbation led me to the answer – I’ve changed the mapper from 0,20 input to 0,20% and that worked! Thank you!!

  • Evan Seitz

    July 27, 2018 at 7:44 pm in reply to: Controlling a User Data Slider via Xpresso

    That mapper is set up as follows, so that the output values match the slider values (0-20%):
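    The unit mismatch behind this thread’s eventual fix can be sketched in plain Python: C4D displays percentage parameters as 0–100%, but internally such parameters are stored as fractions (so 20% is 0.2). A 0–20 slider feeding a percentage parameter therefore needs to land on 0.0–0.2 internally. Below is a minimal sketch of the linear remap a Range Mapper node performs; `range_map` is a hypothetical helper for illustration, not a C4D API call:

```python
def range_map(x, in_lo, in_hi, out_lo, out_hi):
    """Linear remap, mirroring what a Range Mapper node does."""
    t = (x - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# A slider value of 0..20 feeding a percentage parameter:
# C4D stores percentages as fractions, so 20% is 0.2 internally.
internal = range_map(10.0, 0.0, 20.0, 0.0, 0.2)  # midway -> 0.1
```

Mapping 0–20 to a 0–20 *output* instead of 0–20% leaves the value 100× too large once it hits a percent-typed parameter, which is consistent with the fix described above.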

  • That did it! The boundary box defining the Focal Distance of the camera needed to fully contain the geometry for it to render. Amazing help, thank you so much!

  • Thanks so much Brian – I’ve got the render times down much lower now. I’m still not quite getting the stacked-volume look, though, but I’ve posted my progress above if you have time for any thoughts.

  • I’ve been experimenting all day, but still can’t quite get it right. The alpha channels in your last file weren’t adding up right on my end. Sticking with the volume shader idea, I’ve changed my object to an array of cloned spheres of radius 1. In the attached file, if you render from the camera labelled SIDE, there should be a much brighter center (16 points go across the central diameter) than the very top and bottom (only 1 point at each peak) – but these appear to be the same intensity. Ugh – anything you can think of given this setup?

    12538_test3.c4d.zip

  • Thank you for such a detailed response – it’s given me a lot of tools to consider. I’ve attempted the volumetric shader on my object – changing the geometry to a matrix (which is fine for what I’ll be using this for). However, the output looks very faint and I’m a little concerned about render times. I’ve attached the file here if you’d like to take a look.

    12534_test.c4d.zip

  • I think this image explains it best – here we have a 3D object (input) and a black-and-white screen (output). The output image in (A) corresponds to the input object being viewed from the top down.

    The thicker regions of the 3D object (more volume touched by a straight line passing from the camera through to the output screen) have a whiter pixel value, while regions with less (or no) volume have a darker/black pixel value.

    I think no external lights should be needed in such an example – the camera alone should be sufficient, as if it were sending out individual volume sensors along straight lines and simply collecting the output of each line’s scan on a screen.
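    The top-down projection described here is essentially a line integral: each output pixel sums the amount of volume crossed by one straight ray. A minimal NumPy sketch of the idea, using a toy voxel sphere rather than C4D’s volumetric shader; all names here are illustrative:

```python
import numpy as np

# Build a tiny voxel sphere: 1.0 inside the unit ball, 0.0 outside.
n = 32
ax = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
volume = (x**2 + y**2 + z**2 <= 1.0).astype(float)

# "Camera" looking down the z axis: each output pixel is the total
# volume crossed by a straight line, i.e. a sum along that axis.
image = volume.sum(axis=2)
image /= image.max()  # normalize so the thickest path renders white
```

Rendered this way, the centre of the disc (longest chord through the sphere) comes out brightest and the rim fades to black, matching the behaviour described for panel (A).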

