A couple of things to keep in mind. If the scene is large, it may not fit in GPU RAM, and then it simply won’t render, so back you go to CPU rendering, which usually has more machine RAM to work with plus disk swap space – in theory, it’s unlimited.
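The fallback logic above can be sketched in a few lines. This is a hypothetical illustration, not any real engine’s API – the function name and thresholds are made up for the example:

```python
# Hypothetical sketch of the GPU-vs-CPU fallback described above.
# Function name and sizes are illustrative, not from any real render engine.

def choose_render_device(scene_bytes, vram_bytes):
    """Pick a device: GPU only if the whole scene fits in VRAM.

    CPU rendering can spill past physical RAM into disk swap, so we
    treat it as effectively unlimited and use it as the fallback.
    """
    if scene_bytes <= vram_bytes:
        return "gpu"
    # Scene too big for the card: back to the CPU and system RAM (+ swap).
    return "cpu"

# A 30 GB scene on a 24 GB card gets bounced back to the CPU:
print(choose_render_device(30 * 2**30, 24 * 2**30))  # prints: cpu
```

(Some GPU engines do offer out-of-core rendering that softens this limit, but the basic fit-in-VRAM constraint is the rule of thumb.)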
If you poll the people on the web who use C4D professionally, I think you will find a lot of Threadrippers – which means more and faster CPU cores. To be fair, those who can afford Threadrippers generally have a few GPUs chained together too. And while they do a huge amount of their rendering on the GPU, they still rely on that Threadripper a significant amount. That can’t be a coincidence (or a waste of money). There are also limits to how many GPUs you can tie together efficiently.
Depending on your render engine, you may also find that, thanks to sampling and optimizations tailored to the task, the CPU engine runs faster even though it has fewer buckets to throw at the frame. And check the denoiser you might be tempted to use – it may engage the CPU even at the end of a GPU run, and the denoiser can take longer than the render itself.
Some of the third-party engines are both GPU and CPU, but a few of them aren’t really mature yet on the GPU side. In the trade, GPU is often used for look-dev, and then the file is thrown against a bunch of CPUs for final output.
The Standard and Physical engines both use the CPU, so again, seeing lots of buckets will depend on how many threads your CPU has. And there is nothing wrong with these engines. They have gotten a bad rap because they lack the progressive mode that is so helpful in look-dev, but they are excellent render engines and can render realistically if you know what you are doing.
There are also some issues with particular flavors of GPU, and depending on your platform (like Mac) you may be stuck using the CPU for what you need (which also depends on which render engine you pick). There are also compatibility issues with OpenGL versions and OpenCL. A modern Nvidia card should have no issues, but an older one or an AMD might run you into a brick wall on the GPU side of things. Not saying they will – just that your chances of hitting an incompatibility are higher. Also consider what your compositing engine uses – might as well have a machine that does both well.
Coding iterative and recursive sims (like evolving particles) for a GPU is really hard at the moment, so my guess is that Insydium’s tools are CPU-weighted – the render might not need to be, but I’ll bet the simming part is.
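A toy example of why such sims resist GPU parallelism: each frame depends on the previous one, so the time axis is inherently serial, even if the per-particle math inside one step could be parallelized. This is just an illustrative sketch, not Insydium’s actual code:

```python
# Toy illustration: an iterative particle sim where frame N+1 needs
# frame N's result, so the outer loop cannot be split across threads.

def step(state, dt=0.1, gravity=-9.8):
    """Advance every particle one time step (simple Euler integration).

    The work *inside* a step is embarrassingly parallel (one particle
    per thread), but the steps themselves must run in order.
    """
    return [(y + vy * dt, vy + gravity * dt) for (y, vy) in state]

state = [(10.0, 0.0), (5.0, 2.0)]  # (height, velocity) per particle
for _ in range(100):               # this loop is the serial bottleneck:
    state = step(state)            # frame N+1 needs frame N's result
```

The inner list comprehension is what a GPU is good at; the outer loop is what it can’t help with, which is why the simming tends to lean on CPU.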
Ironically, the answer usually comes down to budget rather than what would be best. Threadrippers aren’t cheap, and multiple GPUs can set you back as well. Building the “ideal” machine can break the bank, so you often need to shoot for a cheaper “sweet spot”. Consider also smaller, cheaper machines on a network that can work on your frames simultaneously to give you faster turnaround. More cheap CPUs spread around can often cost less per core than one big monster.
Let us know what you end up with.
(I love the “faster single core GPU” from Maxon. By “single”, do you suppose they mean “thousands”? I think you are right in your assumption there.)