Forum Replies Created

Page 9 of 13
  • Tenchi writes:
    > No, i have all on default C: (clean install) all “temps”
    > for AE & AP is drive c:

    That’s not the recommended setup. The AE cache should be
    on a separate fast device, preferably an SSD.

    I also have a separate 64GB SSD for the Windows paging file.

    > The 5 sek. is not a second run (of cached files)
    > i renders two compositions.

    For this benchmark, it’s just the main render time that’s relevant.

    Ian.

    SGI Guru

  • Tenchi writes:
    > now the time is:
    > 1 min. 29 sek

    That’s better, although still slower than what I would have
    expected from three Titans.

    > + 5 Sek.

    What is this second time for? No point running the test
    twice, it will all just be cached data.

    > So HDD/SSD influence this test.

    Hmm, maybe I should try again then; I was writing to
    a mechanical-drive RAID1.

    > BTW: nice card the MSI Xtreme Lightning i had this card
    > too, i loved it!

    Indeed. 😀 I’ve obtained five of them so far. One thing
    though, they’re a pain to install when there are more than
    two. The card is more like 2.1 slots wide, not 2 slots.
    I’ve had to use spacing pads to keep them apart, otherwise
    the fan blades clash.

    I don’t quite understand your 2nd time; do you have a
    separate SSD for the AE cache? You should do.

    Ian.

    SGI Guru

  • Teddy writes:
    > Ian, I have a new suite of total benchmarks, not just GPU. Would love to see
    > your results on these. …

    I’m still using the normal CS6 11.0.4; am I right in assuming your new suite needs AE CC?

    > I am no longer supporting this outdated benchmark, although of course you are free to use it.

    Thanks! It has certainly been useful, though I’ve been working with someone
    on creating something a lot more complicated and better able to exploit multiple
    GPUs: one frame takes about 10 minutes to compute with three 580s, while the full
    animation takes many hours to render (tomshardware will be using the scene file
    for their CUDA tests when it’s ready).

    > PS. What on earth are you using 4 GTX 580s for? Bitcoin mining?

    Mainly research into performance issues with AE and other computational
    benchmarking experimentation. It’s a clone of a system I built for someone
    a year ago, though better setup in some ways with lessons learned. Here’s
    an up to date CPU-Z:

    https://valid.x86.fr/r9ibvb

    The CPU oc isn’t finished yet though, haven’t done the final tweaks or evaluated
    the max speed (it was set to 4.7 with the old cooler, a Phanteks PH-TC14PE; new
    cooler, now in a different case, is a Corsair H110).

    Ian.

    SGI Guru

  • Tenchi, am I reading that correctly? 2 mins and 3 seconds? Strange, I thought
    it would be a lot quicker with three Titans…

    Anyway, I’ve finished upgrading my 3930K system, it now has four identical
    MSI GTX 580 3GB Lightning Xtreme cards; at stock core speed of 832MHz I get
    1 min and 40 secs:

    Oc’ing the cards doesn’t help much, e.g. at 900MHz the time only drops to 1m 35s.

    Ian.

    SGI Guru

    Thanks for the result!! I’m looking forward to hearing how
    the test scales with the extra Titans. Also, can you run the
    ArionBench test as well? Both CPU & gfx? It would be interesting
    to see how CPU performance differs from the Xeon 2697 once you
    have it installed.

    My system has the older ASUS P9X79 WS (3930K @ 4.7, 4x GTX 580
    3GB). How have you found your newer E-WS board in terms of
    setup and usage? Any issues?

    Ian.

    SGI Guru

  • And remember to clear all memory & disk caches before
    running the test. Also make sure RayTrace3D is turned on.

    Ian.

    SGI Guru

  • Most likely it’s being cached somehow, or it’s rendering in Classic 3D Mode, something
    like that. Just go through the settings, make sure each is as it should be, and of
    course ensure all caches are cleared before starting the test (media, disk & RAM).

    Ian.

    SGI Guru

  • Steven Andrus writes:
    > Dual Xeon e5-2690 at 2.9ghz (speed step kicks it up to 3.2 or something

    Yeah, it does that on my Dell T7500 too; it usually stays one bin above the baseline.

    > I think but I didnt see the render really hit the cpus at all so i think

    It’ll barely touch the CPUs at all, being a CUDA test.

    Real-world datasets can hammer the CPU(s) as well at times, but not this
    test; it’s pretty simple & repetitive.

    > Single asus titan 6gb

    Just FYI, a 780Ti should be quicker.

    > Honestly we should be running this at 4k though. I’ll look into
    > converting the file into 4k and link it if I do.

    Not sure it’s worth doing. All it would do is quadruple all the running
    times, except on systems where the GPU’s RAM suddenly becomes an issue,
    which would be even slower, even though such cards aren’t fit for 4K anyway.
    Running it at 4K wouldn’t reveal anything new.
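    The “quadruple” figure is just pixel count, assuming the benchmark comp
    is 1920x1080 (my guess, not a stated detail of the test):

    ```python
    # UHD "4K" vs. 1080p: four times the pixels, so roughly 4x the
    # per-frame work for a purely pixel-bound renderer.
    hd = 1920 * 1080    # assumed benchmark resolution
    uhd = 3840 * 2160
    print(uhd / hd)     # 4.0
    ```
    
    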

    This test is interesting as a means of testing one narrow performance
    aspect of AE (namely CUDA on a small dataset that doesn’t hit CPUs, RAM
    or I/O), but for me it’s thrown up questions for which I can’t find
    answers, e.g. is it possible to force AE to use multiple GPUs round-robin
    for frame rendering? AFAIK atm the app always tries to use all available
    GPUs at the same time for every frame, which often scales very badly
    indeed (extremely badly in some cases). Look at my multi-580 results for
    a good example (see earlier posts): with more than 2 cards, the
    utilisation of each GPU drops off sharply, so four 580s is
    barely any better than three. Performance would be much better if the
    frames could be rendered 1-frame-per-GPU, so with 4 GPUs the first GPU
    would render frames 1, 5, 9, etc. I can’t see any setting for this in the
    Settings panel though.
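    There’s no in-app setting for it, but in principle round-robin could be
    faked outside AE: launch one aerender process per GPU, using aerender’s
    documented `-s`/`-e`/`-i` (increment) flags so each instance renders every
    Nth frame, and pin each process to a card via `CUDA_VISIBLE_DEVICES`.
    Whether AE’s ray-tracer actually honours `CUDA_VISIBLE_DEVICES` is an
    assumption, and the project/comp names below are placeholders — this is a
    dry-run sketch, not a tested workflow:

    ```python
    # Sketch: approximate 1-frame-per-GPU rendering by running one aerender
    # instance per card. Each instance starts on a different frame and skips
    # ahead NUM_GPUS frames via -i, so GPU 0 gets frames 1, 5, 9, ...
    NUM_GPUS = 4
    START, END = 1, 240  # frame range of the comp (placeholder)

    def build_jobs(project, comp, output_pattern):
        jobs = []
        for gpu in range(NUM_GPUS):
            cmd = [
                "aerender",
                "-project", project,
                "-comp", comp,
                "-s", str(START + gpu),  # each GPU starts one frame later
                "-e", str(END),
                "-i", str(NUM_GPUS),     # ...and renders every NUM_GPUS-th frame
                "-output", output_pattern,
            ]
            env = {"CUDA_VISIBLE_DEVICES": str(gpu)}  # pin process to one GPU
            jobs.append((cmd, env))
        return jobs

    if __name__ == "__main__":
        for cmd, env in build_jobs("bench.aep", "Main", "out_[####].png"):
            print(env, " ".join(cmd))  # dry run; swap print for subprocess.Popen
    ```

    Even if it works, output numbering and audio would need stitching back
    together afterwards, so it’s only worth it where per-frame scaling is as
    bad as the multi-580 numbers suggest.
    
    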

    The other question is, does AE ever make use of 64-bit CUDA? That’s the
    only real advantage of the Titan. Are you running the Titan in 64-bit mode?
    If not, try it in 64-bit mode and see what happens, though that would
    probably only reveal whether this particular test gains from 64-bit mode,
    not whether AE uses it in general, and if so then to what degree. If AE
    doesn’t need 64-bit CUDA, then (except for the lack of ECC RAM) the best
    value CUDA card atm for AE is the 780 Ti, unless somehow one is running
    up against the card’s 3GB RAM limit. Most likely though, some vendors will
    eventually release 6GB 780 Ti models.
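    The fp64 gap is why the question matters. Going by the spec sheets (core
    counts and base clocks below are from published specs, and the 1/3 rate
    on the Titan assumes the driver’s double-precision toggle is enabled),
    the rough peak numbers look like this:

    ```python
    # Peak-FLOPS arithmetic: Titan runs fp64 at 1/3 of fp32 rate (with the
    # driver toggle on); the 780 Ti is capped at 1/24. 2 FLOPs/core/cycle (FMA).
    def peak_gflops(cores, mhz, fp64_ratio):
        fp32 = cores * mhz * 2 / 1000.0
        return fp32, fp32 * fp64_ratio

    titan = peak_gflops(2688, 837, 1 / 3)   # GTX Titan, base clock
    ti780 = peak_gflops(2880, 875, 1 / 24)  # GTX 780 Ti, base clock
    print(f"Titan : {titan[0]:.0f} GFLOPS fp32, {titan[1]:.0f} GFLOPS fp64")
    print(f"780 Ti: {ti780[0]:.0f} GFLOPS fp32, {ti780[1]:.0f} GFLOPS fp64")
    ```

    So the 780 Ti is ahead on fp32, but the Titan is roughly 7x faster on
    fp64 — which only matters if AE ever issues double-precision CUDA work.
    
    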

    The final question is whether AE would benefit from the full-speed PCIe
    return paths found in Tesla cards, along with the better GPU cache structure &
    other additional features. Is AE even coded to make use of these? Who
    knows – there’s nothing on the Adobe site about this.

    A friend of mine is working on a more real-world dataset, a 30 second
    animation which atm takes about 2 hours with a couple of 580s. It hammers
    the whole system and so is a good general test, including system stability.
    Not ready yet though.

    Ian.

    SGI Guru

  • Sweet!! 8)

    Note that I doubt the PCIe 2 vs. 3 config makes any difference
    for this particular test.
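    Back-of-the-envelope reasoning: even a worst-case 32-bit-per-channel
    1080p frame (my assumption about the comp, not a stated detail) is only a
    few milliseconds of bus time either way, against a render measured in
    minutes:

    ```python
    # Transfer time for one full-float RGBA 1080p frame over x16 links,
    # using typical usable bandwidth figures for each PCIe generation.
    frame_bytes = 1920 * 1080 * 4 * 4   # RGBA, 4 bytes/channel (~33 MB)
    pcie2_x16 = 8.0e9                   # ~8 GB/s usable, PCIe 2.0 x16
    pcie3_x16 = 15.75e9                 # ~15.75 GB/s usable, PCIe 3.0 x16
    for name, bw in (("PCIe 2.0", pcie2_x16), ("PCIe 3.0", pcie3_x16)):
        print(f"{name}: {frame_bytes / bw * 1000:.1f} ms per frame")
    ```

    A ~2 ms saving per frame is noise next to render times of 1–2 minutes,
    which is why the slot config shouldn’t show up in this benchmark.
    
    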

    So, now all we need is a result from someone who has one
    or more 780 Tis (should in theory be quicker than Titans).

    Ian.

    SGI Guru

  • Excellent!! Now we’re talking! 8) Is that tri-Titan time
    done with the Titans running at stock speeds?

    Ian.

    SGI Guru

