Forum Replies Created

Page 12 of 13
  • Teddy Gage writes:
    > Ian, Quadro cards are not faster than consumer gaming cards. …

    Nope, that’s wrong. Whether it’s the hw or the drivers, the
    simple fact is that performance for most pro apps is far and
    away quicker with a Quadro. Otherwise, care to explain my
    Viewperf results?

    https://sgidepot.co.uk/misc/viewperf.txt

    > In fact many are more or less the exact same hardware
    > as much older gaming cards. I think these benchmarks,
    > being a raw test of computing power, prove this.

    Not necessarily much older, they just have a lot fewer
    cores (deliberately so).

    > What you are paying for in the quadro cards is:
    >
    > – better drivers

    Exactly, and optimised drivers too, which leads to
    better performance for pro apps, because games use functions
    which pro apps don’t need at all, and vice versa. The sw
    optimisations are critical. For similar reasons, Quadro
    cards are terrible for gaming.

    > but they are definitely not “faster”.

    My results prove otherwise. I’ve seen this argument rage
    so much on different forums, but the numbers don’t lie.

    > IMO the GTX Titan is actually the sweet spot between CUDA

    Titan is dreadfully overpriced and deliberately crippled,
    just like the 780. Both cards could easily be massively
    faster than they are, if given a quicker mem bus (512-bit),
    but NVIDIA won’t do that because they know it would eat into
    Tesla sales.

    > … If you can find one, the GTX 580 is also a great deal.

    See my earlier post, I have four. 😀

    > … People believe they are faster because they have
    > paid thousands of dollars for the quadro name.

    😀

    One of my Quadro 600s only cost me 25 UKP. It’s 50% faster
    than a GTX 580 for Catia/Lightwave, about the same for Maya,
    more than 2X faster for SW, 4X faster for SNX, 5X faster for
    ProE (which is CPU-bound anyway), and 14X faster for TCVis.
    A Quadro 4K leaves the 580 in the dust.

    Ensight is the exception. Gamer cards do well for this in
    raw performance terms.

    Ian.

    SGI Guru

  • Paul, don’t worry, that’s perfectly normal for a couple of Quadro 4000s.
    Testing one Quadro 4000 with my 5GHz 2700K gave 17 mins 53 secs, but this
    dropped to only 8 mins 35 secs with the addition of just one GTX 460. Quadro
    cards are much faster than gamer cards for most pro apps (Ensight being
    the exception) because of optimised drivers, etc., but they don’t have
    that many cores for CUDA.

    Ian.

    SGI Guru

  • What do you mean slow? Instead of using Classic3D (which
    does not produce a comparable result anyway), switch the
    processing to CPU-only and see how long it takes – then
    you’ll see slow. 😀

    Remember this is supposed to be a GPU test. I don’t see
    the relevance of discussing Classic3D results.

    NOTE: in time, it’s likely your image links will no longer
    work. I recommend including text in your post to summarise
    the processing times. Don’t rely on image inclusions.

    Ian.

    SGI Guru

  • Ian Mapleson

    June 17, 2013 at 11:35 am in reply to: New After Effects PC

    Hey Mark, how did you get on in the end? Did it all go ok?
    I hope so!

    Ian.

    SGI Guru

  • My goof!! Brain not engaged – saw your gfx listed as a 680,
    didn’t realise you were a different poster. 😀

    Apologies…

    Ian.

    SGI Guru

    You’ve already posted the GPU result? I think that just
    highlights my point even more. 😀 Posting a Classic3D
    time as well is just going to confuse people. Kinda
    meaningless too, since it doesn’t produce the same output.

    Ian.

    SGI Guru

  • Check the title of the thread – it’s GPU accelerated results
    that people are expecting to be posted here, i.e. a CUDA test.
    Classic3D uses the main CPU.

    Ian.

    SGI Guru

  • That doesn’t sound right – 78s with a single 680?? An earlier
    post gave more like 7 mins for one 680 card. 78s is like 2X
    faster than a Titan.

    Can you post more details of your system please? Perhaps a
    screenshot from GPU Shark? Or a CPU-Z submission? If you’re
    somehow getting magic speed from a 680, I’m sure others would
    love to know how.

    Ian.

    SGI Guru

  • Teddy Gage writes:
    > You’re insane! …

    In today’s world I’ll take that as a compliment. 😀

    These are the four cards I bought btw:

    https://cgi.ebay.co.uk/ws/eBayISAPI.dll?ViewItem&item=151052818493
    https://cgi.ebay.co.uk/ws/eBayISAPI.dll?ViewItem&item=130916895940
    https://cgi.ebay.co.uk/ws/eBayISAPI.dll?ViewItem&item=200925198261
    https://cgi.ebay.co.uk/ws/eBayISAPI.dll?ViewItem&item=171044128930

    Total cost: 530.40 UKP. Reasonably good value I reckon; a little more
    than half the cost of a Titan yet quite a bit quicker even with just
    3 cards. Power consumption probably sucks of course (not checked yet),
    but then that’s the tradeoff between several cheap old used cards and
    fewer expensive new ones. However, I only bought these for AE/CUDA research and
    general 3D benchmarking, so power consumption doesn’t really matter atm.

    I also won a 3GB GTX 580 which I’ll be sending to someone to upgrade
    the AE system I built for them back in Feb (see this thread).

    > … Nice results on the 3x gtx 580, …

    Thanks!!

    > … that’s the fastest render recorded so far. …

    It is? I’m surprised. Nobody here with two Titans? That ought to
    beat three 580s. Speaking of multiple Titans, have a look at this:

    https://www.randomcontrol.com/arionbench

    Anyone know what kind of systems they’re using which can hold that
    many GPUs? Or are they using water cooling so as to only use single
    slots? Either way, talk about OTT…

    > … but I’m surprised to see the gains offered by a third card
    > are pretty modest. …

    Doesn’t surprise me TBH, I’ve seen this effect before. Of course
    one shouldn’t expect more than a one-third improvement over two
    cards anyway, but just like going from 2-way to 3-way SLI, the
    gains are often less due to the extra overhead processing required.
    Indeed, for some types of render in AE (those involving a lot of
    particles, or scenes that are not so optimally constructed), one
    GPU can actually render faster than 2+ GPUs (bad GPU thrashing occurs).
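
    To make the diminishing-returns point concrete, here’s a minimal
    Amdahl’s-law style sketch. The serial fraction (frame setup, PCIe
    transfers, compositing overhead) is an assumed illustrative value,
    not something measured from AE:

```python
# Sketch: why a 3rd GPU adds less than the 2nd (Amdahl's-law style estimate).
# The serial fraction below is an assumed illustrative value, not measured.

def speedup(n_gpus, serial_fraction):
    """Best-case speedup when only part of the work splits across GPUs."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_gpus)

s = 0.15  # assumption: 15% of each render can't be parallelised
for n in (1, 2, 3, 4):
    print(f"{n} GPU(s): {speedup(n, s):.2f}x")
```

    Each extra GPU buys a bit less than the one before it, which matches
    the pattern people are seeing in the posted render times.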

    Games show similar effects – unless the drivers have game-specific
    optimisations, 3-way SLI can often be slower or more erratic than
    2-way, and even when 3-way does work ok, jumping to 4-way SLI can
    be abysmal. Experimenting with the different SLI rendering modes
    then becomes necessary, which is a pain. At least rendering in AE
    doesn’t need SLI mode to be active. Similar effects plague the use
    of CF for games.

    > … Although if it comes to rendering long projects it could be handy, …

    That’s true, on a long render the speedup will be significant; useful for
    looming deadlines, etc. 8)

    Since a system can’t really be used while a render is in progress, I
    reckon the optimal setup would be one system designed for strong
    interactive performance (single Titan or whatever), plus a separate
    system with as many powerful GPUs as possible – e.g. an Asrock X79
    Extreme11 with seven water-cooled 1-slot 3GB 580s (or Titans) would
    be good, but not cheap. 😀

    Of course that doesn’t help CPU-limited tasks like Classic3D render.
    Stepping up from a well oc’d 3930K is tricky; multi-socket is costly,
    while a compatible 8-core XEON for a 1-socket board has a much lower
    base clock and thus less oc potential (3930K is probably faster overall).
    Hmm, anyone know of a good quad-socket board? I doubt those offer much
    in the way of oc’ing functionality though.

    > … I think 2x GTX 580 SLI is best price point to performance ratio.

    Note that SLI mode is not necessary for AE. I tested two GTX 280s
    with a different scene (takes about 5.5 mins); the render time was
    only 0.004% different for SLI vs. no-SLI.

    Ian.

    SGI Guru

  • Using one to three GTX 580 1.5GB cards, all set to 800/2010/1600 core/RAM/shader
    (these cards will run at over 900 no problem, but not on this mbd, there isn’t
    enough room for proper cooling):

    3x 580/800: 2 mins 55 secs
    2x 580/800: 3 mins 31 secs
    1x 580/800: 5 mins 36 secs
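
    For what it’s worth, the scaling in those times can be quantified
    with a few lines of Python (a quick sketch, using the results above
    converted to seconds):

```python
# Speedup and per-card efficiency from the render times posted above.
times = {1: 5 * 60 + 36, 2: 3 * 60 + 31, 3: 2 * 60 + 55}  # cards -> seconds

base = times[1]
for n, t in sorted(times.items()):
    gain = base / t   # speedup vs. a single card
    eff = gain / n    # how much of each card's potential is used
    print(f"{n}x 580: {gain:.2f}x speedup, {eff:.0%} efficiency")
```

    That works out to roughly 1.59x for two cards and 1.92x for three,
    i.e. per-card efficiency drops from about 80% to about 64% – the same
    diminishing-returns effect discussed earlier in the thread.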

    System:

    ASUS Maximus IV Extreme
    i7 2700K @ 5.0GHz
    Thermalright Venomous-X with 2x Coolermaster Blademaster fans
    32GB DDR3/2133 CL9 (GSkill TridentX 2400 4x8GB kit)
    1kW Thermaltake Toughpower PSU

    CPU-Z: https://valid.canardpc.com/2829919

    I’ll test with 4x 580 later, using two other motherboards with different
    CPUs: ASUS P9X79 WS + 3930K, and an ASUS P7P55 WS Supercomputer + i7 870.

    I’ll also be testing with one to four GTX 460s, and retesting with 3x 580
    using a different board which will permit better cooling and thus the cores
    increased to about 900 or so (Asrock X58 Extreme6 with a XEON X5570).

    Ian.

    SGI Guru
